**Imports**
```
import tensorflow as tf
import numpy as np
import rcwa_utils
import tensor_utils
import solver
import matplotlib.pyplot as plt
```
**Loss Function Definition**
```
def loss_func():
    # Global parameters dictionary.
    global params
    # Generate permittivity and permeability distributions.
    ER_t, UR_t = solver.generate_rectangular_lines(var_duty, params)
    # Set the device layer thickness based on the length variable.
    thickness_coeff = tf.clip_by_value(var_length, clip_value_min = params['length_min'], clip_value_max = params['length_max'])
    thickness_coeff = tf.cast(thickness_coeff, dtype = tf.complex64)
    length_shape = (1, 1, 1, 1, 1, 1)
    substrate_layer = tf.ones(shape = length_shape, dtype = tf.complex64)
    device_layer = thickness_coeff * tf.ones(shape = length_shape, dtype = tf.complex64)
    wavelength = params['lam0'][0, 0, 0, 0, 0, 0].numpy()
    params['L'] = wavelength * tf.concat([device_layer, substrate_layer], axis = 3)
    # Simulate the system.
    outputs = solver.simulate(ER_t, UR_t, params)
    # Maximize the reflectance for the first polarization and minimize the reflectance for the second polarization.
    ref_pol1 = outputs['REF'][0, 0, 0]
    ref_pol2 = outputs['REF'][1, 0, 0]
    return -ref_pol1 * (1 - ref_pol2)
```
**Setup and Initialize Variables**
```
# Initialize duty cycle variable and global params dictionary.
params = solver.initialize_params(wavelengths = [632.0, 632.0],
                                  thetas = [0.0, 0.0],
                                  phis = [0.0, 0.0],
                                  pte = [1.0, 0.0],
                                  ptm = [0.0, 1.0])
params['erd'] = 6.76 # Grating layer permittivity.
params['ers'] = 2.25 # Substrate layer permittivity.
params['PQ'] = [11, 11] # Fourier Harmonics.
params['batchSize'] = 2
params['Lx'] = 0.75 * 632 * params['nanometers'] # period along x
params['Ly'] = params['Lx'] # period along y
# Initialize grating duty cycle variable.
var_shape = (1, params['pixelsX'], params['pixelsY'])
duty_initial = 0.4 * np.ones(shape = var_shape)
var_duty = tf.Variable(duty_initial, dtype = tf.float32)
# Initialize grating thickness variable.
length_initial = 1.0
var_length = tf.Variable(length_initial, dtype = tf.float32)
```
**Optimize**
```
# Number of optimization iterations.
N = 100
# Define an optimizer and data to be stored.
opt = tf.keras.optimizers.Adam(learning_rate = 0.003)
loss = np.zeros(N + 1)
# Compute initial loss.
loss[0] = loss_func().numpy()
# Optimize.
print('Optimizing...')
for i in range(N):
    opt.minimize(loss_func, var_list = [var_duty, var_length])
    loss[i + 1] = loss_func().numpy()
```
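As an aside, the same optimization can be written as an explicit training loop with a gradient tape. The sketch below is an equivalent alternative, not part of the original notebook; it assumes the `loss_func`, `var_duty`, `var_length`, `opt`, `loss`, and `N` defined above.
```
# Alternative: explicit gradient computation with tf.GradientTape instead of opt.minimize.
for i in range(N):
    with tf.GradientTape() as tape:
        loss_value = loss_func()
    grads = tape.gradient(loss_value, [var_duty, var_length])
    opt.apply_gradients(zip(grads, [var_duty, var_length]))
    loss[i + 1] = loss_func().numpy()
```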
**Display Learning Curve**
```
plt.plot(loss)
plt.xlabel('Iterations')
plt.ylabel('Loss')
plt.xlim(0, N)
plt.show()
```
# Chicago Crime Prediction Pipeline
An example notebook that demonstrates how to:
* Download data from BigQuery
* Create a Kubeflow pipeline
* Include Google Cloud AI Platform components to train and deploy the model in the pipeline
* Submit a job for execution
* Query the final deployed model
The model forecasts how many crimes are expected to be reported the next day, based on how many were reported over the previous `n` days.
## Imports
```
%%capture
# Install the SDK (only needed if the SDK is not already installed)
!python3 -m pip install 'kfp>=0.1.31' --quiet
!python3 -m pip install pandas --upgrade -q
# Restart the kernel for changes to take effect
import json
import kfp
import kfp.components as comp
import kfp.dsl as dsl
import pandas as pd
import time
```
## Pipeline
### Constants
```
# Required Parameters
project_id = '<ADD GCP PROJECT HERE>'
output = 'gs://<ADD STORAGE LOCATION HERE>' # No ending slash
# Optional Parameters
REGION = 'us-central1'
RUNTIME_VERSION = '1.13'
PACKAGE_URIS=json.dumps(['gs://chicago-crime/chicago_crime_trainer-0.0.tar.gz'])
TRAINER_OUTPUT_GCS_PATH = output + '/train/output/' + str(int(time.time())) + '/'
DATA_GCS_PATH = output + '/reports.csv'
PYTHON_MODULE = 'trainer.task'
PIPELINE_NAME = 'chicago-crime-prediction'
PIPELINE_FILENAME_PREFIX = 'chicago'
PIPELINE_DESCRIPTION = ''
MODEL_NAME = 'chicago_pipeline_model' + str(int(time.time()))
MODEL_VERSION = 'chicago_pipeline_model_v1' + str(int(time.time()))
```
### Download data
Define a download function that uses the BigQuery component
```
bigquery_query_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/bigquery/query/component.yaml')
QUERY = """
SELECT count(*) as count, TIMESTAMP_TRUNC(date, DAY) as day
FROM `bigquery-public-data.chicago_crime.crime`
GROUP BY day
ORDER BY day
"""
def download(project_id, data_gcs_path):
return bigquery_query_op(
query=QUERY,
project_id=project_id,
output_gcs_path=data_gcs_path
)
```
### Train the model
Run training code that will pre-process the data and then submit a training job to the AI Platform.
```
mlengine_train_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.2/components/gcp/ml_engine/train/component.yaml')
def train(project_id,
trainer_args,
package_uris,
trainer_output_gcs_path,
gcs_working_dir,
region,
python_module,
runtime_version):
return mlengine_train_op(
project_id=project_id,
python_module=python_module,
package_uris=package_uris,
region=region,
args=trainer_args,
job_dir=trainer_output_gcs_path,
runtime_version=runtime_version
)
```
### Deploy model
Deploy the model with the ID given from the training step
```
mlengine_deploy_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.2/components/gcp/ml_engine/deploy/component.yaml')
def deploy(
project_id,
model_uri,
model_id,
model_version,
runtime_version):
return mlengine_deploy_op(
model_uri=model_uri,
project_id=project_id,
model_id=model_id,
version_id=model_version,
runtime_version=runtime_version,
replace_existing_version=True,
set_default=True)
```
### Define pipeline
```
@dsl.pipeline(
name=PIPELINE_NAME,
description=PIPELINE_DESCRIPTION
)
def pipeline(
data_gcs_path=DATA_GCS_PATH,
gcs_working_dir=output,
project_id=project_id,
python_module=PYTHON_MODULE,
region=REGION,
runtime_version=RUNTIME_VERSION,
package_uris=PACKAGE_URIS,
trainer_output_gcs_path=TRAINER_OUTPUT_GCS_PATH,
):
download_task = download(project_id,
data_gcs_path)
train_task = train(project_id,
json.dumps(
['--data-file-url',
'%s' % download_task.outputs['output_gcs_path'],
'--job-dir',
output]
),
package_uris,
trainer_output_gcs_path,
gcs_working_dir,
region,
python_module,
runtime_version)
deploy_task = deploy(project_id,
train_task.outputs['job_dir'],
MODEL_NAME,
MODEL_VERSION,
runtime_version)
return True
# Reference for invocation later
pipeline_func = pipeline
```
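If you prefer to compile the pipeline into a package for later upload (for example through the Kubeflow Pipelines UI) instead of submitting it directly, a minimal sketch using the v1 SDK compiler is shown below; building the output filename from `PIPELINE_FILENAME_PREFIX` is our assumption, not part of the original notebook.
```
# Optional: compile the pipeline function into a deployable package.
kfp.compiler.Compiler().compile(pipeline_func, PIPELINE_FILENAME_PREFIX + '.pipeline.zip')
```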
### Submit the pipeline for execution
```
pipeline = kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
# Run the pipeline on a separate Kubeflow Cluster instead
# (use if your notebook is not running in Kubeflow - e.g. if using AI Platform Notebooks)
# pipeline = kfp.Client(host='<ADD KFP ENDPOINT HERE>').create_run_from_pipeline_func(pipeline, arguments={})
```
### Wait for the pipeline to finish
```
run_detail = pipeline.wait_for_run_completion(timeout=1800)
print(run_detail.run.status)
```
### Use the deployed model to predict (online prediction)
```
import os
os.environ['MODEL_NAME'] = MODEL_NAME
os.environ['MODEL_VERSION'] = MODEL_VERSION
```
Create a normalized input representing the 14 days prior to the prediction day.
```
%%writefile test.json
{"lstm_input": [[-1.24344569, -0.71910112, -0.86641698, -0.91635456, -1.04868914, -1.01373283, -0.7690387, -0.71910112, -0.86641698, -0.91635456, -1.04868914, -1.01373283, -0.7690387 , -0.90387016]]}
!gcloud ai-platform predict --model=$MODEL_NAME --version=$MODEL_VERSION --json-instances=test.json
```
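The same online prediction can also be issued from Python instead of `gcloud`. The sketch below uses the Google API client library and assumes `google-api-python-client` is installed and application-default credentials are configured; it is an illustration, not part of the original pipeline.
```
from googleapiclient import discovery

# Build a client for the AI Platform (ML Engine) v1 API and request an online prediction.
service = discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(project_id, MODEL_NAME, MODEL_VERSION)
instances = [{'lstm_input': [[-1.24344569, -0.71910112, -0.86641698, -0.91635456, -1.04868914,
                              -1.01373283, -0.7690387, -0.71910112, -0.86641698, -0.91635456,
                              -1.04868914, -1.01373283, -0.7690387, -0.90387016]]}]
response = service.projects().predict(name=name, body={'instances': instances}).execute()
print(response)
```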
### Examine cloud services invoked by the pipeline
- BigQuery query: https://console.cloud.google.com/bigquery?page=queries (click on 'Project History')
- AI Platform training job: https://console.cloud.google.com/ai-platform/jobs
- AI Platform model serving: https://console.cloud.google.com/ai-platform/models
### Clean models
```
!gcloud ai-platform versions delete $MODEL_VERSION --model $MODEL_NAME -q
!gcloud ai-platform models delete $MODEL_NAME -q
```
# ※ Review
## 1) Parameters and arguments
- Parameter: a variable that receives the value passed into a function as input (the receiving side)
- Argument: the input value passed when calling a function (the sending side)
```
```
## 2) Getting values from a dictionary
```
alist = {
    'key': ['value1', 'value2'],
    'key2': ['value3', 'value4'],
    'key3': ['value5', 'value6'],
    'key4': {'key': 100}
}
```
# 1.
```
```
# 2. Modules and packages
```
```
## 1)
```
```
## 2)
```
```
## 3)
```
```
```
```
# 3. Classes
```
```
# ※ Exercises
## Exercise: What is the minimum age at DT School?
```
# Code provided by the instructor.
# A simple format that is easy to write out to a file.
alist = {
    'Team 1': [25, 39, 29, 27, 22],
    'Team 2': [27, 24, 25, 29, 25],
    'Team 3': [25, 26, 25, 23, 23, 28, 27],
    'Team 4': [21, 23, 30],
    'Team 5': [26, 41, 24, 31, 21, 34, 27],
    'Team 6': [21, 29, 27, 25, 25]}
age_li = []
for i, v in alist.items():
    print(f'{i}: minimum age {min(v)}, maximum age {max(v)}, average age {sum(v) / len(v)}')
    # could not remember this part
    age_li.extend(v)
# print(age_li)
print(f'DT School minimum age: {min(age_li)}, maximum age: {max(age_li)}')

alist = {
    'Team 1': [25, 39, 29, 27, 22],
    'Team 2': [27, 24, 25, 29, 25],
    'Team 3': [25, 26, 25, 23, 23, 28, 27],
    'Team 4': [21, 23, 30],
    'Team 5': [26, 41, 24, 31, 21, 34, 27],
    'Team 6': [21, 29, 27, 25, 25]}
age_li = []
for i, v in alist.items():
    print(f'{i}: minimum age {min(v)}, maximum age {max(v)}, average age {sum(v) / len(v)}')
    age_li.extend(v)
# print(age_li)
print(f'DT School minimum age: {min(age_li)}, maximum age: {max(age_li)}')
```
## Exercise: Build our team's character dictionary
1. Build a dictionary keyed by nickname.
- Key: nickname
- Values: hobby, stamina, physical age, mental age
2. Read a nickname from the user: input('Enter a nickname')
3. Combine the nickname and its values into a fun team-member introduction and print it.
- (e.g. places they want to visit, things they want to do, things they enjoy)
- Turn the character-dictionary printout into a function and have it pick a member with random.choice().
```
import random
team_dic = {
    'sunny': ['reading', 90, 30, 40, 'INFP'],
    'Zoey': ['riding motorcycles', 50, 30, 30, 'ISFJ'],
    'Sophie': ['watching movies', 90, 30, 30, 'ISFJ'],
    'Alex': ['creating things', 70, 27, 25, 'INTP'],
    'Annie': ['drawing', 50, 30, 30, 'ENFP']
}
team_keys = list(team_dic.keys())  # had mistakenly used values() here at first, which was wrong
a = random.choice(team_keys)
b_list = team_dic[a]
print(f'Introducing our {a}! {a} likes {b_list[0]} and their MBTI type is {b_list[4]}.')
print(f'Stamina {b_list[1]}! Physical age {b_list[2]}, yet, surprisingly, a mental age of {b_list[3]}!!!!')
# Could the message change depending on stamina, physical age, and mental age?
print(f'That was our lovely {a}!')
# Team 1's revised code
import random
def c_open():
    team_dic = {
        'sunny': ['reading', 90, 30, 40, 'INFP'],
        'Zoey': ['riding motorcycles', 50, 30, 30, 'ISFJ'],
        'Sophie': ['watching movies', 90, 30, 30, 'ISFJ'],
        'Alex': ['creating things', 70, 27, 25, 'INTP'],
        'Annie': ['drawing', 50, 30, 30, 'ENFP']
    }
    c_list = list(team_dic.keys())
    a = random.choice(c_list)
    b_list = team_dic[a]
    print(f'Introducing our {a}! {a} likes {b_list[0]} and their MBTI type is {b_list[4]}.')
    print(f'Stamina {b_list[1]}! Physical age {b_list[2]}, yet, surprisingly, a mental age of {b_list[3]}!!!')
    print(f'That was our lovely {a}!')
c_open()
# Team 5's code
# team_may = {'nickname': [name, hobby, favorite food, place they want to travel to]}
team_may = {
    'member1': ['name1', 'hobby1', 'food1', 'place1'],
    'member2': ['name2', 'hobby2', 'food2', 'place2'],
    'member3': ['name3', 'hobby3', 'food3', 'place3'],
    'member4': ['name4', 'hobby4', 'food4', 'place4'],
    'member5': ['name5', 'hobby5', 'food5', 'place5'],
    'member6': ['name6', 'hobby6', 'food6', 'place6']
}
nick_name = input('Who are you curious about? ')
if nick_name not in team_may:
    print('That is not one of our team members.')
else:
    b_list = team_may[nick_name]
    print(f'Introducing our star, {nick_name}!')
    print(f"{nick_name}'s name is {b_list[0]}.")
    print(f'Their hobby is {b_list[1]}.')
    print(f"{nick_name}'s favorite food is {b_list[2]},")
    print(f'and the place they want to travel to is {b_list[3]}!')
    print(f'Please give a warm welcome to our lovely star {nick_name} ♡')
# Team 6's code (was not working)
# team6_dic = {'nickname': [hobby, mbti, favorite travel spot, travel spot they want to visit, favorite activity]}
team6_dic = {
    'member1': ['watching movies', 'ENTP', 'Gyeongju', 'Dubai', 'hanging out with friends'],
    'member2': ['watching Netflix', 'ENFP', 'Jeju Island', 'Hawaii', 'eating delicious food'],
    'member3': ['looking at my phone', 'ESTP', 'anywhere by the sea', 'Kota Kinabalu', 'going for a drive'],
    'member4': ['gaming', 'enfj', 'Busan', 'Ganghwa', 'relaxing'],
    'member5': ['reading', 'ISFP', 'Namhae', 'Da Nang', 'board games and escape rooms']
}
import random
a = random.choice(list(team6_dic.keys()))  # random.choice needs a sequence, so pass the keys as a list
b_list = team6_dic[a]
print(f"Introducing {a}! {a}'s hobby is {b_list[0]}.")
print(f'Their MBTI is {b_list[1]}, their favorite travel spot is {b_list[2]}, and the spot they want to visit is {b_list[3]}!')
print(f'Their favorite activity is {b_list[4]}!')
print(f'That was our lovely {a} :D')
```
## Exercise: Find our team's average age
```
```
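A minimal sketch for this exercise, assuming the module-level `team_dic` from the character-dictionary exercise above is available and that the physical age sits at index 2 of each member's value list (both assumptions on our part):
```
# Sketch: average the physical ages (assumed to be at index 2) of everyone in team_dic.
ages = [info[2] for info in team_dic.values()]
print(f'Average physical age of the team: {sum(ages) / len(ages):.1f}')
```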
# Stock Long-Term Investment
```
import numpy as np
import pandas as pd
from pandas_datareader import data as pdr
import fix_yahoo_finance as yf
yf.pdr_override()  # let pandas_datareader pull data through the Yahoo Finance fix
import matplotlib.pyplot as plt
tickers = ['DPS', 'KO', 'PEP']
start = '2011-1-1'
end = '2015-12-31'
portfolio = pd.DataFrame()
for t in tickers:
portfolio[t] = pdr.get_data_yahoo(t, start, end)['Adj Close']
portfolio.info()
portfolio.head()
portfolio.tail()
# Extract the very first row of the data
portfolio.iloc[0]
# Normalize prices to a base of 100
(portfolio / portfolio.iloc[0] * 100).plot(figsize = (15,6))
plt.show()
portfolio_returns = np.log(portfolio / portfolio.shift(1))
# Calculate the variance of a particular asset's daily returns
DPS_var = portfolio_returns['DPS'].var()
DPS_var
# Annualize the var from daily returns
DPS_var_a = portfolio_returns['DPS'].var() * 250
DPS_var_a
each_var = portfolio_returns.var()
each_var
each_std = portfolio_returns.std()
each_std
each_mean = portfolio_returns.mean()
each_mean
# Risk-free rate: 1% annualized, converted to a daily rate
excess_daily_ret = portfolio_returns - 0.01/252
excess_daily_ret
each_ex = excess_daily_ret.mean()
each_ex
# Sharpe Ratio (annualized, net of the 1% risk-free rate)
Rf = 0.01
each_SR = np.sqrt(252) * (each_mean - Rf / 252) / each_std
each_SR
portfolio_returns['DPS'].plot()
plt.show()
# Calculate the simple rate of return
portfolio_simple_return = (portfolio / portfolio.shift(1)) - 1
print(portfolio_simple_return.head())
# plot the daily returns
portfolio_simple_return['DPS'].plot(figsize=(8,5))
plt.show()
# Calculate the average daily return
avg_returns_d = portfolio_simple_return['DPS'].mean()
avg_returns_d
# Calculate the average annual return per 250 trading days
avg_returns_a = portfolio_simple_return['DPS'].mean() * 250
print(str(round(avg_returns_a,5)*100) + '%')
# Calculate the log returns for a given security
portfolio_log_return = np.log(portfolio / portfolio.shift(1))
print(portfolio_log_return.head())
# Plot the log returns
portfolio_log_return['DPS'].plot(figsize=(8,5))
plt.show()
# find the log daily return
log_return_d = portfolio_log_return['DPS'].mean()
# find the log annual return
log_return_a = portfolio_log_return['DPS'].mean() * 250
print(str(round(log_return_a,5) * 100) + '%')
# calculate the covariance matrix for the daily returns of selected assets
cov_matrix_d = portfolio_returns.cov()
cov_matrix_d
# Annualize the covariance matrix
cov_matrix_a = portfolio_returns.cov() * 250
cov_matrix_a
# Calculate the correlation of the daily returns of selected assets
corr_matrix = portfolio_returns.corr()
corr_matrix
```
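As a follow-up, the annualized covariance matrix can be used to estimate the volatility of a portfolio built from these three stocks. The equal weighting below is an assumption for illustration, not part of the original analysis.
```
# Assume an equally weighted portfolio of the three tickers.
weights = np.array([1/3, 1/3, 1/3])

# Portfolio variance: w^T * (annualized covariance matrix) * w
portfolio_var = np.dot(weights.T, np.dot(cov_matrix_a, weights))
portfolio_vol = np.sqrt(portfolio_var)
print('Annualized portfolio volatility: ' + str(round(portfolio_vol * 100, 2)) + '%')
```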
# Legacy Surveys DR9: Exploring galaxy ellipticities
This notebook illustrates that the *shape_r* field -- galaxy radius -- measures the major axis, *not* the circularized radius. It also demonstrates some of the Tractor and LegacyPipe code interfaces.
Installation: some of the software is a bit fiddly to install, so the easiest thing is to use the Docker container we provide, *legacysurvey/legacypipe:DR9.7.2*. At NERSC, you can install this so that it is one of your available kernels by:
```
mkdir -p ~/.local/share/jupyter/kernels/legacypipe-DR9.7.2
wget -O ~/.local/share/jupyter/kernels/legacypipe-DR9.7.2/kernel.json \
https://raw.githubusercontent.com/legacysurvey/legacypipe/main/py/legacyanalysis/jupyter-legacypipe-kernel.json
```
```
%matplotlib inline
import sys
import numpy as np
import pylab as plt
from astrometry.util.fits import *
from glob import glob
from tractor.ellipses import EllipseE
from legacypipe.survey import LegacySurveyData, wcs_for_brick
# We plot a grid of galaxies with different ellipticity components e1,e2.
n1, n2 = 7, 7
E1, E2 = np.meshgrid(np.linspace(-1., 1., n1), np.linspace(-1., 1., n2))
angle = np.linspace(0., 2. * np.pi, 20)
xy = np.vstack((np.sin(angle), np.cos(angle)))
for e1, e2 in zip(E1.ravel(), E2.ravel()):
e = EllipseE(np.exp(6), e1, e2)
T = e.getRaDecBasis()
txy = np.dot(T, xy)
plt.plot(e1 + txy[0, :], e2 + txy[1, :], '-', color='b')
plt.xlabel('shape_e1')
plt.ylabel('shape_e2')
plt.axis('scaled')
plt.title('Ellipticities e1,e2');
# Next, we'll render galaxy models with these ellipticities
from tractor.psf import NCircularGaussianPSF
from tractor import Image, NullWCS, ConstantFitsWcs, LinearPhotoCal, PixPos, RaDecPos, Flux, Tractor, EllipseESoft, ExpGalaxy
from astrometry.util.util import Tan
W, H = 500, 500
img = np.zeros((H, W), np.float32)
sig1 = 1.
pixscale = 0.262
# Create a typical north-up, east-left WCS
ps = pixscale/3600
wcs = Tan(0., 0., (W+1)/2., (H+1)/2., -ps, 0., 0., ps, float(W), float(H))
# PSF model: we'll assume 1" FWHM seeing with a Gaussian PSF shape.
# The PSF model class we're using takes Gaussian sigma in pixels rather than FWHM, so factor of 2.35
seeing = 1.
psf = NCircularGaussianPSF([seeing / pixscale / 2.35], [1.])
# Create a fake Tractor image object
tim = Image(data=img, inverr=np.zeros_like(img) + (1. / sig1),
psf=psf, wcs=ConstantFitsWcs(wcs),
photocal=LinearPhotoCal(1.))
# Could also work in pixel space:
# wcs=NullWCS(pixscale=pixscale)
# Create a catalog of galaxies with different ellipticities
cat = []
x = np.linspace(0, W, n1, endpoint=False)
x += (x[1] - x[0]) / 2.
y = np.linspace(0, H, n2, endpoint=False)
y += (y[1] - y[0]) / 2.
xx, yy = np.meshgrid(x, y)
rr, dd = wcs.pixelxy2radec(xx, yy)
for e1, e2, x, y, r, d in zip(E1.ravel(), E2.ravel(), xx.ravel(), yy.ravel(), rr.ravel(), dd.ravel()):
e = EllipseE(5., e1, e2)
#pos = PixPos(x, y)
pos = RaDecPos(r, d)
gal = ExpGalaxy(pos, Flux(1000. * sig1), e)
cat.append(gal)
ima = dict(interpolation='nearest', origin='lower', cmap='gray',
vmin=-1 * sig1, vmax=3 * sig1)
tractor = Tractor([tim], cat)
mod = tractor.getModelImage(0)
plt.imshow(mod, **ima);
plt.xticks([]); plt.yticks([])
plt.xlabel('shape_e1')
plt.ylabel('shape_e2')
plt.title('Sky rendering of EXP galaxy ellipticities');
```
Next, we'll examine a couple of real galaxies to show their radii.
```
# The LegacySurveyData object handles finding files on disk, among other things
survey = LegacySurveyData('/global/cfs/cdirs/cosmo/data/legacysurvey/dr9/south')
# This is a specific galaxy I selected to examine.
brick, objid = '1764p197', 2721
# Find the tractor file and read it
filename = survey.find_file('tractor', brick=brick)
T = fits_table(filename)
# Select just the single galaxy we want.
t = (T[T.objid == objid])[0]
print(t.shape_r, t.shape_e1, t.shape_e2)
# Find the JPEG image of this brick
fn = survey.find_file('image-jpeg', brick=brick)
# The way JPEGs get read, they're vertically flipped wrt FITS coordinates.
img = np.flipud(plt.imread(fn))
# Create a FITS WCS header for this brick.
brickobj = survey.get_brick_by_name(brick)
wcs = wcs_for_brick(brickobj)
# Show the JPEG image cutout of this galaxy.
x,y = int(t.bx), int(t.by)
S=100
x0 = x-S//2
y0 = y-S//2
# Pull out the subimage of the JPEG
subimg = img[y0:y0+S, x0:x0+S]
sh,sw,x = subimg.shape
# Here we create a WCS object for the subimage.
subwcs = wcs.get_subimage(x0, y0, sw, sh)
plt.imshow(subimg, origin='lower');
# Create a tractor ellipse object for this galaxy, and compute where 1 r_e is.
ell = EllipseE(t.shape_r, t.shape_e1, t.shape_e2)
# the getRaDecBasis() function returns a matrix that converts r_e to delta_RA, delta_Dec
R = ell.getRaDecBasis()
angle = np.linspace(0., 2. * np.pi, 30)
xx, yy = np.sin(angle), np.cos(angle)
xy = np.vstack((xx,yy))
dra,ddec = np.dot(R, xy)
# Also compute locations of major and minor axes
vxy = np.array([[0, 1], [0,0], [1,0]]).T
vdra,vddec = np.dot(R, vxy)
# Plot 1 r_e contour, converting to RA,Dec then to pixels.
cosdec = np.cos(np.deg2rad(t.dec))
ra = t.ra + dra / cosdec
dec = t.dec + ddec
ok,xx,yy = subwcs.radec2pixelxy(ra, dec)
ok,vx,vy = subwcs.radec2pixelxy(t.ra + vdra/cosdec, t.dec + vddec)
plt.imshow(subimg, origin='lower');
ax = plt.axis()
# The -1s here are to convert FITS 1-indexed pixels to numpy 0-indexed arrays
plt.plot(xx-1, yy-1, 'r-')
plt.plot(vx-1, vy-1, 'r--')
plt.axis(ax);
# Repeat for a second example galaxy that is more round.
brick, objid = '1745p200', 3017
fn = survey.find_file('tractor', brick=brick)
T = fits_table(fn)
t = (T[T.objid == objid])[0]
fn = survey.find_file('image-jpeg', brick=brick)
img = np.flipud(plt.imread(fn))
brickobj = survey.get_brick_by_name(brick)
wcs = wcs_for_brick(brickobj)
x,y = int(t.bx), int(t.by)
S=100
x0 = x-S//2
y0 = y-S//2
subimg = img[y0:y0+S, x0:x0+S]
sh,sw,x = subimg.shape
subwcs = wcs.get_subimage(x0, y0, sw, sh)
ell = EllipseE(t.shape_r, t.shape_e1, t.shape_e2)
R = ell.getRaDecBasis()
angle = np.linspace(0., 2. * np.pi, 30)
xx, yy = np.sin(angle), np.cos(angle)
xy = np.vstack((xx,yy))
dra,ddec = np.dot(R, xy)
vxy = np.array([[0, 1], [0,0], [1,0]]).T
vdra,vddec = np.dot(R, vxy)
cosdec = np.cos(np.deg2rad(t.dec))
ra = t.ra + dra / cosdec
dec = t.dec + ddec
ok,xx,yy = subwcs.radec2pixelxy(ra, dec)
ok,vx,vy = subwcs.radec2pixelxy(t.ra + vdra/cosdec, t.dec + vddec)
plt.imshow(subimg, origin='lower');
ax = plt.axis()
plt.plot(xx-1, yy-1, 'r-')
plt.plot(vx-1, vy-1, 'r--')
plt.axis(ax);
# Plot the minor and major axis radii as well.
e = ell.e
ab = (1 - e) / (1 + e)
rx,ry = np.cos(angle), np.sin(angle)
pix_r = t.shape_r / subwcs.pixel_scale()
ok,cx,cy = subwcs.radec2pixelxy(t.ra, t.dec)
plt.imshow(subimg, origin='lower');
ax = plt.axis()
plt.plot(xx-1, yy-1, 'r-')
plt.plot(vx-1, vy-1, 'r--')
plt.plot(cx-1+pix_r*rx, cy-1+pix_r*ry, 'w-')
plt.plot(cx-1+pix_r*rx*ab, cy-1+pix_r*ry*ab, 'w-')
plt.axis(ax);
```
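To tie this back to the point made at the top of the notebook, the sketch below converts the catalog's major-axis radius into a circularized radius using the common convention $r_{\rm circ} = r \sqrt{b/a}$; the convention itself is our assumption, not something defined by the catalog.
```
# Circularized radius: scale the (major-axis) half-light radius shape_r by sqrt(b/a).
e_tot = np.hypot(t.shape_e1, t.shape_e2)   # total ellipticity, same as ell.e above
ab_ratio = (1 - e_tot) / (1 + e_tot)       # axis ratio b/a
r_circ = t.shape_r * np.sqrt(ab_ratio)
print('Major-axis radius: %.2f arcsec, circularized radius: %.2f arcsec' % (t.shape_r, r_circ))
```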
# Simulate the Spin Dynamics on a Heisenberg Chain
<em> Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved. </em>
## Introduction
The simulation of quantum systems is one of the many important applications of quantum computers. In general, a system's properties are characterized by its Hamiltonian operator $H$. For physical systems at different scales, the Hamiltonian takes different forms. For example, in quantum chemistry we are often interested in the properties of molecules, which are determined mostly by electron-electron Coulomb interactions; a molecular Hamiltonian is therefore usually written in terms of fermionic operators that act on the electronic wave function. On the other hand, the basic computational unit of a quantum computer - the qubit - and its operations correspond to spins and spin operators in physics. So in order to simulate a molecular Hamiltonian on a quantum computer, one first needs to map the fermionic operators onto spin operators with mappings such as the Jordan-Wigner or Bravyi-Kitaev transformation. Those transformations often create additional overhead for quantum simulation algorithms, making them more demanding in terms of a quantum computer's number of qubits, connectivity, and error control. It is therefore commonly believed that one of the most promising near-term applications for quantum computers is the simulation of quantum spin models, whose Hamiltonians are natively composed of Pauli operators.
This tutorial will demonstrate how to simulate the time evolution of a one-dimensional Heisenberg chain, one of the most commonly studied quantum spin models. The tutorial is based on `construct_trotter_circuit()`, which can build a Trotter-Suzuki circuit, or any custom trotterization circuit, to simulate the time-evolution operator. We have already covered its basic usage as well as the theoretical background in another tutorial, [Hamiltonian Simulation with Product Formula](./HamiltonianSimulation_EN.ipynb). A brief introduction to the Suzuki product formula is provided below for readers who are not familiar with it. In the remainder of this tutorial, we will focus on two parts:
- Simulating the spin dynamics on a Heisenberg chain
- Using randomized permutation to build a custom trotter circuit
---
Before discussing the physical background of the Heisenberg model, let's go over the basic concepts of time-evolution simulation with a quantum circuit. Readers already familiar with this, or uninterested in such details, can skip ahead to the section **Heisenberg Model and Its Dynamic Simulation**.
### Simulate the time evolution with Suzuki product formula
The core idea of the Suzuki product formula can be described as follows: First, for a time-independent Hamiltonian $H = \sum_k^L h_k$, the system's time evolution operator is
$$
U(t) = e^{-iHt}.
\tag{1}
$$
Further dividing it into $r$ pieces, we have
$$
e^{-iHt} = \left( e^{-iH \tau} \right)^r, ~\tau=\frac{t}{r}.
\tag{2}
$$
This strategy is sometimes referred to as "Trotterization".
And for each $e^{-iH \tau}$ operator, its Suzuki decompositions are
$$
\begin{aligned}
S_1(\tau) &= \prod_{k=0}^L \exp ( -i h_k \tau),
\\
S_2(\tau) &= \prod_{k=0}^L \exp ( -i h_k \frac{\tau}{2})\prod_{k=L}^0 \exp ( -i h_k \frac{\tau}{2}),
\\
S_{2k+2}(\tau) &= [S_{2k}(p_k\tau)]^2S_{2k}\left( (1-4p_k)\tau\right)[S_{2k}(p_k\tau)]^2.
\end{aligned}
\tag{3}
$$
Back to the original time evolution operator $U(t)$, with the $k$-th order Suzuki decomposition, it can be reformulated as
$$
U(t) = e^{-iHt} = \left( S_{k}\left(\frac{t}{r}\right) \right)^r.
\tag{4}
$$
The above scheme is referred to as the Suzuki product formula or Trotter-Suzuki decomposition. It has been proven that it can efficiently simulate any time evolution process of a system with a k-local Hamiltonian up to arbitrary precision [1]. In another tutorial, [Hamiltonian Simulation with Product Formula](./HamiltonianSimulation_EN.ipynb), we have shown how to calculate its error upper bound.
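As a quick numerical illustration of equations (2)-(4), the toy sketch below compares the exact evolution operator of a small two-qubit Hamiltonian with its first-order Trotterized approximation; the Hamiltonian $H = X\otimes X + Z\otimes I$ and the parameters are arbitrary choices for demonstration only.
```
import numpy as np
from scipy.linalg import expm

# A toy two-qubit Hamiltonian H = h1 + h2 with two non-commuting terms.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)
h1, h2 = np.kron(X, X), np.kron(Z, I)

t, r = 1.0, 100                        # total evolution time and number of Trotter steps
exact = expm(-1j * t * (h1 + h2))      # U(t) = exp(-iHt)
s1 = expm(-1j * (t / r) * h1) @ expm(-1j * (t / r) * h2)  # S_1(t/r)
trotter = np.linalg.matrix_power(s1, r)                   # (S_1(t/r))^r
print('Spectral-norm error:', np.linalg.norm(exact - trotter, 2))
```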
---
## Heisenberg Model and Its Dynamic Simulation
The Heisenberg model is arguably one of the most commonly used model in the research of quantum magnetism and quantum many-body physics. Its Hamiltonian can be expressed as
$$
H = \sum_{\langle i, j\rangle}
\left( J_x S^x_{i} S^x_{j} + J_y S^y_{i} S^y_{j} + J_z S^z_{i} S^z_{j} \right)
+
\sum_{i} h_z S^z_i,
\tag{5}
$$
with $\langle i, j\rangle$ depends on the specific lattice structure, $J_x, J_y, J_z$ describe the spin coupling strength respectively in the $xyz$ directions and $h_z$ is the magnetic field applied along the $z$ direction. When taking $J_z = 0$, the Hamiltonian in (5) can be used to describe the XY model; or when taking $J_x = J_y = 0$, then (5) is reduced to the Hamiltonian of Ising model. Note that here we used a notation of many-body spin operators $S^x_i, S^y_i, S^z_i$ which act on each of the local spins, this is slightly different from our usual notations but are very common in the field of quantum many-body physics. For a spin-1/2 system, when neglecting a coefficient of $\hbar/2$, the many-body spin operators are simple tensor products of Pauli operators, i.e.
$$
S^P_{i} = \left ( \otimes_{j=0}^{i-1} I \right ) \otimes \sigma_{P} \otimes \left ( \otimes_{j=i+1}^{L} I \right ),
P \in \{ x, y, z \},
\tag{6}
$$
where the $\sigma_{P}$ are Pauli operators, which can also be written as $X, Y, Z$. It is worth noting that while the Heisenberg model is an important theoretical model, it also describes the physics of realistic materials (crystals). Starting from the Hubbard model, which describes the interactions and movement of electrons on a lattice, under certain conditions the electrons become fixed to their sites at half filling. In this case, the only remaining interaction is an effective spin-spin exchange interaction, and the Hubbard model reduces to the Heisenberg model [2]. Although this involves several approximations, the Heisenberg model has successfully described the properties of many crystalline materials at low temperatures [3]. For example, many readers might be familiar with copper nitrate crystals ($\rm Cu(NO_3)_2 \cdot 2.5 H_2 O$), whose behavior at $\sim 3$ K can be described by an alternating spin-1/2 Heisenberg chain [4].
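To make equation (6) concrete, here is a small sketch that builds the many-body operator $S^z_1$ for a three-site spin-1/2 chain with explicit tensor products (Paddle Quantum's `SpinOps` class, used later in this tutorial, constructs these operators for you); the $\hbar/2$ prefactor is dropped, as in the text.
```
import numpy as np

# Pauli Z and the 2x2 identity.
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
identity = np.eye(2, dtype=complex)

# S^z_1 on a 3-site chain: I (site 0) x sigma_z (site 1) x I (site 2).
Sz_1 = np.kron(np.kron(identity, sigma_z), identity)
print(Sz_1.shape)  # (8, 8)
```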
Depending on the lattice structure, the Heisenberg model can host highly non-trivial quantum phenomena. As a one-dimensional chain, it exhibits ferromagnetism and anti-ferromagnetism, symmetry breaking, and gapless excitations [3]. On frustrated two-dimensional lattices, certain Heisenberg models are candidate models for quantum spin liquids, a long-range entangled form of quantum matter [5]. Under a disordered external magnetic field, the Heisenberg model can also be used to study many-body localization, a heavily researched topic in which the system violates the thermalization hypothesis and retains memory of its initial state after arbitrarily long evolution times [6].
Simulating the time evolution of a Heisenberg model, i.e. dynamical simulation, can help us investigate the non-equilibrium properties of the system, and it might help us locate novel quantum phases such as the many-body localized (MBL) phase introduced above or, even more interestingly, time-crystal phases [7]. Beyond developing theories, dynamical simulation plays a vital role for experimentalists, since the spin correlation function (also referred to as the dynamical structure factor) is directly linked to the cross sections of scattering experiments and to line shapes in nuclear magnetic resonance (NMR) experiments [3]. This function, whose exact form we omit here, involves an integral over $\langle S(t) S(0) \rangle$, so in order to bridge experiment and theory one also needs to compute the system's evolution in time.
### Use Paddle Quantum to simulate and observe the time evolution process of a Heisenberg chain
Now we will take a length-5 one-dimensional Heisenberg chain under a disordered field as an example, and demonstrate how to construct its time-evolution circuit in Paddle Quantum. First we need to import the relevant packages.
```
import numpy as np
import scipy
from scipy import linalg
import matplotlib.pyplot as plt
from paddle_quantum.circuit import UAnsatz
from paddle_quantum.utils import SpinOps, Hamiltonian, gate_fidelity
from paddle_quantum.trotter import construct_trotter_circuit, get_1d_heisenberg_hamiltonian
```
Then we use `get_1d_heisenberg_hamiltonian()` function to generate the Hamiltonian of a Heisenberg chain.
```
h = get_1d_heisenberg_hamiltonian(length=5, j_x=1, j_y=1, j_z=2, h_z=2 * np.random.rand(5) - 1,
periodic_boundary_condition=False)
print('The Hamiltoninan is:')
print(h)
```
After obtaining its Hamiltonian, we can pass it to the `construct_trotter_circuit()` function to construct its time-evolution circuit. Also, with `Hamiltonian.construct_h_matrix()`, which returns the matrix form of a `Hamiltonian` object, we can calculate its exponential, i.e. the exact time-evolution operator. By taking the quantum circuit's unitary matrix `UAnsatz.U` and computing its fidelity with the exact time-evolution operator, we can evaluate how well the constructed circuit describes the correct time-evolution process.
```
# calculate the exact evolution operator of time t
def get_evolve_op(t): return scipy.linalg.expm(-1j * t * h.construct_h_matrix())
# set the total evolution time and the number of trotter steps
t = 3
r = 10
# construct the evolution circuit
cir_evolve = UAnsatz(5)
construct_trotter_circuit(cir_evolve, h, tau=t/r, steps=r, order=2)
# get the circuit's unitary matrix and calculate its fidelity to the exact evolution operator
U_cir = cir_evolve.U.numpy()
print('The fidelity between the circuit\'s unitary and the exact evolution operator is : %.2f' % gate_fidelity(get_evolve_op(t), U_cir))
```
#### Permute the Hamiltonian according to commutation relationships
It has been shown that the product formula's simulation error can be reduced by rearranging the different terms. Since the simulation error arises from the non-commuting terms in the Hamiltonian, one natural idea is to permute the Hamiltonian so that commuting terms are grouped together. For example, we could divide a Hamiltonian into four parts,
$$
H = H_x + H_y + H_z + H_{\rm other},
\tag{7}
$$
where $H_x, H_y, H_z$ contain terms only composed of $X, Y, Z$ operators, and $H_{\rm other}$ are all the other terms. For Hamiltonian describe in (5), all terms can be grouped into $H_x, H_y, H_z$.
Another approach is to decompose the Hamiltonian according to the system geometry. Especially for one-dimensional nearest-neighbor systems, the Hamiltonian can be divided into even and odd terms,
$$
H = H_{\rm even} + H_{\rm odd}.
\tag{8}
$$
where $H_{\rm even}$ are interactions on sites $(0, 1), (2, 3), ...$ and $H_{\rm odd}$ are interactions on sites $(1, 2), (3, 4), ...$.
Note that these two permutation strategies do **not** reduce the theoretical bound on the simulation error, and empirically their effect on the error is case-by-case. Nevertheless, we provide the above two decompositions as built-in options of the `construct_trotter_circuit()` function. By setting the argument `grouping='xyz'` or `grouping='even_odd'`, the function will automatically try to rearrange the Hamiltonian when adding the Trotter circuit. In addition, users can customize the permutation with the argument `permutation`, which we will introduce shortly in the next section. For now, let's test the `grouping` option and check the variations in fidelity:
```
# using the same evolution parameters, but set 'grouping="xyz"' and 'grouping="even_odd"'
cir_evolve_xyz = UAnsatz(5)
cir_evolve_even_odd = UAnsatz(5)
construct_trotter_circuit(cir_evolve_xyz, h, tau=t/r, steps=r, order=2, grouping='xyz')
construct_trotter_circuit(cir_evolve_even_odd, h, tau=t/r, steps=r, order=2, grouping='even_odd')
U_cir_xyz = cir_evolve_xyz.U.numpy()
U_cir_even_odd = cir_evolve_even_odd.U.numpy()
print('Original fidelity:', gate_fidelity(get_evolve_op(t), U_cir))
print('XYZ-permuted fidelity:', gate_fidelity(get_evolve_op(t), U_cir_xyz))
print('Even-odd-permuted fidelity:', gate_fidelity(get_evolve_op(t), U_cir_even_odd))
```
#### Initial state preparation and final state observation
Now let's prepare the system's initial state. Generally speaking, a common approach when studying the dynamics of a quantum system is to start the evolution from different direct-product states. In Paddle Quantum, the default initial state is $\vert 0...0 \rangle$, so we can simply apply $X$ gates to selected qubits to obtain a direct-product initial state. For example, here we apply $X$ gates to the qubits representing spins on odd sites, so the initial state becomes $\vert 01010 \rangle$, i.e. $\vert \downarrow \uparrow \downarrow \uparrow \downarrow \rangle$ in spin notation.
```
# create a circuit used for initial state preparation
cir = UAnsatz(5)
cir.x(1)
cir.x(3)
# run the circuit the get the initial state
init_state = cir.run_state_vector()
```
By passing the initial state `init_state` into the method `UAnsatz.run_state_vector(init_state)`, we can evolve the initial state with a quantum circuit. Then, with the `UAnsatz.expecval()` method, the expectation value of a user-specified observable on the final state can be measured. For simplicity, we only consider the single-spin observable $\langle S_i^z \rangle$ here; its corresponding Pauli string is `[[1, 'Zi']]` (with i an integer).
```
cir_evolve_even_odd.run_state_vector(init_state)
print('Sz observable on site 0 is:', cir_evolve_even_odd.expecval([[1, 'Z0']]).numpy()[0])
```
Similarly, by adjusting the simulation time and changing the observable, we can plot the entire evolution process for different spins. Note that in order to compute the exact solution, we need to construct the matrix form of each observable $S_i^z$ using the `SpinOps` class and calculate its expectation value as $\langle \psi(t) \vert S_i^z \vert \psi(t) \rangle$.
```
def get_evolution_z_obs(h, t_total, order=None, n_steps=None, exact=None):
"""
a function to calculate a system's Sz observable on each site for an entire evolution process t
specify the order the trotter length by setting order and n_steps
set exact=True to get the exact results
"""
z_obs_total = []
for t in np.linspace(0., t_total, t_total * 3 + 1):
z_obs = []
# get the final state by either evolving with a circuit or the exact operator
if exact:
spin_operators = SpinOps(h.n_qubits)
fin_state = get_evolve_op(t).dot(init_state)
else:
cir_evolve = UAnsatz(5)
construct_trotter_circuit(cir_evolve, h, tau=t/n_steps, steps=n_steps, order=order, grouping='even_odd')
fin_state = cir_evolve.run_state_vector(init_state)
# measure the observable on each site
for site in range(h.n_qubits):
if exact:
z_obs.append(fin_state.conj().T.dot(spin_operators.sigz_p[site]).dot(fin_state))
else:
z_obs.append(cir_evolve.expecval([[1, 'Z' + str(site)]]).numpy()[0])
z_obs_total.append(z_obs)
return np.array(z_obs_total).real
def plot_comparison(**z_obs_to_plot):
"""
plot comparison between different evolution results
assume each argument passed into it is returned from get_evolution_z_obs() function for the same t_total
"""
fig, axes = plt.subplots(1, len(z_obs_to_plot), figsize = [len(z_obs_to_plot) * 3, 5.5])
ax_idx = 0
for label in z_obs_to_plot.keys():
im = axes[ax_idx].imshow(z_obs_to_plot[label], cmap='coolwarm_r', interpolation='kaiser', origin='lower')
axes[ax_idx].set_title(label, fontsize=15)
ax_idx += 1
for ax in axes:
ax.set_xlabel('site', fontsize=15)
ax.set_yticks(np.arange(0, z_obs_total_exact.shape[0], 3))
ax.set_yticklabels(np.arange(0, z_obs_total_exact.shape[0]/3, 1))
ax.set_xticks(np.arange(z_obs_total_exact.shape[1]))
ax.set_xticklabels(np.arange(z_obs_total_exact.shape[1]))
axes[0].set_ylabel('t', fontsize=15)
cax = fig.add_axes([0.92, 0.125, 0.02, 0.755])
fig.colorbar(im, cax)
cax.set_ylabel(r'$\langle S^z_i (t) \rangle$', fontsize=15)
# calculate the evolution process with circuits of trotter number 25 and 5, and the exact result
z_obs_total_exact = get_evolution_z_obs(h, t_total=3, exact=True)
z_obs_total_cir = get_evolution_z_obs(h, order=1, n_steps=25, t_total=3)
z_obs_total_cir_short = get_evolution_z_obs(h, order=1, n_steps=5, t_total=3)
plot_comparison(
Exact=z_obs_total_exact,
L25_Circuit=z_obs_total_cir,
L5_Circuit=z_obs_total_cir_short)
```
Observe that with 25 Trotter blocks, the circuit simulates the spin dynamics very well for the entire period. In contrast, the shorter circuit with only 5 Trotter blocks describes the system's behavior correctly only up to a certain time, after which the simulation breaks down.
**Exercise:** Can the reader observe the evolution of the spatial spin correlation function $\langle S_i^z S_j^{z} \rangle$?
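As a hint for this exercise, here is a minimal sketch of the exact (non-circuit) route; it reuses `SpinOps`, `get_evolve_op`, and `init_state` exactly as defined above, and the choice of sites $(i, j) = (0, 1)$ and time $t = 1$ is arbitrary.
```
# Exact equal-time correlator <psi(t)| S^z_0 S^z_1 |psi(t)> as a starting point.
spin_ops = SpinOps(h.n_qubits)
corr_op = spin_ops.sigz_p[0].dot(spin_ops.sigz_p[1])
fin_state = get_evolve_op(1.).dot(init_state)
print(fin_state.conj().T.dot(corr_op).dot(fin_state).real)
```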
## Design customized trotter circuit with random permutation
### Random permutation
Although it seems more physically reasonable to group the commuting terms of the Hamiltonian to achieve better simulation performance, there is a lot of evidence that using a fixed term ordering in every Trotter block can cause errors to accumulate. On the other hand, evolving the Hamiltonian terms in a random order might "wash out" some of the coherent error in the simulation process and replace it with less harmful stochastic noise [8]. Both theoretical analyses of the error upper bound and empirical evidence show that this randomization can effectively reduce the simulation error [9].
### Customize trotter circuit construction
By default, the function `construct_trotter_circuit()` constructs a time evolving circuit according to the Suzuki product formula. However, users could choose to customize both the coefficients and permutations by setting `method='custom'` and passing custom arrays to arguments `permutation` and `coefficient`.
**Note:** The user should be very cautious when using the arguments `coefficient`, `tau`, and `steps` together. By setting `steps` to a value other than 1 and `tau` to something other than $t$ (the total evolution time), it is possible to further trotterize the custom coefficients and permutation. For example, when setting `permutation=np.arange(h.n_qubits)` and `coefficient=np.ones(h.n_qubits)`, the effect of `tau` and `steps` is exactly the same as constructing the first-order product-formula circuit.
Let us demonstrate the customization with a concrete example. With the same spin-chain Hamiltonian, suppose we wish to design an evolution strategy similar to the first-order product formula, except that the ordering of the Hamiltonian terms within each Trotter block is independently random. We can implement this by passing an array of shape `(n_steps, h.n_terms)` to the argument `permutation`, where each row of the array is a random permutation $P(N)$.
```
# An example for customize permutation
permutation = np.vstack([np.random.permutation(h.n_terms) for i in range(100)])
```
Then we compare the fidelity of this strategy with that of the first-order product formula for different numbers of Trotter steps.
```
def compare(n_steps):
"""
compare the first order product formula and random permutation's fidelity for a fixed evolution time t=2
input n_steps is the number of trotter steps
output is respectively the first order PF and random permutations' fidelity
"""
t = 2
cir_evolve = UAnsatz(5)
construct_trotter_circuit(cir_evolve, h, tau=t/n_steps, steps=n_steps, order=1)
U_cir = cir_evolve.U.numpy()
fid_suzuki = gate_fidelity(get_evolve_op(t), U_cir)
cir_permute = UAnsatz(5)
permutation = np.vstack([np.random.permutation(h.n_terms) for i in range(n_steps)])
# when coefficient is not specified, a normalized uniform coefficient will be automatically set
construct_trotter_circuit(cir_permute, h, tau=t, steps=1, method='custom', permutation=permutation)
U_cir = cir_permute.U.numpy()
fid_random = gate_fidelity(get_evolve_op(t), U_cir)
return fid_suzuki, fid_random
# compare the two fidelity for different trotter steps
# as a demo, we only run the experiment once. Interested readers could run multiple times to calculate the error bar
n_range = [100, 200, 500, 1000]
result = [compare(n) for n in n_range]
result = 1 - np.array(result)
plt.loglog(n_range, result[:, 0], 'o-', label='1st order PF')
plt.loglog(n_range, result[:, 1], 'o-', label='Random')
plt.xlabel(r'Trotter number $r$', fontsize=12)
plt.ylabel(r'Error: $1 - {\rm Fid}$', fontsize=12)
plt.legend()
plt.show()
```
Here, "1st order PF" refers to the first-order product formula circuit with a fixed term ordering. As expected, the randomized trotter circuit shows a clear improvement in fidelity over the first-order product formula.
**Note:** In [9], the authors note that the randomization achieves better performance without using any specific information about the Hamiltonian, and that algorithms even more efficient than simple randomization should exist.
## Conclusion
Dynamical simulation plays a central role in the study of exotic quantum states. Due to the highly entangled nature of these systems, both experimental and theoretical investigations remain highly challenging. To date, the physics of some two-dimensional and even one-dimensional spin systems is still not fully understood. On the other hand, the rapid development of general quantum computers and of a series of quantum simulators gives researchers new tools for tackling these problems. Taking the general quantum computer as an example, it can use digital simulation to follow the evolution of almost any quantum system under complex conditions (for example, a time-dependent Hamiltonian), which is beyond the reach of any classical computer. As the number of qubits and their precision grow, it seems increasingly a question of when, rather than whether, quantum computers will surpass their classical counterparts on quantum simulation tasks. Among those tasks, it is commonly believed that the simulation of quantum spin systems will be one of the first cases where this breakthrough happens.
In this tutorial we have presented a hands-on case of simulating the dynamics of a quantum spin model with Paddle Quantum, and further discussed the possibility of designing new time-evolution strategies. Users can easily design and benchmark their time-evolution circuits with the `construct_trotter_circuit()` function and the methods provided by the `Hamiltonian` and `SpinOps` classes. We encourage users to experiment with and explore various time-evolution strategies on different quantum systems.
---
## References
[1] Childs, Andrew M., et al. "Toward the first quantum simulation with quantum speedup." [Proceedings of the National Academy of Sciences 115.38 (2018): 9456-9461](https://www.pnas.org/content/115/38/9456.short).
[2] Eckle, Hans-Peter. Models of Quantum Matter: A First Course on Integrability and the Bethe Ansatz. [Oxford University Press, 2019](https://oxford.universitypressscholarship.com/view/10.1093/oso/9780199678839.001.0001/oso-9780199678839).
[3] Mikeska, Hans-Jรผrgen, and Alexei K. Kolezhuk. "One-dimensional magnetism." Quantum magnetism. Springer, Berlin, Heidelberg, 2004. 1-83.
[4] Berger, L., S. A. Friedberg, and J. T. Schriempf. "Magnetic Susceptibility of $\rm Cu(NO_3)_2ยท2.5 H_2O$ at Low Temperature." [Physical Review 132.3 (1963): 1057](https://journals.aps.org/pr/abstract/10.1103/PhysRev.132.1057).
[5] Broholm, C., et al. "Quantum spin liquids." [Science 367.6475 (2020)](https://science.sciencemag.org/content/367/6475/eaay0668).
[6] Abanin, Dmitry A., et al. "Colloquium: Many-body localization, thermalization, and entanglement." [Reviews of Modern Physics 91.2 (2019): 021001](https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.91.021001).
[7] Medenjak, Marko, Berislav Buฤa, and Dieter Jaksch. "Isolated Heisenberg magnet as a quantum time crystal." [Physical Review B 102.4 (2020): 041117](https://journals.aps.org/prb/abstract/10.1103/PhysRevB.102.041117).
[8] Wallman, Joel J., and Joseph Emerson. "Noise tailoring for scalable quantum computation via randomized compiling." [Physical Review A 94.5 (2016): 052325](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.94.052325).
[9] Childs, Andrew M., Aaron Ostrander, and Yuan Su. "Faster quantum simulation by randomization." [Quantum 3 (2019): 182](https://quantum-journal.org/papers/q-2019-09-02-182/).
## Figure 2 - Structural Laplacian Eigenmode Examples
#### Structural Connectome Complex Laplacian Eigenmodes:
---
Here we define a brain's structural connectivity as a graph with connectivity matrix $C_{i,j}$ and a white-matter fiber tract distance matrix $D_{i,j}$. To incorporate the inter-regional distance information into our connectivity network, we extend the symmetric degree-normalized Laplacian to a Laplacian matrix with an imaginary component. In Fourier space, the delays caused by physical fiber tract distances and a finite transmission velocity $\nu$ become frequency-dependent phases.
Therefore, we define a "complex connectivity matrix" at any given angular frequency $\omega = 2\pi f$ as
$$C^{*}(\omega) = c_{ij} e^{-j \omega \tau_{i,j}^{\nu}}$$
$$C^{*}(\omega) = c_{ij}e^{-j D_{i,j} k}$$
Where $k = \omega / \nu$ is the wave number. We can normalize $C^{*}(\omega)$ by the degree of each node such that $C(\omega) = I - \mathrm{diag}\left(\frac{1}{\mathrm{degree}}\right) C^{*}(\omega)$. Then we compute the Laplacian of this complex structural connectivity matrix:
$$ L = I - \alpha C(\omega)$$
Where $\alpha$ is the coupling strength parameter. We then acquire the structural eigenmodes via the eigendecomposition of $L$, sketched below and computed in practice with the `spectrome` package.
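As a rough illustration (not part of the original analysis), the NumPy sketch below reads the formulas above literally; `conn` and `dist` are placeholder names for the connectivity ($c_{ij}$) and fiber-distance ($D_{i,j}$) matrices, and the actual `decompose_complex_laplacian` implementation used below may differ in its normalization and sorting details.
```
import numpy as np

def complex_laplacian_eigenmodes(conn, dist, alpha, k, num_ev):
    """Sketch of the complex Laplacian eigendecomposition described above."""
    n = conn.shape[0]
    # frequency-dependent complex connectivity: c_ij * exp(-j * D_ij * k)
    C_star = conn * np.exp(-1j * dist * k)
    # degree-normalized complex connectivity: C(omega) = I - diag(1/degree) C*(omega)
    degree = np.abs(C_star).sum(axis=1)  # assumes no isolated regions (degree > 0)
    C_omega = np.eye(n) - np.diag(1.0 / degree) @ C_star
    # complex Laplacian: L = I - alpha * C(omega)
    L = np.eye(n) - alpha * C_omega
    # eigendecomposition; sort modes by eigenvalue magnitude and keep their absolute value
    eigvals, eigvecs = np.linalg.eig(L)
    order = np.argsort(np.abs(eigvals))
    return eigvals[order][:num_ev], np.abs(eigvecs[:, order])[:, :num_ev]
```
The helper functions below wrap the analogous steps of the `spectrome` package around the HCP connectome and prepare the resulting eigenmodes for PySurfer rendering: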
```
from ipywidgets import interactive, widgets, fixed
from surfer import Brain as surface
from matplotlib.colors import ListedColormap
from sklearn.preprocessing import minmax_scale
import os
import nibabel as nib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# spectrome imports
from spectrome.brain import Brain
from spectrome.utils import functions, path
#Function to get eigenmodes ready for Pysurfer visualizations:
def eigmode2plot(labels, slider_alpha, slider_k, em, lap_type = 'complex'):
## spectrome brain object with HCP connectome:
hcp_dir = '../data'
brain = Brain.Brain()
brain.add_connectome(hcp_dir)
brain.reorder_connectome(brain.connectome, brain.distance_matrix)
brain.bi_symmetric_c()
brain.reduce_extreme_dir()
## compute eigenmodes:
if lap_type == 'complex':
brain.decompose_complex_laplacian(alpha=slider_alpha, k=slider_k, num_ev=86)
## prep the eigenmodes for visualization
lh_cort_eigs = minmax_scale(brain.norm_eigenmodes[0:34, em - 1]) # select the first for display
rh_cort_eigs = minmax_scale(brain.norm_eigenmodes[34:68,em - 1]) # hemisphere selection
elif lap_type == 'real':
brain.decompose_regular_laplacian(alpha = slider_alpha, num_ev = 86, vis = False)
## prep the eigenmodes for visualization
lh_cort_eigs = minmax_scale(brain.norm_eigenmodes[0:34, em - 1]) # select the first for display
rh_cort_eigs = minmax_scale(brain.norm_eigenmodes[34:68,em - 1]) # hemisphere selection
## pad our eigenmodes for pysurfer requirements:
lh_cort_padded = np.insert(lh_cort_eigs, [0,3], [0,0])
rh_cort_padded = np.insert(rh_cort_eigs, [0,3], [0,0])
lh_cort = lh_cort_padded[labels]
rh_cort = rh_cort_padded[labels]
return lh_cort, rh_cort, brain.norm_eigenmodes, em
def eigmode_widget(labels, surf_brain, slider_alpha, slider_k, em, hemi):
    ## turbo colormap data omitted here (the commented-out RGB table was corrupted in the source); plt.cm.autumn_r is used for rendering instead
## spectrome brain object with HCP connectome:
hcp_dir = '../data'
brain = Brain.Brain()
brain.add_connectome(hcp_dir)
brain.reorder_connectome(brain.connectome, brain.distance_matrix)
brain.bi_symmetric_c()
brain.reduce_extreme_dir()
## compute eigenmodes:
brain.decompose_complex_laplacian(alpha=slider_alpha, k=slider_k, num_ev=86)
## prep the eigenmodes for visualization
lh_cort_eigs = minmax_scale(brain.norm_eigenmodes[0:34, em - 1]) # select the first for display
rh_cort_eigs = minmax_scale(brain.norm_eigenmodes[34:68, em - 1]) # hemisphere selection
## pad our eigenmodes for pysurfer requirements:
lh_cort_padded = np.insert(lh_cort_eigs, [0,3], [0,0])
rh_cort_padded = np.insert(rh_cort_eigs, [0,3], [0,0])
lh_cort = lh_cort_padded[labels]
rh_cort = rh_cort_padded[labels]
color_fmin = 0.50+lh_cort.min()
color_fmax = 0.95*lh_cort.max()
color_fmid = 0.65*lh_cort.max()
if hemi == 'Left':
surf_brain = surface(subject_id, "lh", surf, background = "white", alpha = 1, title = "Eigen Modes of Complex LaPlacian")
surf_brain.add_data(lh_cort, hemi = 'lh', thresh = 0.20, colormap = plt.cm.autumn_r, remove_existing = True)
surf_brain.scale_data_colormap(color_fmin, color_fmid, color_fmax, transparent = False)
elif hemi == 'Right':
surf_brain = surface(subject_id, "rh", surf, background = "white", alpha = 1, title = "Eigen Modes of Complex LaPlacian")
surf_brain.add_data(rh_cort, hemi = "rh", thresh = 0.20, colormap = plt.cm.autumn_r, remove_existing = True)
surf_brain.scale_data_colormap(color_fmin, color_fmid, color_fmax, transparent = False)
elif hemi == 'Both':
surf_brain = surface(subject_id, "both", surf, background = "white", alpha = 1, title = "Eigen Modes of Complex LaPlacian")
surf_brain.add_data(lh_cort, hemi = 'lh', thresh = 0.20, colormap = plt.cm.autumn_r, remove_existing = True)
surf_brain.add_data(rh_cort, hemi = "rh", thresh = 0.20, colormap = plt.cm.autumn_r, remove_existing = False)
surf_brain.scale_data_colormap(color_fmin, color_fmid, color_fmax, transparent = False)
return lh_cort, rh_cort, brain.norm_eigenmodes
```
Set up the functional network dataset and the parameters to plot (coupling strength and wave number):
```
## parameters where we hold alpha constant and look at the k behavior
param1 = np.array([1, 0.1])
param2 = np.array([1, 30])
## Load Yeo 2017's canonical network maps:
fc_dk = np.load('../data/com_dk.npy', allow_pickle = True).item()
fc_dk_normalized = pd.read_csv('../data/DK_dictionary_normalized.csv').set_index('Unnamed: 0')
```
Initialize `pysurfer`:
```
%gui qt
# set up Pysurfer variables
subject_id = "fsaverage"
hemi = ["lh","rh"]
surf = "white"
"""
Read in the automatic parcellation of sulci and gyri.
"""
hemi_side = "lh"
aparc_file = os.path.join(os.environ["SUBJECTS_DIR"],
subject_id, "label",
hemi_side + ".aparc.annot")
labels, ctab, names = nib.freesurfer.read_annot(aparc_file)
```
### Look at the real Laplacian first:
```
lh_cort, rh_cort, eigs, em = eigmode2plot(labels, slider_alpha = param2[0], slider_k = param2[1], em = 3, lap_type = 'real')
# colormap:
color_fmin = 0.50+lh_cort.min()
color_fmax = 0.95*lh_cort.max()
color_fmid = 0.65*lh_cort.max()
reg_surf = surface(subject_id, 'lh', surf, background = 'white', alpha = 1, title = 'Eigenmodes of Regular Laplacian')
reg_surf.add_data(lh_cort, hemi = 'lh', thresh = 0.4, colormap = plt.cm.autumn_r, remove_existing = True)
reg_surf.scale_data_colormap(color_fmin, color_fmid, color_fmax, transparent = False)
## save figures?
reg_surf.show_view('lat')
reg_surf.save_image('%s_lat_%1d.svg' % ('reg', em))
reg_surf.show_view('med')
reg_surf.save_image('%s_med_%1d.svg' % ('reg', em))
reg_surf = surface(subject_id, 'both', surf, background = 'white', alpha = 1, title = 'Eigenmodes of Regular Laplacian')
reg_surf.add_data(lh_cort, hemi = 'lh', thresh = 0.4, colormap = plt.cm.autumn_r, remove_existing = True)
reg_surf.add_data(rh_cort, hemi = 'rh', thresh = 0.4, colormap = plt.cm.autumn_r, remove_existing = True)
reg_surf.scale_data_colormap(color_fmin, color_fmid, color_fmax, transparent = False)
reg_surf.show_view('dor')
reg_surf.save_image('%s_dor_%1d.svg' % ('reg', em))
```
Visualize eigenmodes with the first set of parameters:
```
## turbo colormap data omitted here (the commented-out RGB table was corrupted in the source); plt.cm.autumn_r is used for rendering instead
#color_fmin, color_fmid, color_fmax = 0.1, 0.5, 0.95
# which eigenmode to display?
enumber = 3
lh_cort, rh_cort, eigs, em = eigmode2plot(labels, slider_alpha = param1[0], slider_k = param1[1], em = enumber)
# colormap:
color_fmin = 0.50+lh_cort.min()
color_fmax = 0.99*lh_cort.max()
color_fmid = 0.65*lh_cort.max()
## initialize pysurfer rendering:
sb = surface(subject_id, 'lh', surf, background = "white", cortex = 'classic', alpha = 1, title = "Eigen Modes of Complex LaPlacian")
sb.add_data(lh_cort, hemi = 'lh', thresh = 0.30, colormap = plt.cm.autumn_r, remove_existing = True)
sb.scale_data_colormap(color_fmin, color_fmid, color_fmax, transparent = False)
## show lateral and medial views of left hemisphere and save figures
sb.show_view('lat')
sb.save_image('%s_lat_%1d.svg' % ('par1', em))
sb.show_view('med')
sb.save_image('%s_med_%1d.svg' % ('par1', em))
sb = surface(subject_id, "both", surf, background = "white", alpha = 1, title = "Eigen Modes of Complex LaPlacian")
sb.add_data(rh_cort, hemi = 'rh', thresh = 0.30, colormap = plt.cm.autumn_r, remove_existing = True)
sb.add_data(lh_cort, hemi = 'lh', thresh = 0.30, colormap = plt.cm.autumn_r, remove_existing = True)
sb.scale_data_colormap(color_fmin, color_fmid, color_fmax, transparent = False)
## save figures?
sb.show_view('dor')
sb.save_image('%s_dor_%1d.svg' % ('par1', em))
```
Repeat for second set of parameters:
```
enumber = 3
lh_cort, rh_cort, eigs, em = eigmode2plot(labels, slider_alpha = param2[0], slider_k = param2[1], em = enumber)
# colormap:
color_fmin = 0.35+lh_cort.min()
color_fmax = 0.99*lh_cort.max()
color_fmid = 0.60*lh_cort.max()
sb = surface(subject_id, 'lh', surf, background = "white", alpha = 1, title = "Eigen Modes of Complex LaPlacian")
sb.add_data(lh_cort, hemi = 'lh', thresh = 0.25, colormap = plt.cm.autumn_r, remove_existing = True)
sb.scale_data_colormap(color_fmin, color_fmid, 1, transparent = False)
## save figures?
sb.show_view('lat')
sb.save_image('%s_lat_%1d.svg' % ('par2', em))
sb.show_view('med')
sb.save_image('%s_med_%1d.svg' % ('par2', em))
sb = surface(subject_id, "both", surf, background = "white", alpha = 1, title = "Eigen Modes of Complex LaPlacian")
sb.add_data(lh_cort, hemi = 'lh', thresh = 0.25, colormap = plt.cm.autumn_r, remove_existing = True)
sb.add_data(rh_cort, hemi = 'rh', thresh = 0.25, colormap = plt.cm.autumn_r, remove_existing = True)
sb.scale_data_colormap(color_fmin, color_fmid, color_fmax, transparent = False)
## save figures?
sb.show_view('dor')
sb.save_image('%s_dor_%1d.svg' % ('par2', em))
```
Try another set of parameters with higher $\alpha$:
```
param3 = [5.0, 30]
enumber = 3
lh_cort, rh_cort, eigs, em = eigmode2plot(labels, slider_alpha = param3[0], slider_k = param3[1], em = enumber)
# colormap:
color_fmin = 0.35+lh_cort.min()
color_fmax = 0.99*lh_cort.max()
color_fmid = 0.68*lh_cort.max()
sb = surface(subject_id, 'lh', surf, background = "white", alpha = 1, title = "Eigen Modes of Complex LaPlacian")
sb.add_data(lh_cort, hemi = 'lh', thresh = 0.35, colormap = plt.cm.autumn_r, remove_existing = True)
sb.scale_data_colormap(color_fmin, color_fmid, 1, transparent = False)
## save figures?
sb.show_view('lat')
sb.save_image('%s_lat_%1d.svg' % ('par3', em))
sb.show_view('med')
sb.save_image('%s_med_%1d.svg' % ('par3', em))
sb = surface(subject_id, "both", surf, background = "white", alpha = 1, title = "Eigen Modes of Complex LaPlacian")
sb.add_data(lh_cort, hemi = 'lh', thresh = 0.35, colormap = plt.cm.autumn_r, remove_existing = True)
sb.add_data(rh_cort, hemi = 'rh', thresh = 0.35, colormap = plt.cm.autumn_r, remove_existing = True)
sb.scale_data_colormap(color_fmin, color_fmid, color_fmax, transparent = False)
## save figures?
sb.show_view('dor')
sb.save_image('%s_dor_%1d.svg' % ('par3', em))
```
### Initiate ipywidgets for parameter exploration.
```
#%matplotlib inline
interactive(eigmode_widget, labels = fixed(labels), surf_brain = fixed(sb),
slider_alpha = widgets.FloatSlider(min=0.2,max=5,step=0.1,value=2.5, description = 'Coupling Strength',continuous_update=False),
slider_k = widgets.IntSlider(min = 0, max = 50, step = 1, value = 2, description = 'Wave Number',continuous_update=False),
em = widgets.IntSlider(min = 1, max = 10, step = 1, value = 3, description = 'Eigenmode Number',continuous_update=False),
hemi = widgets.RadioButtons(options = ['Left', 'Right', 'Both'], value = 'Left', description = 'Select hemisphere'))
```
|
github_jupyter
|
from ipywidgets import interactive, widgets, fixed
from surfer import Brain as surface
from matplotlib.colors import ListedColormap
from sklearn.preprocessing import minmax_scale
import os
import nibabel as nib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# spectrome imports
from spectrome.brain import Brain
from spectrome.utils import functions, path
#Function to get eigenmodes ready for Pysurfer visualizations:
def eigmode2plot(labels, slider_alpha, slider_k, em, lap_type = 'complex'):
## spectrome brain object with HCP connectome:
hcp_dir = '../data'
brain = Brain.Brain()
brain.add_connectome(hcp_dir)
brain.reorder_connectome(brain.connectome, brain.distance_matrix)
brain.bi_symmetric_c()
brain.reduce_extreme_dir()
## compute eigenmodes:
if lap_type == 'complex':
brain.decompose_complex_laplacian(alpha=slider_alpha, k=slider_k, num_ev=86)
## prep the eigenmodes for visualization
lh_cort_eigs = minmax_scale(brain.norm_eigenmodes[0:34, em - 1]) # select the first for display
rh_cort_eigs = minmax_scale(brain.norm_eigenmodes[34:68,em - 1]) # hemisphere selection
elif lap_type == 'real':
brain.decompose_regular_laplacian(alpha = slider_alpha, num_ev = 86, vis = False)
## prep the eigenmodes for visualization
lh_cort_eigs = minmax_scale(brain.norm_eigenmodes[0:34, em - 1]) # select the first for display
rh_cort_eigs = minmax_scale(brain.norm_eigenmodes[34:68,em - 1]) # hemisphere selection
## pad our eigenmodes for pysurfer requirements:
lh_cort_padded = np.insert(lh_cort_eigs, [0,3], [0,0])
rh_cort_padded = np.insert(rh_cort_eigs, [0,3], [0,0])
lh_cort = lh_cort_padded[labels]
rh_cort = rh_cort_padded[labels]
return lh_cort, rh_cort, brain.norm_eigenmodes, em
def eigmode_widget(labels, surf_brain, slider_alpha, slider_k, em, hemi):
## define turbo colormap
#turbo_colormap_data = [[0.18995,0.07176,0.23217],[0.19483,0.08339,0.26149],[0.19956,0.09498,0.29024],[0.20415,0.10652,0.31844],[0.20860,0.11802,0.34607],[0.21291,0.12947,0.37314],[0.21708,0.14087,0.39964],[0.22111,0.15223,0.42558],[0.22500,0.16354,0.45096],[0.22875,0.17481,0.47578],[0.23236,0.18603,0.50004],[0.23582,0.19720,0.52373],[0.23915,0.20833,0.54686],[0.24234,0.21941,0.56942],[0.24539,0.23044,0.59142],[0.24830,0.24143,0.61286],[0.25107,0.25237,0.63374],[0.25369,0.26327,0.65406],[0.25618,0.27412,0.67381],[0.25853,0.28492,0.69300],[0.26074,0.29568,0.71162],[0.26280,0.30639,0.72968],[0.26473,0.31706,0.74718],[0.26652,0.32768,0.76412],[0.26816,0.33825,0.78050],[0.26967,0.34878,0.79631],[0.27103,0.35926,0.81156],[0.27226,0.36970,0.82624],[0.27334,0.38008,0.84037],[0.27429,0.39043,0.85393],[0.27509,0.40072,0.86692],[0.27576,0.41097,0.87936],[0.27628,0.42118,0.89123],[0.27667,0.43134,0.90254],[0.27691,0.44145,0.91328],[0.27701,0.45152,0.92347],[0.27698,0.46153,0.93309],[0.27680,0.47151,0.94214],[0.27648,0.48144,0.95064],[0.27603,0.49132,0.95857],[0.27543,0.50115,0.96594],[0.27469,0.51094,0.97275],[0.27381,0.52069,0.97899],[0.27273,0.53040,0.98461],[0.27106,0.54015,0.98930],[0.26878,0.54995,0.99303],[0.26592,0.55979,0.99583],[0.26252,0.56967,0.99773],[0.25862,0.57958,0.99876],[0.25425,0.58950,0.99896],[0.24946,0.59943,0.99835],[0.24427,0.60937,0.99697],[0.23874,0.61931,0.99485],[0.23288,0.62923,0.99202],[0.22676,0.63913,0.98851],[0.22039,0.64901,0.98436],[0.21382,0.65886,0.97959],[0.20708,0.66866,0.97423],[0.20021,0.67842,0.96833],[0.19326,0.68812,0.96190],[0.18625,0.69775,0.95498],[0.17923,0.70732,0.94761],[0.17223,0.71680,0.93981],[0.16529,0.72620,0.93161],[0.15844,0.73551,0.92305],[0.15173,0.74472,0.91416],[0.14519,0.75381,0.90496],[0.13886,0.76279,0.89550],[0.13278,0.77165,0.88580],[0.12698,0.78037,0.87590],[0.12151,0.78896,0.86581],[0.11639,0.79740,0.85559],[0.11167,0.80569,0.84525],[0.10738,0.81381,0.83484],[0.10357,0.82177,0.82437],[0.10026,0.82955,0.81389],[0.09750,0.83714,0.80342],[0.09532,0.84455,0.79299],[0.09377,0.85175,0.78264],[0.09287,0.85875,0.77240],[0.09267,0.86554,0.76230],[0.09320,0.87211,0.75237],[0.09451,0.87844,0.74265],[0.09662,0.88454,0.73316],[0.09958,0.89040,0.72393],[0.10342,0.89600,0.71500],[0.10815,0.90142,0.70599],[0.11374,0.90673,0.69651],[0.12014,0.91193,0.68660],[0.12733,0.91701,0.67627],[0.13526,0.92197,0.66556],[0.14391,0.92680,0.65448],[0.15323,0.93151,0.64308],[0.16319,0.93609,0.63137],[0.17377,0.94053,0.61938],[0.18491,0.94484,0.60713],[0.19659,0.94901,0.59466],[0.20877,0.95304,0.58199],[0.22142,0.95692,0.56914],[0.23449,0.96065,0.55614],[0.24797,0.96423,0.54303],[0.26180,0.96765,0.52981],[0.27597,0.97092,0.51653],[0.29042,0.97403,0.50321],[0.30513,0.97697,0.48987],[0.32006,0.97974,0.47654],[0.33517,0.98234,0.46325],[0.35043,0.98477,0.45002],[0.36581,0.98702,0.43688],[0.38127,0.98909,0.42386],[0.39678,0.99098,0.41098],[0.41229,0.99268,0.39826],[0.42778,0.99419,0.38575],[0.44321,0.99551,0.37345],[0.45854,0.99663,0.36140],[0.47375,0.99755,0.34963],[0.48879,0.99828,0.33816],[0.50362,0.99879,0.32701],[0.51822,0.99910,0.31622],[0.53255,0.99919,0.30581],[0.54658,0.99907,0.29581],[0.56026,0.99873,0.28623],[0.57357,0.99817,0.27712],[0.58646,0.99739,0.26849],[0.59891,0.99638,0.26038],[0.61088,0.99514,0.25280],[0.62233,0.99366,0.24579],[0.63323,0.99195,0.23937],[0.64362,0.98999,0.23356],[0.65394,0.98775,0.22835],[0.66428,0.98524,0.22370],[0.67462,0.98246,0.21960],[0.68494,0.97941,0.21602],[0.69525,0.97610,0.21294],[0.70553,0.97255,0.21032],[0.71577,0.96875,0.20
815],[0.72596,0.96470,0.20640],[0.73610,0.96043,0.20504],[0.74617,0.95593,0.20406],[0.75617,0.95121,0.20343],[0.76608,0.94627,0.20311],[0.77591,0.94113,0.20310],[0.78563,0.93579,0.20336],[0.79524,0.93025,0.20386],[0.80473,0.92452,0.20459],[0.81410,0.91861,0.20552],[0.82333,0.91253,0.20663],[0.83241,0.90627,0.20788],[0.84133,0.89986,0.20926],[0.85010,0.89328,0.21074],[0.85868,0.88655,0.21230],[0.86709,0.87968,0.21391],[0.87530,0.87267,0.21555],[0.88331,0.86553,0.21719],[0.89112,0.85826,0.21880],[0.89870,0.85087,0.22038],[0.90605,0.84337,0.22188],[0.91317,0.83576,0.22328],[0.92004,0.82806,0.22456],[0.92666,0.82025,0.22570],[0.93301,0.81236,0.22667],[0.93909,0.80439,0.22744],[0.94489,0.79634,0.22800],[0.95039,0.78823,0.22831],[0.95560,0.78005,0.22836],[0.96049,0.77181,0.22811],[0.96507,0.76352,0.22754],[0.96931,0.75519,0.22663],[0.97323,0.74682,0.22536],[0.97679,0.73842,0.22369],[0.98000,0.73000,0.22161],[0.98289,0.72140,0.21918],[0.98549,0.71250,0.21650],[0.98781,0.70330,0.21358],[0.98986,0.69382,0.21043],[0.99163,0.68408,0.20706],[0.99314,0.67408,0.20348],[0.99438,0.66386,0.19971],[0.99535,0.65341,0.19577],[0.99607,0.64277,0.19165],[0.99654,0.63193,0.18738],[0.99675,0.62093,0.18297],[0.99672,0.60977,0.17842],[0.99644,0.59846,0.17376],[0.99593,0.58703,0.16899],[0.99517,0.57549,0.16412],[0.99419,0.56386,0.15918],[0.99297,0.55214,0.15417],[0.99153,0.54036,0.14910],[0.98987,0.52854,0.14398],[0.98799,0.51667,0.13883],[0.98590,0.50479,0.13367],[0.98360,0.49291,0.12849],[0.98108,0.48104,0.12332],[0.97837,0.46920,0.11817],[0.97545,0.45740,0.11305],[0.97234,0.44565,0.10797],[0.96904,0.43399,0.10294],[0.96555,0.42241,0.09798],[0.96187,0.41093,0.09310],[0.95801,0.39958,0.08831],[0.95398,0.38836,0.08362],[0.94977,0.37729,0.07905],[0.94538,0.36638,0.07461],[0.94084,0.35566,0.07031],[0.93612,0.34513,0.06616],[0.93125,0.33482,0.06218],[0.92623,0.32473,0.05837],[0.92105,0.31489,0.05475],[0.91572,0.30530,0.05134],[0.91024,0.29599,0.04814],[0.90463,0.28696,0.04516],[0.89888,0.27824,0.04243],[0.89298,0.26981,0.03993],[0.88691,0.26152,0.03753],[0.88066,0.25334,0.03521],[0.87422,0.24526,0.03297],[0.86760,0.23730,0.03082],[0.86079,0.22945,0.02875],[0.85380,0.22170,0.02677],[0.84662,0.21407,0.02487],[0.83926,0.20654,0.02305],[0.83172,0.19912,0.02131],[0.82399,0.19182,0.01966],[0.81608,0.18462,0.01809],[0.80799,0.17753,0.01660],[0.79971,0.17055,0.01520],[0.79125,0.16368,0.01387],[0.78260,0.15693,0.01264],[0.77377,0.15028,0.01148],[0.76476,0.14374,0.01041],[0.75556,0.13731,0.00942],[0.74617,0.13098,0.00851],[0.73661,0.12477,0.00769],[0.72686,0.11867,0.00695],[0.71692,0.11268,0.00629],[0.70680,0.10680,0.00571],[0.69650,0.10102,0.00522],[0.68602,0.09536,0.00481],[0.67535,0.08980,0.00449],[0.66449,0.08436,0.00424],[0.65345,0.07902,0.00408],[0.64223,0.07380,0.00401],[0.63082,0.06868,0.00401],[0.61923,0.06367,0.00410],[0.60746,0.05878,0.00427],[0.59550,0.05399,0.00453],[0.58336,0.04931,0.00486],[0.57103,0.04474,0.00529],[0.55852,0.04028,0.00579],[0.54583,0.03593,0.00638],[0.53295,0.03169,0.00705],[0.51989,0.02756,0.00780],[0.50664,0.02354,0.00863],[0.49321,0.01963,0.00955],[0.47960,0.01583,0.01055]]
#turbo = ListedColormap(turbo_colormap_data)
## spectrome brain object with HCP connectome:
hcp_dir = '../data'
brain = Brain.Brain()
brain.add_connectome(hcp_dir)
brain.reorder_connectome(brain.connectome, brain.distance_matrix)
brain.bi_symmetric_c()
brain.reduce_extreme_dir()
## compute eigenmodes:
brain.decompose_complex_laplacian(alpha=slider_alpha, k=slider_k, num_ev=86)
## prep the eigenmodes for visualization
lh_cort_eigs = minmax_scale(brain.norm_eigenmodes[0:34, em - 1]) # select the first for display
rh_cort_eigs = minmax_scale(brain.norm_eigenmodes[34:68, em - 1]) # hemisphere selection
## pad our eigenmodes for pysurfer requirements:
lh_cort_padded = np.insert(lh_cort_eigs, [0,3], [0,0])
rh_cort_padded = np.insert(rh_cort_eigs, [0,3], [0,0])
lh_cort = lh_cort_padded[labels]
rh_cort = rh_cort_padded[labels]
color_fmin = 0.50+lh_cort.min()
color_fmax = 0.95*lh_cort.max()
color_fmid = 0.65*lh_cort.max()
if hemi == 'Left':
surf_brain = surface(subject_id, "lh", surf, background = "white", alpha = 1, title = "Eigen Modes of Complex LaPlacian")
surf_brain.add_data(lh_cort, hemi = 'lh', thresh = 0.20, colormap = plt.cm.autumn_r, remove_existing = True)
surf_brain.scale_data_colormap(color_fmin, color_fmid, color_fmax, transparent = False)
elif hemi == 'Right':
surf_brain = surface(subject_id, "rh", surf, background = "white", alpha = 1, title = "Eigen Modes of Complex LaPlacian")
surf_brain.add_data(rh_cort, hemi = "rh", thresh = 0.20, colormap = plt.cm.autumn_r, remove_existing = True)
surf_brain.scale_data_colormap(color_fmin, color_fmid, color_fmax, transparent = False)
elif hemi == 'Both':
surf_brain = surface(subject_id, "both", surf, background = "white", alpha = 1, title = "Eigen Modes of Complex LaPlacian")
surf_brain.add_data(lh_cort, hemi = 'lh', thresh = 0.20, colormap = plt.cm.autumn_r, remove_existing = True)
surf_brain.add_data(rh_cort, hemi = "rh", thresh = 0.20, colormap = plt.cm.autumn_r, remove_existing = False)
surf_brain.scale_data_colormap(color_fmin, color_fmid, color_fmax, transparent = False)
return lh_cort, rh_cort, brain.norm_eigenmodes
## parameters where we hold alpha constatnt and look for k behavior
param1 = np.array([1, 0.1])
param2 = np.array([1, 30])
## Load Yeo 2017's canonical network maps:
fc_dk = np.load('../data/com_dk.npy', allow_pickle = True).item()
fc_dk_normalized = pd.read_csv('../data/DK_dictionary_normalized.csv').set_index('Unnamed: 0')
%gui qt
# set up Pysurfer variables
subject_id = "fsaverage"
hemi = ["lh","rh"]
surf = "white"
"""
Read in the automatic parcellation of sulci and gyri.
"""
hemi_side = "lh"
aparc_file = os.path.join(os.environ["SUBJECTS_DIR"],
subject_id, "label",
hemi_side + ".aparc.annot")
labels, ctab, names = nib.freesurfer.read_annot(aparc_file)
lh_cort, rh_cort, eigs, em = eigmode2plot(labels, slider_alpha = param2[0], slider_k = param2[1], em = 3, lap_type = 'real')
# colormap:
color_fmin = 0.50+lh_cort.min()
color_fmax = 0.95*lh_cort.max()
color_fmid = 0.65*lh_cort.max()
reg_surf = surface(subject_id, 'lh', surf, background = 'white', alpha = 1, title = 'Eigenmodes of Regular Laplacian')
reg_surf.add_data(lh_cort, hemi = 'lh', thresh = 0.4, colormap = plt.cm.autumn_r, remove_existing = True)
reg_surf.scale_data_colormap(color_fmin, color_fmid, color_fmax, transparent = False)
## save figures?
reg_surf.show_view('lat')
reg_surf.save_image('%s_lat_%1d.svg' % ('reg', em))
reg_surf.show_view('med')
reg_surf.save_image('%s_med_%1d.svg' % ('reg', em))
reg_surf = surface(subject_id, 'both', surf, background = 'white', alpha = 1, title = 'Eigenmodes of Regular Laplacian')
reg_surf.add_data(lh_cort, hemi = 'lh', thresh = 0.4, colormap = plt.cm.autumn_r, remove_existing = True)
reg_surf.add_data(rh_cort, hemi = 'rh', thresh = 0.4, colormap = plt.cm.autumn_r, remove_existing = True)
reg_surf.scale_data_colormap(color_fmin, color_fmid, color_fmax, transparent = False)
reg_surf.show_view('dor')
reg_surf.save_image('%s_dor_%1d.svg' % ('reg', em))
## define turbo colormap
#turbo_colormap_data = [[0.18995,0.07176,0.23217],[0.19483,0.08339,0.26149],[0.19956,0.09498,0.29024],[0.20415,0.10652,0.31844],[0.20860,0.11802,0.34607],[0.21291,0.12947,0.37314],[0.21708,0.14087,0.39964],[0.22111,0.15223,0.42558],[0.22500,0.16354,0.45096],[0.22875,0.17481,0.47578],[0.23236,0.18603,0.50004],[0.23582,0.19720,0.52373],[0.23915,0.20833,0.54686],[0.24234,0.21941,0.56942],[0.24539,0.23044,0.59142],[0.24830,0.24143,0.61286],[0.25107,0.25237,0.63374],[0.25369,0.26327,0.65406],[0.25618,0.27412,0.67381],[0.25853,0.28492,0.69300],[0.26074,0.29568,0.71162],[0.26280,0.30639,0.72968],[0.26473,0.31706,0.74718],[0.26652,0.32768,0.76412],[0.26816,0.33825,0.78050],[0.26967,0.34878,0.79631],[0.27103,0.35926,0.81156],[0.27226,0.36970,0.82624],[0.27334,0.38008,0.84037],[0.27429,0.39043,0.85393],[0.27509,0.40072,0.86692],[0.27576,0.41097,0.87936],[0.27628,0.42118,0.89123],[0.27667,0.43134,0.90254],[0.27691,0.44145,0.91328],[0.27701,0.45152,0.92347],[0.27698,0.46153,0.93309],[0.27680,0.47151,0.94214],[0.27648,0.48144,0.95064],[0.27603,0.49132,0.95857],[0.27543,0.50115,0.96594],[0.27469,0.51094,0.97275],[0.27381,0.52069,0.97899],[0.27273,0.53040,0.98461],[0.27106,0.54015,0.98930],[0.26878,0.54995,0.99303],[0.26592,0.55979,0.99583],[0.26252,0.56967,0.99773],[0.25862,0.57958,0.99876],[0.25425,0.58950,0.99896],[0.24946,0.59943,0.99835],[0.24427,0.60937,0.99697],[0.23874,0.61931,0.99485],[0.23288,0.62923,0.99202],[0.22676,0.63913,0.98851],[0.22039,0.64901,0.98436],[0.21382,0.65886,0.97959],[0.20708,0.66866,0.97423],[0.20021,0.67842,0.96833],[0.19326,0.68812,0.96190],[0.18625,0.69775,0.95498],[0.17923,0.70732,0.94761],[0.17223,0.71680,0.93981],[0.16529,0.72620,0.93161],[0.15844,0.73551,0.92305],[0.15173,0.74472,0.91416],[0.14519,0.75381,0.90496],[0.13886,0.76279,0.89550],[0.13278,0.77165,0.88580],[0.12698,0.78037,0.87590],[0.12151,0.78896,0.86581],[0.11639,0.79740,0.85559],[0.11167,0.80569,0.84525],[0.10738,0.81381,0.83484],[0.10357,0.82177,0.82437],[0.10026,0.82955,0.81389],[0.09750,0.83714,0.80342],[0.09532,0.84455,0.79299],[0.09377,0.85175,0.78264],[0.09287,0.85875,0.77240],[0.09267,0.86554,0.76230],[0.09320,0.87211,0.75237],[0.09451,0.87844,0.74265],[0.09662,0.88454,0.73316],[0.09958,0.89040,0.72393],[0.10342,0.89600,0.71500],[0.10815,0.90142,0.70599],[0.11374,0.90673,0.69651],[0.12014,0.91193,0.68660],[0.12733,0.91701,0.67627],[0.13526,0.92197,0.66556],[0.14391,0.92680,0.65448],[0.15323,0.93151,0.64308],[0.16319,0.93609,0.63137],[0.17377,0.94053,0.61938],[0.18491,0.94484,0.60713],[0.19659,0.94901,0.59466],[0.20877,0.95304,0.58199],[0.22142,0.95692,0.56914],[0.23449,0.96065,0.55614],[0.24797,0.96423,0.54303],[0.26180,0.96765,0.52981],[0.27597,0.97092,0.51653],[0.29042,0.97403,0.50321],[0.30513,0.97697,0.48987],[0.32006,0.97974,0.47654],[0.33517,0.98234,0.46325],[0.35043,0.98477,0.45002],[0.36581,0.98702,0.43688],[0.38127,0.98909,0.42386],[0.39678,0.99098,0.41098],[0.41229,0.99268,0.39826],[0.42778,0.99419,0.38575],[0.44321,0.99551,0.37345],[0.45854,0.99663,0.36140],[0.47375,0.99755,0.34963],[0.48879,0.99828,0.33816],[0.50362,0.99879,0.32701],[0.51822,0.99910,0.31622],[0.53255,0.99919,0.30581],[0.54658,0.99907,0.29581],[0.56026,0.99873,0.28623],[0.57357,0.99817,0.27712],[0.58646,0.99739,0.26849],[0.59891,0.99638,0.26038],[0.61088,0.99514,0.25280],[0.62233,0.99366,0.24579],[0.63323,0.99195,0.23937],[0.64362,0.98999,0.23356],[0.65394,0.98775,0.22835],[0.66428,0.98524,0.22370],[0.67462,0.98246,0.21960],[0.68494,0.97941,0.21602],[0.69525,0.97610,0.21294],[0.70553,0.97255,0.21032],[0.71577,0.96875,0.20
815],[0.72596,0.96470,0.20640],[0.73610,0.96043,0.20504],[0.74617,0.95593,0.20406],[0.75617,0.95121,0.20343],[0.76608,0.94627,0.20311],[0.77591,0.94113,0.20310],[0.78563,0.93579,0.20336],[0.79524,0.93025,0.20386],[0.80473,0.92452,0.20459],[0.81410,0.91861,0.20552],[0.82333,0.91253,0.20663],[0.83241,0.90627,0.20788],[0.84133,0.89986,0.20926],[0.85010,0.89328,0.21074],[0.85868,0.88655,0.21230],[0.86709,0.87968,0.21391],[0.87530,0.87267,0.21555],[0.88331,0.86553,0.21719],[0.89112,0.85826,0.21880],[0.89870,0.85087,0.22038],[0.90605,0.84337,0.22188],[0.91317,0.83576,0.22328],[0.92004,0.82806,0.22456],[0.92666,0.82025,0.22570],[0.93301,0.81236,0.22667],[0.93909,0.80439,0.22744],[0.94489,0.79634,0.22800],[0.95039,0.78823,0.22831],[0.95560,0.78005,0.22836],[0.96049,0.77181,0.22811],[0.96507,0.76352,0.22754],[0.96931,0.75519,0.22663],[0.97323,0.74682,0.22536],[0.97679,0.73842,0.22369],[0.98000,0.73000,0.22161],[0.98289,0.72140,0.21918],[0.98549,0.71250,0.21650],[0.98781,0.70330,0.21358],[0.98986,0.69382,0.21043],[0.99163,0.68408,0.20706],[0.99314,0.67408,0.20348],[0.99438,0.66386,0.19971],[0.99535,0.65341,0.19577],[0.99607,0.64277,0.19165],[0.99654,0.63193,0.18738],[0.99675,0.62093,0.18297],[0.99672,0.60977,0.17842],[0.99644,0.59846,0.17376],[0.99593,0.58703,0.16899],[0.99517,0.57549,0.16412],[0.99419,0.56386,0.15918],[0.99297,0.55214,0.15417],[0.99153,0.54036,0.14910],[0.98987,0.52854,0.14398],[0.98799,0.51667,0.13883],[0.98590,0.50479,0.13367],[0.98360,0.49291,0.12849],[0.98108,0.48104,0.12332],[0.97837,0.46920,0.11817],[0.97545,0.45740,0.11305],[0.97234,0.44565,0.10797],[0.96904,0.43399,0.10294],[0.96555,0.42241,0.09798],[0.96187,0.41093,0.09310],[0.95801,0.39958,0.08831],[0.95398,0.38836,0.08362],[0.94977,0.37729,0.07905],[0.94538,0.36638,0.07461],[0.94084,0.35566,0.07031],[0.93612,0.34513,0.06616],[0.93125,0.33482,0.06218],[0.92623,0.32473,0.05837],[0.92105,0.31489,0.05475],[0.91572,0.30530,0.05134],[0.91024,0.29599,0.04814],[0.90463,0.28696,0.04516],[0.89888,0.27824,0.04243],[0.89298,0.26981,0.03993],[0.88691,0.26152,0.03753],[0.88066,0.25334,0.03521],[0.87422,0.24526,0.03297],[0.86760,0.23730,0.03082],[0.86079,0.22945,0.02875],[0.85380,0.22170,0.02677],[0.84662,0.21407,0.02487],[0.83926,0.20654,0.02305],[0.83172,0.19912,0.02131],[0.82399,0.19182,0.01966],[0.81608,0.18462,0.01809],[0.80799,0.17753,0.01660],[0.79971,0.17055,0.01520],[0.79125,0.16368,0.01387],[0.78260,0.15693,0.01264],[0.77377,0.15028,0.01148],[0.76476,0.14374,0.01041],[0.75556,0.13731,0.00942],[0.74617,0.13098,0.00851],[0.73661,0.12477,0.00769],[0.72686,0.11867,0.00695],[0.71692,0.11268,0.00629],[0.70680,0.10680,0.00571],[0.69650,0.10102,0.00522],[0.68602,0.09536,0.00481],[0.67535,0.08980,0.00449],[0.66449,0.08436,0.00424],[0.65345,0.07902,0.00408],[0.64223,0.07380,0.00401],[0.63082,0.06868,0.00401],[0.61923,0.06367,0.00410],[0.60746,0.05878,0.00427],[0.59550,0.05399,0.00453],[0.58336,0.04931,0.00486],[0.57103,0.04474,0.00529],[0.55852,0.04028,0.00579],[0.54583,0.03593,0.00638],[0.53295,0.03169,0.00705],[0.51989,0.02756,0.00780],[0.50664,0.02354,0.00863],[0.49321,0.01963,0.00955],[0.47960,0.01583,0.01055]]
#turbo = ListedColormap(turbo_colormap_data)
#color_fmin, color_fmid, color_fmax = 0.1, 0.5, 0.95
# which eigenmode to display?
enumber = 3
lh_cort, rh_cort, eigs, em = eigmode2plot(labels, slider_alpha = param1[0], slider_k = param1[1], em = enumber)
# colormap:
color_fmin = 0.50+lh_cort.min()
color_fmax = 0.99*lh_cort.max()
color_fmid = 0.65*lh_cort.max()
## initialize pysurfer rendering:
sb = surface(subject_id, 'lh', surf, background = "white", cortex = 'classic', alpha = 1, title = "Eigen Modes of Complex LaPlacian")
sb.add_data(lh_cort, hemi = 'lh', thresh = 0.30, colormap = plt.cm.autumn_r, remove_existing = True)
sb.scale_data_colormap(color_fmin, color_fmid, color_fmax, transparent = False)
## show lateral and medial views of left hemisphere and save figures
sb.show_view('lat')
sb.save_image('%s_lat_%1d.svg' % ('par1', em))
sb.show_view('med')
sb.save_image('%s_med_%1d.svg' % ('par1', em))
sb = surface(subject_id, "both", surf, background = "white", alpha = 1, title = "Eigen Modes of Complex LaPlacian")
sb.add_data(rh_cort, hemi = 'rh', thresh = 0.30, colormap = plt.cm.autumn_r, remove_existing = True)
sb.add_data(lh_cort, hemi = 'lh', thresh = 0.30, colormap = plt.cm.autumn_r, remove_existing = True)
sb.scale_data_colormap(color_fmin, color_fmid, color_fmax, transparent = False)
## save figures?
sb.show_view('dor')
sb.save_image('%s_dor_%1d.svg' % ('par1', em))
enumber = 3
lh_cort, rh_cort, eigs, em = eigmode2plot(labels, slider_alpha = param2[0], slider_k = param2[1], em = enumber)
# colormap:
color_fmin = 0.35+lh_cort.min()
color_fmax = 0.99*lh_cort.max()
color_fmid = 0.60*lh_cort.max()
sb = surface(subject_id, 'lh', surf, background = "white", alpha = 1, title = "Eigen Modes of Complex LaPlacian")
sb.add_data(lh_cort, hemi = 'lh', thresh = 0.25, colormap = plt.cm.autumn_r, remove_existing = True)
sb.scale_data_colormap(color_fmin, color_fmid, 1, transparent = False)
## save figures?
sb.show_view('lat')
sb.save_image('%s_lat_%1d.svg' % ('par2', em))
sb.show_view('med')
sb.save_image('%s_med_%1d.svg' % ('par2', em))
sb = surface(subject_id, "both", surf, background = "white", alpha = 1, title = "Eigen Modes of Complex LaPlacian")
sb.add_data(lh_cort, hemi = 'lh', thresh = 0.25, colormap = plt.cm.autumn_r, remove_existing = True)
sb.add_data(rh_cort, hemi = 'rh', thresh = 0.25, colormap = plt.cm.autumn_r, remove_existing = True)
sb.scale_data_colormap(color_fmin, color_fmid, color_fmax, transparent = False)
## save figures?
sb.show_view('dor')
sb.save_image('%s_dor_%1d.svg' % ('par2', em))
param3 = [5.0, 30]
enumber = 3
lh_cort, rh_cort, eigs, em = eigmode2plot(labels, slider_alpha = param3[0], slider_k = param3[1], em = enumber)
# colormap:
color_fmin = 0.35+lh_cort.min()
color_fmax = 0.99*lh_cort.max()
color_fmid = 0.68*lh_cort.max()
sb = surface(subject_id, 'lh', surf, background = "white", alpha = 1, title = "Eigen Modes of Complex LaPlacian")
sb.add_data(lh_cort, hemi = 'lh', thresh = 0.35, colormap = plt.cm.autumn_r, remove_existing = True)
sb.scale_data_colormap(color_fmin, color_fmid, 1, transparent = False)
## save figures?
sb.show_view('lat')
sb.save_image('%s_lat_%1d.svg' % ('par3', em))
sb.show_view('med')
sb.save_image('%s_med_%1d.svg' % ('par3', em))
sb = surface(subject_id, "both", surf, background = "white", alpha = 1, title = "Eigen Modes of Complex LaPlacian")
sb.add_data(lh_cort, hemi = 'lh', thresh = 0.35, colormap = plt.cm.autumn_r, remove_existing = True)
sb.add_data(rh_cort, hemi = 'rh', thresh = 0.35, colormap = plt.cm.autumn_r, remove_existing = True)
sb.scale_data_colormap(color_fmin, color_fmid, color_fmax, transparent = False)
## save figures?
sb.show_view('dor')
sb.save_image('%s_dor_%1d.svg' % ('par3', em))
#%matplotlib inline
interactive(eigmode_widget, labels = fixed(labels), surf_brain = fixed(sb),
slider_alpha = widgets.FloatSlider(min=0.2,max=5,step=0.1,value=2.5, description = 'Coupling Strength',continuous_update=False),
slider_k = widgets.IntSlider(min = 0, max = 50, step = 1, value = 2, description = 'Wave Number',continuous_update=False),
em = widgets.IntSlider(min = 1, max = 10, step = 1, value = 3, description = 'Eigenmode Number',continuous_update=False),
hemi = widgets.RadioButtons(options = ['Left', 'Right', 'Both'], value = 'Left', description = 'Select hemisphere'))
```
import cv2
import numpy as np
import requests
import io
import json
import matplotlib.pyplot as plt
import pyttsx3
import time
engine = pyttsx3.init()
net = cv2.dnn.readNet("yolov3.weights", "yolov3.cfg")
classes = []
with open("coco.names", "r") as f:
classes = [line.strip() for line in f.readlines()]
layer_names = net.getLayerNames()
# flatten() keeps this working whether getUnconnectedOutLayers() returns an Nx1 or a flat array
output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers().flatten()]
colors = np.random.uniform(0, 255, size=(len(classes), 3))
cap = cv2.VideoCapture(0)
font = cv2.FONT_HERSHEY_PLAIN
starting_time = time.time()
frame_id = 0
while True:
_, img = cap.read()
frame_id +=1
height, width, channels = img.shape
blob = cv2.dnn.blobFromImage(img, 0.00392, (320, 320), (0, 0, 0), True, crop=False)
net.setInput(blob)
outs = net.forward(output_layers)
class_ids = []
confidences = []
boxes = []
for out in outs:
for detection in out:
scores = detection[5:]
class_id = np.argmax(scores)
confidence = scores[class_id]
if confidence > 0.5:
center_x = int(detection[0] * width)
center_y = int(detection[1] * height)
w = int(detection[2] * width)
h = int(detection[3] * height)
x = int(center_x - w / 2)
y = int(center_y - h / 2)
boxes.append([x, y, w, h])
confidences.append(float(confidence))
class_ids.append(class_id)
indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
for i in range(len(boxes)):
if i in indexes:
x, y, w, h = boxes[i]
label = str(classes[class_ids[i]])
color = colors[class_ids[i]]
cv2.rectangle(img, (x, y), (x + w, y + h), color, 2)
cv2.putText(img, label, (x, y + 30), font, 3, color, 3)
            if label in ("cat", "dog"):
                engine.say(label)
                engine.say("Entered the room")
elapsed_time = time.time() - starting_time
fps = frame_id / elapsed_time
cv2.putText(img, "FPS:" + str(fps), (10,50),font, 4, (0, 0, 0), 3)
cv2.imshow("Image", img)
key = cv2.waitKey(1)
engine.runAndWait()
if key == 27:
break
cap.release()
cv2.destroyAllWindows()
```
```
%matplotlib inline
import matplotlib.pyplot as plt
import healpy as hp
import numpy as np
import tensorflow as tf
import healpy_layers as hp_layer
import healpy_unet as hp_unet
nside = 32
npix = hp.nside2npix(nside=nside)
batch_size = 10
learning_rate = 1e-5
###Importing mask
mask = hp.read_map('DESY3_sky_mask.fits') # It's in RING ordering
mask = np.ceil(hp.ud_grade(mask,nside))
mask = hp.reorder(mask,r2n=True)
hp.mollview(mask,title='NEST ordering',nest=True)
def rotate(sky,z,y,x,nside,p=3,pixel=True,forward=True,nest2ring=True):
'''
Up-samples the data, rotates map, then pools it to original nside. Map has to be in "NEST" ordering.
Input:
sky map (In NEST ordering if nest2ring=True)
z longitude
y latitude
x about axis that goes through center of map (rotation of object centered in center)
nside
p up-samples data by 2**p
pixel if True rotation happens in pixel space. Otherwise it happens in spherical harmonics space.
    forward         if True, a +10 degree rotation behaves as a +10 degree rotation; otherwise it behaves as a -10 degree rotation
nest2ring if True converts NEST ordering to RING ordering before rotating, and RING to NEST after rotation.
(rotation only works with RING ordering)
Output:
Rotated map
'''
#the point provided in rot will be the center of the map
rot_custom = hp.Rotator(rot=[z,y,x],inv=forward)#deg=True
if nest2ring == True:
sky = hp.reorder(sky,n2r=True)
up = hp.ud_grade(sky,nside*2**p)#up-sample
if pixel == True:
m_smoothed_rotated_pixel = rot_custom.rotate_map_pixel(up)
else:
        m_smoothed_rotated_pixel = rot_custom.rotate_map_alms(up)  # uses spherical harmonics instead
down = hp.ud_grade(m_smoothed_rotated_pixel,nside)#down-sample
if nest2ring == True:
down = hp.reorder(down,r2n=True)
return down
def power_spectrum(l, A, mu, sigma):
"""Generate power spectrum from gaussian distribution.
Input:
l angular location
A amplitude
        sigma standard deviation
mu mean
"""
return A*np.exp((-1/2)*(l-mu)**2/(sigma**2))
indices = np.arange(hp.nside2npix(nside)) # indices of relevant pixels [0, pixels)
l = np.arange(nside)
c_l = power_spectrum(l, 1, 5,25)
n=1000 # number of sets
gaussian_maps = np.array([hp.reorder(hp.synfast(c_l, nside),inp='RING',out='NESTED') for i in range(n)])
#rescale - ensure all values are between 0-1 with average on 0.5.
factor = 20*np.max(np.abs(gaussian_maps))*np.ones(shape=gaussian_maps.shape)
mean = 0.5  # because the last layer has a softmax activation, the model cannot reproduce negative values
gaussian_maps = gaussian_maps/factor+mean*np.ones(shape=gaussian_maps.shape)
```
We build the network for the chosen resolution and pixel indices and instantiate the model.
```
tf.keras.backend.clear_session()
unet_instance = hp_unet.HealpyUNet(nside, indices, learning_rate, mask, mean, n_neighbors=20)
unet_model = unet_instance.model()
```
We create the training data (noisy inputs and their clean target maps) with a channel dimension (not strictly necessary if you use the TF routines for training), and then train the model.
```
tot_train_loss=[]
tot_val_loss=[]
import time
epoch=60
p=0.95 #train-test split
for k in range(epoch):
start=time.time()
ang1, ang2, ang3 = np.random.uniform(high=360,size=(3,gaussian_maps.shape[0]))
y = np.array([rotate(j,ang1[i],ang2[i],ang3[i],nside,p=1) for i,j in enumerate(gaussian_maps)])
x = y + np.random.normal(0,0.04,y.shape)
x = np.array([np.where(mask>0.5,x_i,mean) for x_i in x])
y = np.array([np.where(mask>0.5,y_i,mean) for y_i in y])
x = x.astype(np.float32)[..., None]
y = y.astype(np.float32)[..., None]
n_train = int(x.shape[0]*p)
print('Epoch {}/{}'.format(k+1,epoch))
history = unet_model.fit(
x=x[:n_train],
y=y[:n_train],
batch_size=batch_size,
epochs=1,
validation_data=(x[n_train:],y[n_train:])
)
tot_train_loss.append(history.history["loss"][0])
tot_val_loss.append(history.history["val_loss"][0])
print(time.time()-start)
```
A final evaluation on the test set.
```
unet_model.evaluate(x[n_train:],y[n_train:], batch_size)
```
Let's plot the loss.
(Note that the history object contains much more info.)
```
plt.figure(figsize=(12,8))
plt.plot(tot_train_loss, label="training")
plt.plot(tot_val_loss, label="validation")
plt.grid()
plt.yscale("log")
plt.legend()
plt.xlabel("Epoch")
plt.ylabel("Loss")
prediction = unet_model.predict(x[-1][None,...])
cm = plt.cm.RdBu_r
hp.mollview(x[-1].flatten(), title='Input', nest=True, cmap=cm, min=np.min(y[-1]), max=np.max(y[-1]))
hp.graticule()
hp.mollview(prediction[0].flatten(), title='Prediction', nest=True, cmap=cm, min=np.min(y[-1]), max=np.max(y[-1]))
hp.graticule()
hp.mollview(y[-1].flatten(), title='Real', nest=True, cmap=cm, min=np.min(y[-1]), max=np.max(y[-1]))
hp.graticule()
prediction = np.array([prediction[0].flatten()])
prediction = np.array([np.where(mask>0.5,x_i,mean) for x_i in prediction])
hp.mollview(prediction[0], title='Prediction Masked', nest=True, cmap=cm, min=np.min(y[-1]), max=np.max(y[-1]))
hp.graticule()
```
```
import numpy as np
import matplotlib.pyplot as plt
import h5py
```
## ODE Solution
```
def forward_euler(ddt, u0, T, *args):
u = np.empty((T.size, u0.size))
u[0] = u0
for i in range(1, T.size):
u[i] = u[i-1] + (T[i] - T[i-1]) * ddt(u[i-1], T[i-1], *args)
return u
def ddt(u, t, params):
beta, rho, sigma = params
x, y, z = u
return np.array([sigma*(y-x), x*(rho-z)-y, x*y-beta*z])
def solve_ode(N, dt, u0, params=[8/3, 28, 10]):
"""
    Solves the ODE for N time steps of size dt, starting from u0.
    Args:
        N: number of time steps
        dt: time step size
        u0: initial condition
        params: parameters for the ODE (beta, rho, sigma)
    Returns:
        time series of shape (N+1, u0.size)
"""
T = np.arange(N+1) * dt
U = forward_euler(ddt, u0, T, params)
return U
```
## ESN
```
## ESN with bias architecture
def step(x_pre, u):
""" Advances one ESN time step.
Args:
x_pre: reservoir state
u: input
Returns:
new augmented state (new state with bias_out appended)
"""
# input is normalized and input bias added
u_augmented = np.hstack([u/norm, bias_in])
# hyperparameters are explicit here
x_post = np.tanh(np.dot(u_augmented*sigma_in, Win) + rho*np.dot(x_pre, W))
# output bias added
x_augmented = np.hstack([x_post, bias_out])
return x_augmented
def open_loop(U, x0):
""" Advances ESN in open-loop.
Args:
U: input time series
x0: initial reservoir state
Returns:
time series of augmented reservoir states
"""
N = U.shape[0]
Xa = np.empty((N+1, N_units+1))
Xa[0] = np.hstack([x0, bias_out])
for i in 1+np.arange(N):
Xa[i] = step(Xa[i-1,:N_units], U[i-1])
return Xa
def closed_loop(N, x0, Wout):
""" Advances ESN in closed-loop.
Args:
N: number of time steps
x0: initial reservoir state
Wout: output matrix
Returns:
time series of prediction
final augmented reservoir state
"""
xa = x0.copy()
Yh = np.empty((N+1, dim))
Yh[0] = np.dot(xa, Wout)
for i in 1+np.arange(N):
xa = step(xa[:-1], Yh[i-1])
Yh[i] = np.dot(xa, Wout)
return Yh, xa
def train(U_washout, U_train, Y_train, tikh):
""" Trains ESN.
Args:
U_washout: washout input time series
            U_train: training input time series
            Y_train: training target time series
            tikh: Tikhonov factor
Returns:
time series of augmented reservoir states
optimal output matrix
"""
## washout phase
xf_washout = open_loop(U_washout, np.zeros(N_units))[-1,:N_units]
## open-loop train phase
Xa = open_loop(U_train, xf_washout)
## Ridge Regression
LHS = np.dot(Xa[1:].T, Xa[1:]) + tikh*np.eye(N_units+1)
RHS = np.dot(Xa[1:].T, Y_train)
Wout = np.linalg.solve(LHS, RHS)
return Xa, Wout, LHS, RHS
def predictability_horizon(xa, Y, Wout):
""" Compute predictability horizon. It evolves the network until the
error is greater than the threshold. Before that it initialises
the network by running a washout phase.
Args:
threshold: error threshold
U_washout: time series for washout
Y: time series to compare prediction
Returns:
predictability horizon (in time units, not Lyapunov times)
time series of normalised error
time series of prediction
"""
# calculate denominator of the normalised error
error_denominator = np.mean(np.sum((Y)**2, axis=1))
N = Y.shape[0]
E = np.zeros(N)
Yh = np.zeros((N, dim))
Yh[0] = np.dot(xa, Wout)
for i in range(1, N):
# advance one step
xa = step(xa[:N_units], Yh[i-1])
Yh[i] = np.dot(xa, Wout)
# calculate error
error_numerator = np.sum(((Yh[i]-Y[i]))**2)
E[i] = np.sqrt(error_numerator/error_denominator)
if E[i] > threshold:
break
return i/N_lyap
```
```
import numpy as np
import pandas as pd
import scipy.sparse as sparse
import implicit
train = pd.read_parquet('data/train.par')
test = pd.read_parquet('data/test.par')
items = pd.read_parquet('data/items.par')
items.drop_duplicates(subset=['item_id'], inplace=True)
items['brand'].replace('', '-', inplace=True)
items['brand'].fillna('-', inplace=True)
items['category1'] = items.category.apply(lambda x: x[0] if len(x) > 0 else pd.NA)
items['category2'] = items.category.apply(lambda x: x[1] if len(x) > 1 else pd.NA)
items['category3'] = items.category.apply(lambda x: x[2] if len(x) > 2 else pd.NA)
items['category4'] = items.category.apply(lambda x: x[3] if len(x) > 3 else pd.NA)
items['category123'] = items[['category1', 'category2', 'category3']].apply(
lambda row: f'{row.category1} > {row.category2} > {row.category3}', axis=1)
items['brand_id'] = pd.Categorical(items.brand).codes
items['category123_id'] = pd.Categorical(items.category123).codes
items
train = train \
.merge(items[['item_id', 'brand_id']], on='item_id', how='left') \
.merge(items[['item_id', 'category123_id']], on='item_id', how='left')
train
test_1 = train.groupby('user_id').sample(frac=0.1)
train_1 = train[~train.index.isin(test_1.index)]
def train_als(interactions, feature):
n_items = interactions[feature].max() + 1
n_users = interactions.user_id.max() + 1
train_ratings = interactions \
.groupby([feature, 'user_id'], as_index=False) \
.size() \
.rename(columns={'size': 'rating'})
user_sum_rating = train_ratings.groupby('user_id').rating.sum()
train_ratings = train_ratings.join(user_sum_rating, on='user_id', rsuffix='_sum')
train_ratings['rating_normal'] = train_ratings['rating'] / train_ratings['rating_sum']
confidence = 1.0 + train_ratings.rating_normal.values * 30.0
rating_matrix = sparse.csr_matrix(
(
confidence,
(
train_ratings[feature].values,
train_ratings.user_id.values
)
),
shape=(n_items, n_users)
)
rating_matrix_T = sparse.csr_matrix(
(
np.full(rating_matrix.nnz, 1),
(
train_ratings.user_id.values,
train_ratings[feature].values
)
),
shape=(n_users, n_items)
)
als = implicit.als.AlternatingLeastSquares(factors=128,
calculate_training_loss=True,
iterations=100)
als.fit(rating_matrix)
return als, rating_matrix_T
item_als, item_ratings_T = train_als(train_1, 'item_id')
brand_als, _ = train_als(train_1, 'brand_id')
category123_als, _ = train_als(train_1, 'category123_id')
import joblib
def predict_als_for_user(user_id):
recommendations = item_als.recommend(user_id, item_ratings_T, N=100)
recommended_items = [x for x, _ in recommendations]
recommended_scores = [x for _, x in recommendations]
return user_id, recommended_items, recommended_scores
item_als_prediction_raw = joblib.Parallel(backend='multiprocessing', verbose=1, n_jobs=32)(
joblib.delayed(predict_als_for_user)(u) for u in train.user_id.unique()
)
item_als_prediction = pd.DataFrame(item_als_prediction_raw, columns=['user_id', 'item_id', 'score'])
import my_metrics
print('Full:', my_metrics.compute(item_als_prediction, test))
print('Test_1:', my_metrics.compute(item_als_prediction, test_1))
user2item_als_prediction = item_als_prediction.set_index('user_id')
item2brand = items[['item_id', 'brand_id']].set_index('item_id')
item2category_123 = items[['item_id', 'category123_id']].set_index('item_id')
def samples_to_df(user_id, positive_samples: list, negative_samples: list) -> pd.DataFrame:
positive = pd.DataFrame({
'user_id': user_id,
'item_id': positive_samples,
}).explode('item_id')
positive['label'] = 1
negative = pd.DataFrame({
'user_id': user_id,
'item_id': negative_samples,
}).explode('item_id')
negative['label'] = 0
samples = pd.concat([
positive,
negative
])
samples['user_id'] = samples.user_id.values.astype(np.int64)
samples['item_id'] = samples.item_id.values.astype(np.int64)
return samples
def feature_combinations(features, user_id, item_ids):
brand_ids = item2brand.loc[item_ids].brand_id.values
category123_ids = item2category_123.loc[item_ids].category123_id.values
als1 = item_als
als2 = brand_als
als3 = category123_als
u1 = als1.user_factors[user_id]
i1 = als1.item_factors[item_ids]
u2 = als2.user_factors[user_id]
i2 = als2.item_factors[brand_ids]
u3 = als3.user_factors[user_id]
i3 = als3.item_factors[category123_ids]
features['score_1'] = i1 @ u1
features['score_2'] = i2 @ u2
features['score_3'] = i3 @ u3
features['score_4'] = u1 @ u2
features['score_5'] = i2 @ u1
features['score_6'] = i1 @ u2
features['score_7'] = np.sum(i1 * i2 , axis=1)
features['score_8'] = u1 @ u3
features['score_9'] = i3 @ u1
features['score_10'] = i1 @ u3
features['score_11'] = np.sum(i1 * i3 , axis=1)
features['score_12'] = u2 @ u3
features['score_13'] = i3 @ u2
features['score_14'] = i2 @ u3
features['score_15'] = np.sum(i2 * i3 , axis=1)
def generate_samples_for_user(user_id):
candidates = set(np.array(user2item_als_prediction.loc[user_id].item_id))
valid = set(test_1[test_1.user_id == user_id].item_id.values)
positive_samples = list(candidates.intersection(valid))
negative_samples = list(candidates.difference(valid))
features = samples_to_df(user_id, positive_samples, negative_samples)
feature_combinations(features, user_id, features.item_id.values)
return features
stage2_samples = joblib.Parallel(backend='multiprocessing', verbose=1, n_jobs=32)(
joblib.delayed(generate_samples_for_user)(id) for id in train.user_id.unique()
)
all_samples = pd.concat(stage2_samples)
all_samples = all_samples.sample(n=len(all_samples))
from sklearn.model_selection import train_test_split
selected_features = [
f'score_{(i + 1)}' for i in range(0, 15)
]
selected_cat_features = []
all_features = all_samples[selected_features + ['label']]
all_features_X = all_features.drop(columns=['label'])
all_features_Y = all_features[['label']]
X_train, X_test, y_train, y_test = train_test_split(all_features_X, all_features_Y, test_size=0.3)
value_count_01 = y_train.value_counts()
w0 = value_count_01[0] / len(y_train)
w1 = value_count_01[1] / len(y_train)
print('w_0 =', w0)
print('w_1 =', w1)
from catboost import Pool as CatBoostPool
from catboost import CatBoostClassifier
from catboost.metrics import BalancedAccuracy
from catboost.metrics import Logloss
cb_train_pool = CatBoostPool(X_train, y_train, cat_features=selected_cat_features)
cb_test_pool = CatBoostPool(X_test, y_test, cat_features=selected_cat_features)
cb_params = {
'n_estimators': 500,
'depth': 6,
'class_weights': [w1, w0],
'objective': Logloss(),
'eval_metric': BalancedAccuracy(),
'early_stopping_rounds': 100,
'learning_rate': 0.1
}
cb_classifier = CatBoostClassifier(**cb_params)
cb_classifier.fit(cb_train_pool, eval_set=cb_test_pool)
for x in sorted(zip(X_train.columns, cb_classifier.feature_importances_), key=lambda x: -x[1]):
print(x)
cb_predictions = cb_classifier.predict(X_test)
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
cm = confusion_matrix(y_test, cb_predictions, normalize='true')
ConfusionMatrixDisplay(confusion_matrix=cm).plot()
cb_params.update({ 'n_estimators': 60 })
cb_classifier_final = CatBoostClassifier(**cb_params)
cb_final_pool = CatBoostPool(all_features_X, all_features_Y, cat_features=selected_cat_features)
cb_classifier_final.fit(cb_final_pool)
seen_items = train.groupby('user_id').agg({'item_id': set}).item_id
def filter_seen_items(user_id, recommended_items):
user_seen_items = seen_items.loc[user_id]
final_recommended_items = []
for i in recommended_items:
if i not in user_seen_items:
final_recommended_items.append(i)
return final_recommended_items
def features2recommendations(user_id, recommended_items, features):
probs = cb_classifier_final.predict_proba(features, thread_count=1)[:, 1]
ranks = np.argsort(-probs)
filtered_items = filter_seen_items(user_id, recommended_items[ranks])
return filtered_items
def predict_als_catboost_for_user(user_id):
recommendations = item_als.recommend(user_id, item_ratings_T, N=100)
recommended_items = np.array([x for x, _ in recommendations])
features = pd.DataFrame()
feature_combinations(features, user_id, recommended_items)
features = features[selected_features]
final_recommendations = features2recommendations(user_id, recommended_items, features)
return user_id, final_recommendations
als_catboost_prediction = joblib.Parallel(backend='multiprocessing', verbose=1, n_jobs=32)(
joblib.delayed(predict_als_catboost_for_user)(u) for u in test_1.user_id.unique()
)
als_catboost_prediction = pd.DataFrame(als_catboost_prediction, columns=['user_id', 'item_id'])
my_metrics.compute(als_catboost_prediction, test)
```
# Project: Sentiment Classification
- Make a model to determine whether a tweet is positive or negative
### Step 1: Import the libraries
```
import nltk
import string
from nltk.tag import pos_tag
from nltk.stem.wordnet import WordNetLemmatizer
from nltk import classify
from nltk.corpus import stopwords
from nltk import NaiveBayesClassifier
from random import shuffle
```
### Step 2: Download the sample tweets
- Execute the following cell
```
nltk.download('twitter_samples')
nltk.download('averaged_perceptron_tagger')
nltk.download('wordnet')
nltk.download('stopwords')
nltk.download('punkt')
nltk.download('omw-1.4')
```
### Step 3: The tweets
- Get the positive and negative tweets.
- HINT: You access the positive tweets by: **nltk.corpus.twitter_samples.strings('positive_tweets.json')**
- HINT: Similarly for the negative tweets.
    - Notice: There are also tweets with no sentiment - we will ignore them in this project
- Check a few tweets
```
positive_tweets = nltk.corpus.twitter_samples.strings('positive_tweets.json')
negative_tweets = nltk.corpus.twitter_samples.strings('negative_tweets.json')
positive_tweets[0]
```
### Step 4: Tokenize the tweets
- You get the tokenized tweets as follows:
- **nltk.corpus.twitter_samples.tokenized('positive_tweets.json')**
    - Similarly for **negative_tweets**
- Why tokenize?
- To make processing easier
- Check a few tweets (tokenized)
```
positive_tweets = nltk.corpus.twitter_samples.tokenized('positive_tweets.json')
negative_tweets = nltk.corpus.twitter_samples.tokenized('negative_tweets.json')
positive_tweets[0]
```
### Step 5: Remove noise from data
- The following tokens do not add value in our analysis
- Twitter usernames (starting with @)
- Hyperlinks (starting with http:// or https://)
- Punctuation and special characters
- HINT: if word in **string.punctuation**
- Numeric values only
- HINT: use **.isnumeric()**
- If word is a stopword ([wiki](https://en.wikipedia.org/wiki/Stop_word))
- HINT: Check if lower case word is in **stopwords.words('english')**
- To simplify, create a helper function **is_clean** that checks the above
- Create another helper function **clean_tokens**
- The function takes **tokens** (a list of tokens) as input
- Then returns a list of tokens, where **is_clean** has been used to filter
- Also, let's lowercase it all
- HINT: Use **lower()**
- Finally, use a list comprehension over the lists of positive and negative tweets, applying **clean_tokens** to each element (a list of tokens).
```
def is_clean(word: str):
if word.startswith('@'):
return False
if word.startswith('http://') or word.startswith('https://'):
return False
if word in string.punctuation:
return False
if word.isnumeric():
return False
if word in stopwords.words('english'):
return False
return True
def clean_tokens(tokens: list):
return [word.lower() for word in tokens if is_clean(word)]
positive_tweets_cleaned = [clean_tokens(tokens) for tokens in positive_tweets]
negative_tweets_cleaned = [clean_tokens(tokens) for tokens in negative_tweets]
positive_tweets_cleaned[0]
negative_tweets_cleaned[0]
```
### Step 6: Normalize the data
- The process of converting a word to its canonical form.
- Without normalization, "ran", "runs", and "running" would be treated as different words.
- Create a lemmatizer of **WordNetLemmatizer()**
- HINT: use **lemmatizer = WordNetLemmatizer()**
- Create a helper function to lemmatize
- HINT: Create a helper function **lemmatize(word, tag)**
- Convert tag to **n** or **v** if tag starts with **NN** or **VB**, else **a**
- Return **lemmatizer.lemmatize(word, tag)**
- Create a helper function **lemmatize_tokens(tokens: list)**
    - Return a list containing **lemmatize(word, tag)** for each **word, tag** in **pos_tag(...)**.
- Use list comprehension to normalize the positive and negative tweets
- HINT: apply **lemmatize_tokens(...)** on all elements
```
lemmatizer = WordNetLemmatizer()
def lemmatize(word: str, tag: str):
if tag.startswith('NN'):
pos = 'n'
elif tag.startswith('VB'):
pos = 'v'
else:
pos = 'a'
return lemmatizer.lemmatize(word, pos)
def lemmatize_tokens(tokens:list):
return [lemmatize(word, tag) for word, tag in pos_tag(tokens)]
positive_tweets_normalized = [lemmatize_tokens(tokens) for tokens in positive_tweets_cleaned]
negative_tweets_normalized = [lemmatize_tokens(tokens) for tokens in negative_tweets_cleaned]
positive_tweets_normalized[0]
negative_tweets_normalized[0]
```
### Step 7: Prepare data for Model
- Example of normalized tweet: **['hopeless', 'tmr', ':(']**
- Should become **({'hopeless': True, 'tmr': True, ':(': True}, 'Negative')**
- Hence, the list of tweets (positive and negative) should be converted
- HINT: use a dict comprehension inside a list comprehension
```
positive_dataset = [({token: True for token in tokens}, 'Positive') for tokens in positive_tweets_normalized]
negative_dataset = [({token: True for token in tokens}, 'Negative') for tokens in negative_tweets_normalized]
positive_dataset[0]
negative_dataset[0]
```
### Step 8: Prepare training and test dataset
- Make the dataset of the combined positive and negative datasets
- Shuffle the dataset
- Use **shuffle**
- Let the training dataset be the first 7000 entries
- Let the test dataset be the remaining entries
```
dataset = positive_dataset + negative_dataset
shuffle(dataset)
train_ds = dataset[:7000]
test_ds = dataset[7000:]
```
### Step 9: Train and test Model
- Train the model:
- HINT: **classifier = NaiveBayesClassifier.train(train_data)**
- Test the accuracy
- HINT: **classify.accuracy(classifier, test_data)**
```
classifier = NaiveBayesClassifier.train(train_ds)
classify.accuracy(classifier, test_ds)
```
### Step 10: Show the most informative features
- HINT: Get the 10 most informative features: **classifier.show_most_informative_features(10)**
```
classifier.show_most_informative_features(10)
```
### Step 11: Test the model
- Try your model as follows:
- Define a tweet: **tweet = 'this is fun and awesome'**
- Prepare data for model: **tweet_dict = {token: True for token in lemmatize_tokens(clean_tokens(tweet.split()))}**
- Classify data: **classifier.classify(tweet_dict)**
```
tweet = 'this is fun and awesome'
tweet_dict = {token: True for token in lemmatize_tokens(clean_tokens(tweet.split()))}
classifier.classify(tweet_dict)
```
### Bonus: The pre-trained Sentiment Intensity Analyzer
- VADER (Valence Aware Dictionary and sEntiment Reasoner) ([Vader](https://www.nltk.org/howto/sentiment.html))
```
from nltk.sentiment import SentimentIntensityAnalyzer
nltk.download('vader_lexicon')
sia = SentimentIntensityAnalyzer()
sia.polarity_scores('this is fun and awesome')
```
# CIFAR-10: Image Dataset
Throughout this course, we will teach you all basic skills and how to use all necessary tools that you need to implement deep neural networks, which is the main focus of this class. However, you should also be proficient with handling data and know how to prepare it for your specific task. In fact, most of the jobs that involve deep learning in industry are very data-related, so this is an important skill that you have to pick up.
Therefore, we will take a deep dive into data preparation this week by implementing our own datasets and dataloader. In this notebook, we will focus on the image dataset CIFAR-10. The CIFAR-10 dataset consists of 50000 32x32 colour images in 10 classes, which are *plane*, *car*, *bird*, *cat*, *deer*, *dog*, *frog*, *horse*, *ship*, *truck*.
Let's start by importing some libraries that you will need along the way, as well as some code files that you will work on throughout this notebook.
```
import os
import pickle
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
from tqdm import tqdm
from exercise_code.data import (
ImageFolderDataset,
RescaleTransform,
NormalizeTransform,
ComposeTransform,
compute_image_mean_and_std,
)
from exercise_code.tests import (
test_image_folder_dataset,
test_rescale_transform,
test_compute_image_mean_and_std,
test_len_dataset,
test_item_dataset,
test_transform_dataset,
save_pickle
)
%load_ext autoreload
%autoreload 2
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
```
## 1. Dataset Download
Let us get started by downloading the data. In `exercise_code/data/image_folder_dataset.py` you can find a class `ImageFolderDataset`, which you will have to complete throughout this notebook.
This class automatically downloads the raw data for you. To do so, simply initialize the class as below:
```
# Set up the output dataset folder
i2dl_exercises_path = os.path.dirname(os.path.abspath(os.getcwd()))
cifar_root = os.path.join(i2dl_exercises_path, "datasets", "cifar10")
# Init the dataset and display downloading information this one time
dataset = ImageFolderDataset(
root=cifar_root,
force_download=False,
verbose=True
)
```
You should now be able to see the images in `i2dl_exercises/datasets/cifar10` in your file browser, which should contain one subfolder per class, each containing the respective images labeled `0001.png`, `0002.png`, ...
By default, the dataset will only be downloaded the first time you initialize a dataset class. If, for some reason, your version of the dataset gets corrupted and you wish to re-download it, simply initialize the class with `force_download=True` in the download cell above.
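For illustration, a forced re-download uses the same arguments as the download cell above, only with the flag flipped:
```
# Force a fresh download, e.g. if the local copy got corrupted
dataset = ImageFolderDataset(
    root=cifar_root,
    force_download=True,
    verbose=True
)
```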
## 2. Data Visualization
Before training any model you should *always* take a look at some samples of your dataset. In this way, you can make sure that the data input has worked as intended and also get a feeling for the dataset.
Let's load the CIFAR-10 data and visualize a subset of the images. To do so, `PIL.Image.open()` is used to open an image, and `numpy.asarray()` to cast it to a numpy array of shape 32x32x3. In this way, 7 images are loaded per class, and `matplotlib.pyplot` is then used to visualize them in a grid.
```
def load_image_as_numpy(image_path):
return np.asarray(Image.open(image_path), dtype=float)
classes = [
'plane', 'car', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck',
]
num_classes = len(classes)
samples_per_class = 7
for label, cls in enumerate(sorted(classes)):
for i in range(samples_per_class):
image_path = os.path.join(
cifar_root,
cls,
str(i+1).zfill(4) + ".png"
) # e.g. cifar10/plane/0001.png
image = np.asarray(Image.open(image_path)) # open image as numpy array
plt_idx = i * num_classes + label + 1 # calculate plot location in the grid
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(image.astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls) # plot class names above columns
plt.show()
```
## 3. ImageFolderDataset Implementation
Loading images following steps above is a bit cumbersome. Therefore, the next step is to write a custom **Dataset** class, which takes care of the data loading. This is always the first thing you have to implement when starting a new deep learning project.
### 3.1 Dataset Class
The **Dataset** class is a wrapper that loads the data from a given file path and returns a dictionary containing already prepared data, as you have done above. Datasets always need to have the following two methods implemented:
- `__len__(self)` is a method that should simply calculate and return the number of images in the dataset. After it is implemented, you can simply call it with `len(dataset)`.
- `__getitem__(self, index)` should return the image with the given index from your dataset. Implementing this will allow you to access your dataset like a list, i.e. you can then simply call `dataset[9]` to access the 10th image in the dataset.
Generally, you will have to implement a different dataset for every project. However, base dataset classes will be provided for you in future projects.
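To illustrate the general pattern (independent of the exercise code, using hypothetical attribute names `self.images` and `self.labels` for the loaded samples), a minimal dataset class could look like this:
```
class MinimalDataset:
    """Toy example of the dataset protocol described above."""
    def __init__(self, images, labels):
        self.images = images  # e.g. a list of image paths or arrays
        self.labels = labels  # a list of integer class labels

    def __len__(self):
        # number of samples in the dataset
        return len(self.images)

    def __getitem__(self, index):
        # return one sample as a dict, analogous to the CIFAR-10 dataset used below
        return {"image": self.images[index], "label": self.labels[index]}
```
With these two methods in place, `len(dataset)` and `dataset[9]` behave exactly as described above.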
### 3.2 ImageFolderDataset Implementation
Now it is your turn to implement such a dataset class for CIFAR-10. To do so, open `exercise_code/data/image_folder_dataset.py` and check the following three methods of `ImageFolderDataset`:
- `make_dataset(directory, class_to_idx)` should load the prepared data from a given directory root (`directory`) into two lists (`images` and `labels`). `class_to_idx` is a dict mapping class (e.g. 'cat') to label (e.g. 1).
- `__len__(self)` should calculate and return the number of images in your dataset.
- `__getitem__(self, index)` should return the image with the given index from your dataset.
<div class="alert alert-success">
<h3>Task: Check Code</h3>
<p>Please read <code>make_dataset(directory, class_to_idx)</code> and make sure to familiarize yourself with its output, as you will need to interact with it for the following tasks. Additionally, it would be a wise decision to get familiar with python's os library, which will be of utmost importance for most datasets you will write in future projects. As it is not beginner friendly, we removed it from this exercise, but it is an important skill for a DL practitioner.</p>
</div>
<div class="alert alert-info">
<h3>Task: Implement</h3>
<p>Implement the <code>__len__(self)</code> method in <code>exercise_code/data/image_folder_dataset.py</code> and test your implementation by running the following cell.
</p>
</div>
```
dataset = ImageFolderDataset(
root=cifar_root,
)
_ = test_len_dataset(dataset)
```
<div class="alert alert-info">
<h3>Task: Implement</h3>
<p>Implement the <code>__getitem__(self, index)</code> method in <code>exercise_code/data/image_folder_dataset.py</code> and test your implementation by running the following cell.
</p>
<p><b>Hint:</b> You may want to reuse parts of the '2. Data Visualization' code above in your implementation of <code>__getitem__()</code>.
</div>
```
dataset = ImageFolderDataset(
root=cifar_root,
)
_ = test_item_dataset(dataset)
```
### 3.3 Dataset Usage
Now that you have implemented all required parts of the ImageFolderDataset, you can use the `__getitem__()` method to access our dataset as conveniently as you would access a list:
```
sample_item = dataset[0]
sample_image = sample_item["image"]
sample_label = sample_item["label"]
print('Sample image shape:', sample_image.shape)
print('Sample label:', sample_label)
print('Sample image first values:', sample_image[0][0])
```
As you can see, the images are represented as uint8 values for each of the three RGB color channels. The data type and scale will be important later.
As you have implemented both `__len__()` and `__getitem__()`, you can now even iterate over the dataset with a simple for loop!
```
num_samples = 0
for sample in tqdm(dataset):
num_samples += 1
print("Number of samples:", num_samples)
```
## 4. Transforms and Image Preprocessing
Before training machine learning models, you often need to pre-process the data. For image datasets, two commonly applied techniques are:
1. Normalize all images so that each value is either in [-1, 1] or [0, 1]. By doing so, the images are also converted to floating point numbers.
2. Compute the mean over all images and subtract this mean from all images in the dataset.
Both techniques will be implemented as transform classes, which are callables, meaning that you will be able to simply use them as follows:
```
transform = Transform()
images_transformed = transform(images)
```
This will be realized in the pipeline by defining so-called transforms. Instead of applying them globally to the input data, you will apply them separately to each sample after loading it in the `__getitem__` call of the dataset.
<div class="alert alert-info">
<h3>Task: Implement</h3>
<p>Modify the <code>__getitem__(self, index)</code> method in <code>exercise_code/data/image_folder_dataset.py</code> such that it applies <code>self.transform</code>. With this change you can simply define the transforms during dataset creation and apply those automatically for each <code>__getitem__</code> call. Make sure not to break it though ;).
</div>
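A minimal sketch of the idea (a method fragment; the attribute names `self.images`, `self.labels`, and `self.transform` are assumptions and may differ from the exercise code):
```
def __getitem__(self, index):
    # load the raw sample first ...
    image = load_image_as_numpy(self.images[index])
    # ... then apply the transform, if one was passed to the dataset
    if self.transform is not None:
        image = self.transform(image)
    return {"image": image, "label": self.labels[index]}
```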
```
dataset = ImageFolderDataset(
root=cifar_root,
)
_ = test_transform_dataset(dataset)
```
Equipped with this change, you can now easily add the two preprocessing techniques above for CIFAR-10. You will do so in the following steps by implementing the classes `RescaleTransform` and `NormalizeTransform` in `exercise_code/data/transforms.py`.
### 4.1 Rescaling Images using RescaleTransform
Let's start by implementing `RescaleTransform`. If you look at the `__init__()` method, you will notice the following arguments:
* **out_range** is the range you wish to rescale your images to. E.g., if you want to scale your images to [-1, 1], you would use `out_range=(-1, 1)`. By default, they will be scaled to [0, 1].
* **in_range** is the value range of the data prior to rescaling. For uint8 images, this will always be (0, 255).
<div class="alert alert-info">
<h3>Task: Implement</h3>
<p>Implement the <code>__call__()</code> method of <code>RescaleTransform</code> in <code>exercise_code/data/transforms.py</code> and test your implementation by running the following cell.
</div>
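As a rough orientation, the rescaling itself is just a linear map from `in_range` to `out_range`. A sketch under that assumption (the exercise's class may structure this differently):
```
import numpy as np

def rescale_sketch(images, out_range=(0.0, 1.0), in_range=(0, 255)):
    """Linearly map values from in_range to out_range."""
    in_min, in_max = in_range
    out_min, out_max = out_range
    images = np.asarray(images, dtype=np.float64)
    return (images - in_min) / (in_max - in_min) * (out_max - out_min) + out_min
```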
```
rescale_transform = RescaleTransform()
dataset_rescaled = ImageFolderDataset(
root=cifar_root,
transform=rescale_transform
)
_ = test_rescale_transform(dataset_rescaled)
```
If you look at the first image, you should now see that all values are between 0 and 1.
```
sample_item = dataset_rescaled[0]
sample_label = sample_item["label"]
sample_image = sample_item["image"]
print("Max value:", np.max(sample_image))
print("Min value:", np.min(sample_image))
print('Sample rescaled image first values:', sample_image[0][0])
```
### 4.2 Normalize Images to Standard Gaussian using NormalizeTransform
Let us now move on to the `NormalizeTransform` class. The `NormalizeTransform` class normalizes images channel-wise and its `__init__` method has two arguments:
* **mean** is the normalization mean, which will be subtracted from the dataset.
* **std** is the normalization standard deviation. By scaling the data with a factor of `1/std`, the standard deviation will be normalized accordingly.
Have a look at the code in `exercise_code/data/transforms.py`.
The next step is to normalize the CIFAR-10 **images channel-wise** to standard normal. To do so, you need to calculate the **per-channel image mean and standard deviation** first, which you can then provide to `NormalizeTransform` to normalize the data accordingly.
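Conceptually, these statistics are just the mean and the standard deviation taken over all pixels of all images, separately for each colour channel. A sketch with numpy, assuming the images are stacked into an array of shape (N, H, W, C):
```
import numpy as np

def channel_mean_and_std_sketch(images):
    """images: array of shape (N, H, W, C); returns per-channel mean and std."""
    mean = images.mean(axis=(0, 1, 2))
    std = images.std(axis=(0, 1, 2))
    return mean, std
```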
```
# You first have to load all rescaled images
rescaled_images = []
for sample in tqdm(dataset_rescaled):
rescaled_images.append(sample["image"])
rescaled_images = np.array(rescaled_images)
```
<div class="alert alert-info">
<h3>Task: Implement</h3>
    <p>Implement the <code>compute_image_mean_and_std()</code> method and the <code>__call__()</code> method of <code>NormalizeTransform</code> in <code>exercise_code/data/transforms.py</code>. Compute the rescaled dataset's mean and standard deviation by running the following cell.
</div>
```
cifar_mean, cifar_std = compute_image_mean_and_std(rescaled_images)
print("Mean:\t", cifar_mean, "\nStd:\t", cifar_std)
```
To test your implementation, run the following code:
```
_ = test_compute_image_mean_and_std(cifar_mean, cifar_std)
# The rescaled images will be deleted now from your ram as they are no longer needed
try:
del rescaled_images
except NameError:
pass
```
Now you can use the mean and standard deviation you computed to normalize the loaded data. This can be done by simply adding the `NormalizeTransform` to the list of transformations our dataset applies in `__getitem__()`.
<div class="alert alert-success">
<h3>Task: Check Code</h3>
<p>Please check out the <code>ComposeTransform</code> in <code>transforms.py</code>. Later on, we will most often use multiple transforms and chain them together. Remember that the order is of importance here!</p>
</div>
```
# Set up both transforms using the parameters computed above
rescale_transform = RescaleTransform()
normalize_transform = NormalizeTransform(
mean=cifar_mean,
std=cifar_std
)
final_dataset = ImageFolderDataset(
root=cifar_root,
transform=ComposeTransform([rescale_transform, normalize_transform])
)
```
You can now check out the results of the transformed samples:
```
sample_item = final_dataset[0]
sample_label = sample_item["label"]
sample_image = sample_item["image"]
print('Sample normalized image shape:', sample_image.shape)
print('Sample normalized image first values:', sample_image[0][0])
```
## 5. Save your Dataset
Now save your dataset and transforms using the following cell. This will save them to a pickle file `models/cifar_dataset.p`. We will use this dataset in the next notebook, and it will count towards the submission.
<div class="alert alert-danger">
<h3>Note</h3>
    <p>Each time you make changes to the `dataset`, you need to rerun the following code to save your changes, but <b>this is NOT the file which you should submit</b>. You will find the final file for submission in the second notebook.</p>
</div>
```
save_pickle(
data_dict={
"dataset": final_dataset,
"cifar_mean": cifar_mean,
"cifar_std": cifar_std,
},
file_name="cifar_dataset.p"
)
```
# Key Takeaways
1. Always have a look at your data before you start training any models on it.
2. Datasets should be organized in corresponding **Dataset** classes that support `__len__` and `__getitem__` methods, which allow us to call `len(dataset)` and `dataset[index]`.
3. Data often needs to be preprocessed. Such preprocessing can be implemented in **Transform** classes, which are callables that can simply be applied via `data_transformed = transform(data)`. However, we will rarely apply them by hand; instead, transforms will be applied on the fly by a dataloader, which we will introduce in the next notebook.
|
github_jupyter
|
import os
import pickle
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
from tqdm import tqdm
from exercise_code.data import (
ImageFolderDataset,
RescaleTransform,
NormalizeTransform,
ComposeTransform,
compute_image_mean_and_std,
)
from exercise_code.tests import (
test_image_folder_dataset,
test_rescale_transform,
test_compute_image_mean_and_std,
test_len_dataset,
test_item_dataset,
test_transform_dataset,
save_pickle
)
%load_ext autoreload
%autoreload 2
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Set up the output dataset folder
i2dl_exercises_path = os.path.dirname(os.path.abspath(os.getcwd()))
cifar_root = os.path.join(i2dl_exercises_path, "datasets", "cifar10")
# Init the dataset and display downloading information this one time
dataset = ImageFolderDataset(
root=cifar_root,
force_download=False,
verbose=True
)
def load_image_as_numpy(image_path):
return np.asarray(Image.open(image_path), dtype=float)
classes = [
'plane', 'car', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck',
]
num_classes = len(classes)
samples_per_class = 7
for label, cls in enumerate(sorted(classes)):
for i in range(samples_per_class):
image_path = os.path.join(
cifar_root,
cls,
str(i+1).zfill(4) + ".png"
) # e.g. cifar10/plane/0001.png
image = np.asarray(Image.open(image_path)) # open image as numpy array
plt_idx = i * num_classes + label + 1 # calculate plot location in the grid
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(image.astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls) # plot class names above columns
plt.show()
dataset = ImageFolderDataset(
root=cifar_root,
)
_ = test_len_dataset(dataset)
dataset = ImageFolderDataset(
root=cifar_root,
)
_ = test_item_dataset(dataset)
sample_item = dataset[0]
sample_image = sample_item["image"]
sample_label = sample_item["label"]
print('Sample image shape:', sample_image.shape)
print('Sample label:', sample_label)
print('Sample image first values:', sample_image[0][0])
num_samples = 0
for sample in tqdm(dataset):
num_samples += 1
print("Number of samples:", num_samples)
This will be realized in the pipeline by defining so called transforms. Instead of applying them globally to the input data, you will apply those seperatly to each sample after loading it in the `__getitem__` call of the dataset.
<div class="alert alert-info">
<h3>Task: Implement</h3>
<p>Modify the <code>__getitem__(self, index)</code> method in <code>exercise_code/data/image_folder_dataset.py</code> such that it applies <code>self.transform</code>. With this change you can simply define the transforms during dataset creation and apply those automatically for each <code>__getitem__</code> call. Make sure not to break it though ;).
</div>
Equipped with this change, you can now easily add the two preprocessing techniques above for CIFAR-10. You will do so in the following steps by implementing the classes `RescaleTransform` and `NormalizeTransform` in `exercise_code/data/transforms.py`.
### 4.1 Rescaling Images using RescaleTransform
Let's start by implementing `RescaleTransform`. If you look at the `__init__()` method, you will notice it has four arguments:
* **out_range** is the range you wish to rescale your images to. E.g. if you want to scale your images to [-1, 1], you would use `range=(-1, 1)`. By default, they will be scaled to [0, 1].
* **in_range** is the value range of the data prior to rescaling. For uint8 images, this will always be (0, 255).
<div class="alert alert-info">
<h3>Task: Implement</h3>
<p>Implement the <code>__call__()</code> method of <code>RescaleTransform</code> in <code>exercise_code/data/transforms.py</code> and test your implementation by running the following cell.
</div>
If you look at the first image, you should now see that all values are between 0 and 1.
### 4.2 Normalize Images to Standard Gaussian using NormalizeTransform
Let us now move on to the `NormalizeTransform` class. The `NormalizeTransform` class normalizes images channel-wise and its `__init__` method has two arguments:
* **mean** is the normalization mean, which will be subtracted from the dataset.
* **std** is the normalization standard deviation. By scaling the data with a factor of `1/std` the standard deviation will be normazlied accordingly.
Have a look at the code in `exercise_code/data/transforms.py`.
The next step is to normalize the CIFAR-10 **images channel-wise** to standard normal. To do so, you need to calculate the **per-channel image mean and standard deviation** first, which you can then provide to `NormalizeTransform` to normalize the data accordingly.
<div class="alert alert-info">
<h3>Task: Implement</h3>
<p>Implement the <code>compute_image_mean_and_std()</code> method and the <code>__call__()</code> method of <code>NormalizeTransform</code> in <code>exercise_code/data/transforms.py</code>. Compute the rescaled dataset's mean and variance by running the following cell.
</div>
To test your implementation, run the following code:
Now you can use the mean and standard deviation you computed to normalize the loaded data. This can be done by simply adding the `NormalizeTransform` to the list of transformations our dataset applies in `__getitem__()`.
<div class="alert alert-success">
<h3>Task: Check Code</h3>
<p>Please check out the <code>ComposeTransform</code> in <code>transforms.py</code>. Later on, we will most often use multiple transforms and chain them together. Remember that the order is of importance here!</p>
</div>
You can now check out the results of the transformed samples:
## 5. Save your Dataset
Now save your dataset and transforms using the following cell. This will save it to a pickle file `models/cifar_dataset.p`. We will use this dataset for the next notebook and this will count for the submission.
<div class="alert alert-danger">
<h3>Note</h3>
<p>Each time you make changes in `dataset`, you need to rerun the following code to make your changes saved, but <b>this is NOT the file which you should submit</b>. You will find the final file for submission in the second notebook.</p>
</div>
| 0.66454 | 0.993163 |
# Linear algebra
**Linear algebra** is one of the most useful areas of mathematics in all applied work, including data science, artificial intelligence, and machine learning.
## Vectors in two dimensions
In everyday life, we are used to doing arithmetic with numbers, such as
```
5 + 3
```
and
```
10 * 5
```
The numbers 5 and 3 are mathematical objects. One can think of other kinds of mathematical objects. They may or may not be composed of numbers. In order to specify the location of a point on a two-dimensional plane you need a mathematical object composed of *two* different numbers: the $x$- and $y$-coordinates. Such a point may be given by a single mathematical object, $(5, 3)$, where we understand that the first number specifies the $x$-coordinate, while the second the $y$-coordinate — the order of numbers in this pair matters.
We can visualise this mathematical object by means of a plot:
```
%matplotlib inline
import numpy as np  # needed for np.array in the cells below
import matplotlib.pyplot as plt
plt.figure(figsize=(5, 5))
plt.plot(5, 3, 'x')
plt.xlim(-10, 10)
plt.ylim(-10, 10);
```
It may be useful to think of this object, $(5, 3)$, (which we shall call a **vector**) as *displacement* from the **origin** $(0, 0)$. We can then read $(5, 3)$ as "go to the right (of the origin) by five units, and then go up (from the origin) by three units". Therefore vectors may be visualised by *arrows* as well as by *points*:
```
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, 5, 3, head_width=.75, length_includes_head=True)
plt.xlim(-10, 10)
plt.ylim(-10, 10);
```
## Vector addition
Would it make sense to define **addition** for vectors? And if it would, how would we define it? Thinking of vectors as displacements gives us a clue: the sum of two vectors, $\mathbf{u}$ and $\mathbf{v}$, could be defined by "go in the direction specified by $\mathbf{u}$, then in the direction specified by $\mathbf{v}$".
If, for example, $\mathbf{u} = (5, 3)$ and $\mathbf{v} = (4, 6)$, then their sum would be obtained as follows:
* Start at the origin.
* Move in the direction specified by $\mathbf{u}$: "go to the right by five units, and then go up by three units".
* Then move in the direction specified by $\mathbf{v}$: "go to the right by four units, and then go up by six units".
The end result?
```
u = np.array((5, 3))
v = np.array((4, 6))
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{u}$', u + (-.5, -1.5))
plt.arrow(u[0], u[1], v[0], v[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{v}$', u + v + (-.5, -2.))
plt.xlim(-10, 10)
plt.ylim(-10, 10);
```
Geometrically, we have appended the arrow representing the vector $\mathbf{v}$ to the end of the arrow representing the vector $\mathbf{u}$ drawn starting at the origin.
What if we started at the origin, went in the direction specified by $\mathbf{v}$ and then went in the direction specified by $\mathbf{u}$? Where would we end up?
```
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, v[0], v[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{v}$', v + (-.5, .5))
plt.arrow(v[0], v[1], u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{u}$', v + u + (-2., -.5))
plt.xlim(-10, 10)
plt.ylim(-10, 10);
```
We would end up in the same place! More generally, for any vectors $\mathbf{u}$ and $\mathbf{v}$, vector addition is **commutative**, in other words, $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$. Let us visualise this on a plot:
```
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{u}$', u + (-.5, -1.5))
plt.arrow(u[0], u[1], v[0], v[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{v}$', u + v + (-.5, -2.))
plt.arrow(0, 0, v[0], v[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{v}$', v + (-.5, .5))
plt.arrow(v[0], v[1], u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{u}$', v + u + (-2., -.5))
plt.xlim(-10, 10)
plt.ylim(-10, 10);
```
The sum $\mathbf{u} + \mathbf{v}$ (which, of course, is equal to $\mathbf{v} + \mathbf{u}$ since vector addition is commutative) is itself a vector, which is represented by the diagonal of the parallelogram formed by the arrows above:
```
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{u}$', u + (-.5, -1.5))
plt.arrow(u[0], u[1], v[0], v[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{v}$', u + v + (-.5, -2.))
plt.arrow(0, 0, v[0], v[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{v}$', v + (-.5, .5))
plt.arrow(v[0], v[1], u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{u}$', v + u + (-2., -.5))
plt.arrow(0, 0, u[0] + v[0], u[1] + v[1], head_width=.75, length_includes_head=True)
plt.xlim(-10, 10)
plt.ylim(-10, 10);
```
We observe that the sum of $\mathbf{u} = (5, 3)$ and $\mathbf{v} = (4, 6)$ is given by adding them *element-wise* or *coordinate-wise*: $\mathbf{u} + \mathbf{v} = (5 + 4, 3 + 6) = (9, 9)$. It is indeed unsurprising that vector addition is commutative, since the addition of ordinary numbers is commutative: $$\mathbf{u} + \mathbf{v} = (5 + 4, 3 + 6) = (4 + 5, 6 + 3) = \mathbf{v} + \mathbf{u}.$$
## Scalar multiplication
Would it make sense to multiply a vector, such as $\mathbf{u} = (5, 3)$ by a number, say $\alpha = 1.5$ (we'll start referring to ordinary numbers as **scalars**)? A natural way to define **scalar multiplication** of vectors would also be element-wise:
$$\alpha \mathbf{u} = 1.5 (5, 3) = (1.5 \cdot 5, 1.5 \cdot 3) = (7.5, 4.5).$$
How can we interpret this geometrically? Well, it turns out that we obtain a vector whose length is $1.5$ times that of $\mathbf{u}$, and whose direction is the same as that of $\mathbf{u}$:
```
alpha = 1.5
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate(r'$\mathbf{u}$', u + (-.5, 1.))
plt.arrow(0, 0, alpha * u[0], alpha * u[1], head_width=.75, length_includes_head=True)
plt.annotate(r'$\alpha \mathbf{u}$', alpha * u + (-.5, 1.))
plt.xlim(-10, 10)
plt.ylim(-10, 10);
```
What if, instead, we multiplied $\mathbf{u}$ by $\beta = -1.5$? Well,
$$\beta \mathbf{u} = -1.5(5, 3) = (-7.5, -4.5).$$
```
beta = -1.5
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate(r'$\mathbf{u}$', u + (-.5, 1.))
plt.arrow(0, 0, beta * u[0], beta * u[1], head_width=.75, length_includes_head=True)
plt.annotate(r'$\beta \mathbf{u}$', beta * u + (-.5, 1.))
plt.xlim(-10, 10)
plt.ylim(-10, 10);
```
Geometrically, we have obtained a vector whose length is $1.5$ times that of $\mathbf{u}$, and whose direction is the *opposite* (because $\beta$ is negative) to that of $\mathbf{u}$.
## The length of a vector: vector norm
By the way, how do we obtain the length of a vector? By Pythagoras's theorem, we add up the squares of the coordinates and take the square root:
```
beta = -1.5
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate(r'$\mathbf{u} = (5, 3)$', u + (-.5, 1.))
plt.arrow(0, 0, u[0], 0, head_width=.75, length_includes_head=True)
plt.annotate(r'$(5, 0)$', np.array((u[0], 0)) + (-.5, -1.))
plt.arrow(u[0], 0, 0, u[1], head_width=.75, length_includes_head=True)
plt.annotate(r'$(0, 3)$', u + (.5, -1.))
#plt.annotate(r'$\mathbf{u}$', np.array((u[0], 0)) + (-.5, -1.))
plt.xlim(-10, 10)
plt.ylim(-10, 10);
```
The resulting quantity, which is equal to the length of the vector, is called the **norm** of the vector and is denoted by
$$\|\mathbf{u}\| = \sqrt{u_1^2 + u_2^2} = \sqrt{5^2 + 3^2} = \sqrt{34} \approx 5.8309519.$$
Notice that, in Python, we can use NumPy arrays to represent vectors:
```
import numpy as np
u = np.array((5, 3))
u
v = np.array((4, 6))
v
```
NumPy "knows" the correct definition of vector addition:
```
u + v
v + u
```
It also knows the correct definition of multiplication of vectors by scalars:
```
alpha = 1.5
alpha * u
```
To obtain the norm of a vector, we can use
```
np.linalg.norm(u)
```
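This agrees with evaluating the formula directly (with `u = np.array((5, 3))` as above):
```
np.sqrt(np.sum(u ** 2))  # same value as np.linalg.norm(u)
```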
## The inner product, the angle between two vectors
The **inner product** or **dot product** of two vectors is the sum of products of their respective coordinates:
$$\langle \mathbf{u}, \mathbf{v} \rangle = u_1 \cdot v_1 + u_2 \cdot v_2.$$
In particular, for $\mathbf{u} = (5, 3)$ and $\mathbf{v} = (4, 6)$, it is given by
$$\langle \mathbf{u}, \mathbf{v} \rangle = 5 \cdot 4 + 3 \cdot 6 = 38.$$
We can check our calculations using Python:
```
np.dot(u, v)
```
It is easy to see that
$$\|\mathbf{u}\| = \sqrt{\langle \mathbf{u}, \mathbf{u} \rangle}.$$
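A quick numerical check of this identity (with `u` as above):
```
np.isclose(np.linalg.norm(u), np.sqrt(np.dot(u, u)))  # True
```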
It is also easy to notice that the inner product is **commutative**,
$$\langle \mathbf{u}, \mathbf{v} \rangle = \langle \mathbf{v}, \mathbf{u} \rangle.$$
Furthermore, if $\alpha$ is a scalar, then
$$\langle \alpha \mathbf{u}, \mathbf{v} \rangle = \alpha \langle \mathbf{u}, \mathbf{v} \rangle,$$
and
$$\langle \mathbf{u} + \mathbf{v}, \mathbf{w} \rangle = \langle \mathbf{u}, \mathbf{w} \rangle + \langle \mathbf{v}, \mathbf{w} \rangle;$$
these two properties together are referred to as **linearity in the first argument**.
The inner product is **positive-definite**. In other words, for all vectors $\mathbf{u}$,
$$\langle \mathbf{u}, \mathbf{u} \rangle \geq 0,$$
and
$$\langle \mathbf{u}, \mathbf{u} \rangle = 0$$
if and only if $\mathbf{u}$ is the **zero vector**, $\mathbf{0}$, i.e. the vector whose elements are all zero.
One can use the following formula to find the angle $\theta$ between two vectors $\mathbf{u}$ and $\mathbf{v}$:
$$\cos \theta = \frac{\langle \mathbf{u}, \mathbf{v} \rangle}{\|\mathbf{u}\| \|\mathbf{v}\|}.$$
Thus, in our example, with $\mathbf{u} = (5, 3)$ and $\mathbf{v} = (4, 6)$, the angle between the vectors is given by
```
np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```
radians or
```
0.44237422297674489 / np.pi * 180.
```
degrees. We can visually verify that this is indeed true:
```
u = np.array((5, 3))
v = np.array((4, 6))
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{u}$', u + (-.5, -1.5))
plt.arrow(0, 0, v[0], v[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{v}$', v + (-.5, .75))
plt.xlim(-10, 10)
plt.ylim(-10, 10);
```
## Vectors in three dimensions
So far, we have considered vectors that have two coordinates each, corresponding to coordinates on the two-dimensional plane. Instead, we could consider three-dimensional vectors, such as $a = (3, 5, 7)$ and $b = (4, 6, 4)$:
```
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.set_xlim((-10, 10))
ax.set_ylim((-10, 10))
ax.set_zlim((-10, 10))
ax.quiver(0, 0, 0, 3, 5, 7)
ax.quiver(0, 0, 0, 4, 6, 4);
```
In the three-dimensional case, vector addition and multiplication by scalars are defined elementwise, as before:
```
a = np.array((3., 5., 7.))
b = np.array((4., 6., 4.))
a + b
alpha * a
beta = -alpha
beta * a
```
## Vectors in general
We needn't restrict ourselves to three-dimensional vectors. We could easily define $c = (4, 7, 8, 2)$ and $d = (-12, 3, 7, 3)$, and do arithmetic element-wise, as before:
```
c = np.array((4, 7, 8, 2))
d = np.array((-12, 3, 7, 3))
c + d
alpha * c
```
The problem is that we wouldn't be able to visualise four-dimensional vectors. (We can nonetheless gain some geometric intuition by "pretending" that we deal with familiar two- and three-dimensional spaces.)
Notice that it would only make sense to talk about adding the vectors $u$ and $v$ if they have the same number of elements. In general, we talk about the **vector space** of two-dimensional vectors, $\mathbb{R}^2$, the vector space of three-dimensional vectors, $\mathbb{R}^3$, the vector space of four-dimensional vectors, $\mathbb{R}^4$, etc. and write $$(3, 5, 7) \in \mathbb{R}^3$$ meaning that the vector $(3, 5, 7)$ is an element of $\mathbb{R}^3$. It makes sense to talk about the addition of two vectors if they belong to the same vector space.
Mathematicians like abstraction. Indeed, much of the power of mathematics is in abstraction. The notions of a vector and vector space can be further generalised as follows.
Formally, a **vector space** over a field $F$ is a set $V$ together with two operations that satisfy the following eight axioms. The first four axioms stipulate the properties of vector addition alone, whereas the last four involve scalar multiplication:
* **A1**: Associativity of addition: $(u + v) + w = u + (v + w)$.
* **A2**: Commutativity of addition: $u + v = v + u$.
* **A3**: **Identity** element of addition: there exists an element $0 \in V$, called the **zero vector**, such that $0 + v = v$ for all $v \in V$.
* **A4**: **Inverse** elements of addition: $v + (-v) = 0$.
* **S1**: Distributivity of scalar multiplication over vector addition: $\alpha(u + v) = \alpha u + \alpha v$.
* **S2**: Distributivity of scalar multiplication over field addition: $(\alpha + \beta)v = \alpha v + \beta v$.
* **S3**: Compatibility of scalar multiplication with field multiplication: $\alpha (\beta v) = (\alpha \beta) v$.
* **S4**: **Identity** element of scalar multiplication, preservation of scale: $1 v = v$.
```
# A quick numerical check of these axioms for vectors in R^3
u = np.array((3., 5., 7.))
v = np.array((4., 6., 4.))
w = np.array((-3., -3., 10.))
(u + v) + w
u + (v + w)
(u + v) + w == u + (v + w)
np.all((u + v) + w == u + (v + w))
np.all(u + v == v + u)
zero = np.array((0., 0., 0.))
np.all(zero + v == v)
np.all(np.array(v + (-v) == zero))
alpha = -5.
beta = 7.
np.all(alpha * (u + v) == alpha * u + alpha * v)
np.all((alpha + beta) * v == alpha * v + beta * v)
np.all(alpha * (beta * v) == (alpha * beta) * v)
np.all(1 * v == v)
# Functions also form a vector space: addition and scalar multiplication are defined pointwise
u = lambda x: 2. * x
v = lambda x: x * x
w = lambda x: 3. * x + 1.
def plus(f1, f2):
return lambda x: f1(x) + f2(x)
lhs = plus(plus(u, v), w)
rhs = plus(u, plus(v, w))
lhs
rhs
lhs(5.)
rhs(5.)
lhs(5.) == rhs(5.)
lhs(10.) == rhs(10.)
plus(u, v)
plus(u, v)(5.)
plus(u, v)(5.) == plus(v, u)(5.)
def scalar_product(s, f):
return lambda x: s * f(x)
lhs = scalar_product(alpha, plus(u, v))
rhs = plus(scalar_product(alpha, u), scalar_product(alpha, v))
lhs(5.) == rhs(5.)
# Linear combinations: construct w = alpha * u + beta * v for two vectors in the plane
u = np.array((4., 6.))
v = np.array((5., 3.))
alpha = -.5
beta = 2.
w = alpha * u + beta * v
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, u[0], u[1], head_width=.75, length_includes_head=True)
plt.arrow(0, 0, v[0], v[1], head_width=.75, length_includes_head=True)
plt.arrow(0, 0, alpha*u[0], alpha*u[1], head_width=.75, length_includes_head=True)
plt.arrow(alpha*u[0], alpha*u[1], beta*v[0], beta*v[1], head_width=.75, length_includes_head=True)
plt.arrow(0, 0, w[0], w[1], head_width=.75, length_includes_head=True)
plt.xlim(-10, 10)
plt.ylim(-10, 10);
w = np.array((-5., 2.5))
90-35
55/14
3*14
55-42
(-55/14) * u + (15/7) * v
2.*u - 3.*v
```
Now let us suppose that we are given a vector in two dimensions, say, $w = (-7, 3)$:
```
w = np.array((-7., 3.))
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate(r'$\mathbf{u}$', u + (-.5, .5))
plt.arrow(0, 0, v[0], v[1], head_width=.75, length_includes_head=True)
plt.annotate(r'$\mathbf{v}$', v + (-.5, .5))
plt.arrow(0, 0, w[0], w[1], head_width=.75, length_includes_head=True)
plt.annotate(r'$\mathbf{w}$', w + (-.5, .5))
plt.xlim(-10, 10)
plt.ylim(-10, 10);
```
Can we obtain this vector as a linear combination of $\mathbf{u}$ and $\mathbf{v}$? In other words, can we find the scalars $\alpha$ and $\beta$ such that $$\alpha u + \beta v = w?$$
This seems easy enough: what we really need is
$$\alpha (4, 6) + \beta (5, 3) = (-7, 3),$$
i.e.
$$(4\alpha, 6\alpha) + (5\beta, 3\beta) = (-7, 3),$$
or
$$(4\alpha + 5\beta, 6\alpha + 3\beta) = (-7, 3).$$
The left-hand side and the right-hand side must be equal coordinatewise. Thus we obtain a system of linear equations
$$4\alpha + 5\beta = -7,$$
$$6\alpha + 3\beta = 3.$$
From the second linear equation, we obtain
$$\alpha = \frac{3 - 3\beta}{6} = \frac{1 - \beta}{2}.$$
We substitute this into the first linear equation, obtaining
$$4 \cdot \frac{1 - \beta}{2} + 5\beta = -7,$$
whence $\beta = -3$, and so $\alpha = \frac{1 - (-3)}{2} = 2$.
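We can verify this numerically (with `u`, `v` and `w` as defined above):
```
alpha, beta = 2., -3.
alpha * u + beta * v  # array([-7., 3.]), i.e. equal to w
```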
|
github_jupyter
|
5 + 3
10 * 5
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure(figsize=(5, 5))
plt.plot(5, 3, 'x')
plt.xlim(-10, 10)
plt.ylim(-10, 10);
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, 5, 3, head_width=.75, length_includes_head=True)
plt.xlim(-10, 10)
plt.ylim(-10, 10);
u = np.array((5, 3))
v = np.array((4, 6))
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{u}$', u + (-.5, -1.5))
plt.arrow(u[0], u[1], v[0], v[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{v}$', u + v + (-.5, -2.))
plt.xlim(-10, 10)
plt.ylim(-10, 10);
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, v[0], v[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{v}$', v + (-.5, .5))
plt.arrow(v[0], v[1], u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{u}$', v + u + (-2., -.5))
plt.xlim(-10, 10)
plt.ylim(-10, 10);
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{u}$', u + (-.5, -1.5))
plt.arrow(u[0], u[1], v[0], v[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{v}$', u + v + (-.5, -2.))
plt.arrow(0, 0, v[0], v[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{v}$', v + (-.5, .5))
plt.arrow(v[0], v[1], u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{u}$', v + u + (-2., -.5))
plt.xlim(-10, 10)
plt.ylim(-10, 10);
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{u}$', u + (-.5, -1.5))
plt.arrow(u[0], u[1], v[0], v[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{v}$', u + v + (-.5, -2.))
plt.arrow(0, 0, v[0], v[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{v}$', v + (-.5, .5))
plt.arrow(v[0], v[1], u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{u}$', v + u + (-2., -.5))
plt.arrow(0, 0, u[0] + v[0], u[1] + v[1], head_width=.75, length_includes_head=True)
plt.xlim(-10, 10)
plt.ylim(-10, 10);
alpha = 1.5
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate(r'$\mathbf{u}$', u + (-.5, 1.))
plt.arrow(0, 0, alpha * u[0], alpha * u[1], head_width=.75, length_includes_head=True)
plt.annotate(r'$\alpha \mathbf{u}$', alpha * u + (-.5, 1.))
plt.xlim(-10, 10)
plt.ylim(-10, 10);
beta = -1.5
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate(r'$\mathbf{u}$', u + (-.5, 1.))
plt.arrow(0, 0, beta * u[0], beta * u[1], head_width=.75, length_includes_head=True)
plt.annotate(r'$\beta \mathbf{u}$', beta * u + (-.5, 1.))
plt.xlim(-10, 10)
plt.ylim(-10, 10);
beta = -1.5
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate(r'$\mathbf{u} = (5, 3)$', u + (-.5, 1.))
plt.arrow(0, 0, u[0], 0, head_width=.75, length_includes_head=True)
plt.annotate(r'$(5, 0)$', np.array((u[0], 0)) + (-.5, -1.))
plt.arrow(u[0], 0, 0, u[1], head_width=.75, length_includes_head=True)
plt.annotate(r'$(0, 3)$', u + (.5, -1.))
#plt.annotate(r'$\mathbf{u}$', np.array((u[0], 0)) + (-.5, -1.))
plt.xlim(-10, 10)
plt.ylim(-10, 10);
import numpy
u = np.array((5, 3))
u
v = np.array((4, 6))
v
u + v
v + u
alpha = 1.5
alpha * u
np.linalg.norm(u)
np.dot(u, v)
np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
0.44237422297674489 / np.pi * 180.
u = np.array((5, 3))
v = np.array((4, 6))
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{u}$', u + (-.5, -1.5))
plt.arrow(0, 0, v[0], v[1], head_width=.75, length_includes_head=True)
plt.annotate('$\mathbf{v}$', v + (-.5, .75))
plt.xlim(-10, 10)
plt.ylim(-10, 10);
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.set_xlim((-10, 10))
ax.set_ylim((-10, 10))
ax.set_zlim((-10, 10))
ax.quiver(0, 0, 0, 3, 5, 7)
ax.quiver(0, 0, 0, 4, 6, 4);
a = np.array((3., 5., 7.))
b = np.array((4., 6., 4.))
a + b
alpha * a
beta = -alpha
beta * a
c = np.array((4, 7, 8, 2))
d = np.array((-12, 3, 7, 3))
c + d
alpha * c
u = np.array((3., 5., 7.))
v = np.array((4., 6., 4.))
w = np.array((-3., -3., 10.))
(u + v) + w
u + (v + w)
(u + v) + w == u + (v + w)
np.all((u + v) + w == u + (v + w))
np.all(u + v == v + u)
zero = np.array((0., 0., 0.))
np.all(zero + v == v)
np.all(np.array(v + (-v) == zero))
alpha = -5.
beta = 7.
np.all(alpha * (u + v) == alpha * u + alpha * v)
np.all((alpha + beta) * v == alpha * v + beta * v)
np.all(alpha * (beta * v) == (alpha * beta) * v)
np.all(1 * v == v)
u = lambda x: 2. * x
v = lambda x: x * x
w = lambda x: 3. * x + 1.
def plus(f1, f2):
return lambda x: f1(x) + f2(x)
lhs = plus(plus(u, v), w)
rhs = plus(u, plus(v, w))
lhs
rhs
lhs(5.)
rhs(5.)
lhs(5.) == rhs(5.)
lhs(10.) == rhs(10.)
plus(u, v)
plus(u, v)(5.)
plus(u, v)(5.) == plus(v, u)(5.)
def scalar_product(s, f):
return lambda x: s * f(x)
lhs = scalar_product(alpha, plus(u, v))
rhs = plus(scalar_product(alpha, u), scalar_product(alpha, v))
lhs(5.) == rhs(5.)
u = np.array((4., 6.))
v = np.array((5., 3.))
alpha = -.5
beta = 2.
w = alpha * u + beta * v
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, u[0], u[1], head_width=.75, length_includes_head=True)
plt.arrow(0, 0, v[0], v[1], head_width=.75, length_includes_head=True)
plt.arrow(0, 0, alpha*u[0], alpha*u[1], head_width=.75, length_includes_head=True)
plt.arrow(alpha*u[0], alpha*u[1], beta*v[0], beta*v[1], head_width=.75, length_includes_head=True)
plt.arrow(0, 0, w[0], w[1], head_width=.75, length_includes_head=True)
plt.xlim(-10, 10)
plt.ylim(-10, 10);
w = np.array((-5., 2.5))
90-35
55/14
3*14
55-42
(-55/14) * u + (15/7) * v
2.*u - 3.*v
w = np.array((-7., 3.))
plt.figure(figsize=(5, 5))
plt.plot(0, 0, 'o', markerfacecolor='none')
plt.arrow(0, 0, u[0], u[1], head_width=.75, length_includes_head=True)
plt.annotate(r'$\mathbf{u}$', u + (-.5, .5))
plt.arrow(0, 0, v[0], v[1], head_width=.75, length_includes_head=True)
plt.annotate(r'$\mathbf{v}$', v + (-.5, .5))
plt.arrow(0, 0, w[0], w[1], head_width=.75, length_includes_head=True)
plt.annotate(r'$\mathbf{w}$', w + (-.5, .5))
plt.xlim(-10, 10)
plt.ylim(-10, 10);
| 0.478529 | 0.992108 |
# Understanding the Effect of Outliers on Different PDFs
```
import jax.numpy as jnp
from jax import random
import matplotlib.pyplot as plt
from scipy.stats import t, laplace, norm
import seaborn as sns
import numpy as np
try:
from probml_utils import savefig, latexify
except ModuleNotFoundError:
%pip install -qq git+https://github.com/probml/probml-utils.git
from probml_utils import savefig, latexify
latexify(width_scale_factor=2, fig_height=2.0)
def plot_outlier_effect(
save_name,
outlier_pos=0,
outliers=[],
bins=7,
samples_norm_dist=30,
samples_graph_xaxis=500,
range_xaxis=[-5, 10],
range_yaxis=[0, 0.60],
fig=None,
ax=None,
):
"""
Sample from a normal distribution and plot the PDF for
normal distribution, laplacian distribution, and the student T
distribution. The function plots/saves data for distributions.
If outliers are provided, we see the robustness of the student
T distribution compared to the normal distribution.
Args:
----------
save_name : string
The filenames to save the graphs
outlier_pos : int, default=0
Changes position of outliers
outliers : list, default=[]
A list of outlier values
bins : int, default=7
Value of bin size for normal distribution histogram
samples_norm_dist : int, default=30
Number of samples to be taken from the normal distribution
samples_graph_xaxis : int, default=500
Number of values for the x-axis i.e the values the
random variable can take
range_xaxis : list, default=[-5, 10]
The range of values for the x-axis
range_yaxis : list, default=[0, 0.6]
The range of values for the y-axis
fig : None
Will be used to store matplotlib figure
ax : None
Will be used to store matplotlib axes
Returns:
----------
fig : matplotlib figure object
Stores the graph data displayed
ax : matplotlib axis object
Stores the axes data of the graph displayed
"""
# Generate Samples from normal distribution
norm_dist_sample = random.normal(random.PRNGKey(42), shape=(samples_norm_dist,))
# Generate values for x axis i.e. the values your random variable can take
x_axis = jnp.linspace(range_xaxis[0], range_xaxis[1], samples_graph_xaxis)
# Set figure
fig, ax = plt.subplots()
if outliers:
samples = jnp.hstack((norm_dist_sample, jnp.array(outliers) + outlier_pos))
# Plot the data from normal distribution
ax.hist(
np.array(norm_dist_sample),
bins,
color="steelblue",
ec="steelblue",
weights=[1 / (norm_dist_sample.shape[0] + len(outliers))] * norm_dist_sample.shape[0],
rwidth=0.8,
)
# Plot outlier data
ax.hist(
np.array(outliers) + outlier_pos,
len(outliers),
color="steelblue",
ec="steelblue",
weights=[1 / (norm_dist_sample.shape[0] + len(outliers))] * len(outliers),
rwidth=0.8,
)
else:
samples = norm_dist_sample
# Plot the data from normal distribution
ax.hist(
np.array(norm_dist_sample),
bins,
color="steelblue",
ec="steelblue",
weights=[1 / norm_dist_sample.shape[0]] * norm_dist_sample.shape[0],
rwidth=0.8,
)
# Calculate mean and standard deviation for different distributions and then
# find the PDF for each distribution
loc, scale = norm.fit(samples)
norm_pdf = norm.pdf(x_axis, loc=loc, scale=scale)
loc, scale = laplace.fit(samples)
laplace_pdf = laplace.pdf(x_axis, loc=loc, scale=scale)
fd, loc, scale = t.fit(samples)
studentT_pdf = t.pdf(x_axis, fd, loc=loc, scale=scale)
# Find range of values for PDF i.e y-axis
y_range = range_yaxis
# Update tick intervals for x-axis
ax.set_xticks(jnp.arange(range_xaxis[0], range_xaxis[1] + 1, 5))
# Update the tick intervals and limit for y-axis
ax.set_ylim(y_range)
ax.set_yticks(jnp.linspace(y_range[0], y_range[1], 5))
# Plot the different PDF's obtained
ax.plot(x_axis, norm_pdf, "k-", linewidth=2.0)
ax.plot(x_axis, studentT_pdf, "r-.", linewidth=2.0)
ax.plot(x_axis, laplace_pdf, "b:", linewidth=2.0)
# Update the Legend and the axis labels
ax.legend(("gaussian", "student T", "laplace", "data"))
ax.set_xlabel("$x$")
ax.set_ylabel("$p(x)$")
sns.despine()
# Save figure to files
if len(save_name) > 0:
savefig(save_name)
return fig, ax
plot_outlier_effect(save_name="robust_pdf_plot_latexified")
plot_outlier_effect(save_name="robust_pdf_plot_outliers_latexified", outliers=[8, 8.75, 9.5])
from ipywidgets import interact
@interact(outlier_pos=(-5, 5))
def interactive_plot(outlier_pos):
fig, ax = plot_outlier_effect(save_name="", outlier_pos=outlier_pos, outliers=[8, 8.75, 9.5])
```
|
github_jupyter
|
import jax.numpy as jnp
from jax import random
import matplotlib.pyplot as plt
from scipy.stats import t, laplace, norm
import seaborn as sns
import numpy as np
try:
from probml_utils import savefig, latexify
except ModuleNotFoundError:
%pip install -qq git+https://github.com/probml/probml-utils.git
from probml_utils import savefig, latexify
latexify(width_scale_factor=2, fig_height=2.0)
def plot_outlier_effect(
save_name,
outlier_pos=0,
outliers=[],
bins=7,
samples_norm_dist=30,
samples_graph_xaxis=500,
range_xaxis=[-5, 10],
range_yaxis=[0, 0.60],
fig=None,
ax=None,
):
"""
Sample from a normal distribution and plot the PDF for
normal distribution, laplacian distribution, and the student T
distribution. The function plots/saves data for distributions.
If outliers are provided, we see the robustness of the student
T distribution compared to the normal distribution.
Args:
----------
save_name : string
The filenames to save the graphs
outlier_pos : int, default=0
Changes position of outliers
outliers : list, default=[]
A list of outlier values
bins : int, default=7
Value of bin size for normal distribution histogram
samples_norm_dist : int, default=30
Number of samples to be taken from the normal distribution
samples_graph_xaxis : int, default=500
Number of values for the x-axis i.e the values the
random variable can take
range_xaxis : list, default=[-5, 10]
The range of values for the x-axis
range_yaxis : list, default=[0, 0.6]
The range of values for the y-axis
fig : None
Will be used to store matplotlib figure
ax : None
Will be used to store matplotlib axes
Returns:
----------
fig : matplotlib figure object
Stores the graph data displayed
ax : matplotlib axis object
Stores the axes data of the graph displayed
"""
# Generate Samples from normal distribution
norm_dist_sample = random.normal(random.PRNGKey(42), shape=(samples_norm_dist,))
# Generate values for x axis i.e. the values your random variable can take
x_axis = jnp.linspace(range_xaxis[0], range_xaxis[1], samples_graph_xaxis)
# Set figure
fig, ax = plt.subplots()
if outliers:
samples = jnp.hstack((norm_dist_sample, jnp.array(outliers) + outlier_pos))
# Plot the data from normal distribution
ax.hist(
np.array(norm_dist_sample),
bins,
color="steelblue",
ec="steelblue",
weights=[1 / (norm_dist_sample.shape[0] + len(outliers))] * norm_dist_sample.shape[0],
rwidth=0.8,
)
# Plot outlier data
ax.hist(
np.array(outliers) + outlier_pos,
len(outliers),
color="steelblue",
ec="steelblue",
weights=[1 / (norm_dist_sample.shape[0] + len(outliers))] * len(outliers),
rwidth=0.8,
)
else:
samples = norm_dist_sample
# Plot the data from normal distribution
ax.hist(
np.array(norm_dist_sample),
bins,
color="steelblue",
ec="steelblue",
weights=[1 / norm_dist_sample.shape[0]] * norm_dist_sample.shape[0],
rwidth=0.8,
)
# Calculate mean and standard deviation for different distributions and then
# find the PDF for each distribution
loc, scale = norm.fit(samples)
norm_pdf = norm.pdf(x_axis, loc=loc, scale=scale)
loc, scale = laplace.fit(samples)
laplace_pdf = laplace.pdf(x_axis, loc=loc, scale=scale)
fd, loc, scale = t.fit(samples)
studentT_pdf = t.pdf(x_axis, fd, loc=loc, scale=scale)
# Find range of values for PDF i.e y-axis
y_range = range_yaxis
# Update tick intervals for x-axis
ax.set_xticks(jnp.arange(range_xaxis[0], range_xaxis[1] + 1, 5))
# Update the tick intervals and limit for y-axis
ax.set_ylim(y_range)
ax.set_yticks(jnp.linspace(y_range[0], y_range[1], 5))
# Plot the different PDF's obtained
ax.plot(x_axis, norm_pdf, "k-", linewidth=2.0)
ax.plot(x_axis, studentT_pdf, "r-.", linewidth=2.0)
ax.plot(x_axis, laplace_pdf, "b:", linewidth=2.0)
# Update the Legend and the axis labels
ax.legend(("gaussian", "student T", "laplace", "data"))
ax.set_xlabel("$x$")
ax.set_ylabel("$p(x)$")
sns.despine()
# Save figure to files
if len(save_name) > 0:
savefig(save_name)
return fig, ax
plot_outlier_effect(save_name="robust_pdf_plot_latexified")
plot_outlier_effect(save_name="robust_pdf_plot_outliers_latexified", outliers=[8, 8.75, 9.5])
from ipywidgets import interact
@interact(outlier_pos=(-5, 5))
def interactive_plot(outlier_pos):
fig, ax = plot_outlier_effect(save_name="", outlier_pos=outlier_pos, outliers=[8, 8.75, 9.5])
| 0.755186 | 0.895796 |
# CS294-112 Fall 2018 Tensorflow Tutorial
This tutorial will provide a brief overview of the core concepts and functionality of Tensorflow. It will cover the following:
0. What is Tensorflow
1. How to input data
2. How to perform computations
3. How to create variables
4. How to train a neural network for a simple regression problem
5. Tips and tricks
```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib.patches as mpatches
def tf_reset():
try:
sess.close()
except:
pass
tf.reset_default_graph()
return tf.Session()
```
# 0. What is Tensorflow
Tensorflow is a framework to define a series of computations. You define inputs, what operations should be performed, and then Tensorflow will compute the outputs for you.
Below is a simple high-level example:
```
# create the session you'll work in
# you can think of this as a "blank piece of paper" that you'll be writing math on
sess = tf_reset()
# define your inputs
a = tf.constant(1.0)
b = tf.constant(2.0)
# do some operations
c = a + b
# get the result
c_run = sess.run(c)
print('c = {0}'.format(c_run))
```
# 1. How to input data
Tensorflow has multiple ways for you to input data. One way is to have the inputs be constants:
```
sess = tf_reset()
# define your inputs
a = tf.constant(1.0)
b = tf.constant(2.0)
# do some operations
c = a + b
# get the result
c_run = sess.run(c)
print('c = {0}'.format(c_run))
```
However, having our inputs be constants is inflexible. We want to be able to change what data we input at runtime. We can do this using placeholders:
```
sess = tf_reset()
# define your inputs
a = tf.placeholder(dtype=tf.float32, shape=[1], name='a_placeholder')
b = tf.placeholder(dtype=tf.float32, shape=[1], name='b_placeholder')
# do some operations
c = a + b
# get the result
c0_run = sess.run(c, feed_dict={a: [1.0], b: [2.0]})
c1_run = sess.run(c, feed_dict={a: [2.0], b: [4.0]})
print('c0 = {0}'.format(c0_run))
print('c1 = {0}'.format(c1_run))
```
But what if we don't know the size of our input beforehand? One dimension of a tensor is allowed to be 'None', which means it can be variable sized:
```
sess = tf_reset()
# inputs
a = tf.placeholder(dtype=tf.float32, shape=[None], name='a_placeholder')
b = tf.placeholder(dtype=tf.float32, shape=[None], name='b_placeholder')
# do some operations
c = a + b
# get outputs
c0_run = sess.run(c, feed_dict={a: [1.0], b: [2.0]})
c1_run = sess.run(c, feed_dict={a: [1.0, 2.0], b: [2.0, 4.0]})
print(a)
print('a shape: {0}'.format(a.get_shape()))
print(b)
print('b shape: {0}'.format(b.get_shape()))
print('c0 = {0}'.format(c0_run))
print('c1 = {0}'.format(c1_run))
```
# 2. How to perform computations
Now that we can input data, we want to perform useful computations on the data.
First, let's create some data to work with:
```
sess = tf_reset()
# inputs
a = tf.constant([[-1.], [-2.], [-3.]], dtype=tf.float32)
b = tf.constant([[1., 2., 3.]], dtype=tf.float32)
a_run, b_run = sess.run([a, b])
print('a:\n{0}'.format(a_run))
print('b:\n{0}'.format(b_run))
```
We can do simple operations, such as addition:
```
c = b + b
c_run = sess.run(c)
print('b:\n{0}'.format(b_run))
print('c:\n{0}'.format(c_run))
```
Be careful about the dimensions of the tensors; due to broadcasting, some operations may work even when you think they shouldn't...
```
c = a + b
c_run = sess.run(c)
print('a:\n{0}'.format(a_run))
print('b:\n{0}'.format(b_run))
print('c:\n{0}'.format(c_run))
```
Also, some operations may be different than what you expect:
```
c_elementwise = a * b
c_matmul = tf.matmul(b, a)
c_elementwise_run, c_matmul_run = sess.run([c_elementwise, c_matmul])
print('a:\n{0}'.format(a_run))
print('b:\n{0}'.format(b_run))
print('c_elementwise:\n{0}'.format(c_elementwise_run))
print('c_matmul: \n{0}'.format(c_matmul_run))
```
Operations can be chained together:
```
# operations can be chained together
c0 = b + b
c1 = c0 + 1
c0_run, c1_run = sess.run([c0, c1])
print('b:\n{0}'.format(b_run))
print('c0:\n{0}'.format(c0_run))
print('c1:\n{0}'.format(c1_run))
```
Finally, Tensorflow has many useful built-in operations:
```
c = tf.reduce_mean(b)
c_run = sess.run(c)
print('b:\n{0}'.format(b_run))
print('c:\n{0}'.format(c_run))
```
# 3. How to create variables
Now that we can input data and perform computations, we want some of these operations to involve variables that are free parameters, and can be trained using an optimizer (e.g., gradient descent).
First, let's create some data to work with:
```
sess = tf_reset()
# inputs
b = tf.constant([[1., 2., 3.]], dtype=tf.float32)
sess = tf.Session()
b_run = sess.run(b)
print('b:\n{0}'.format(b_run))
```
We'll now create a variable
```
var_init_value = [[2.0, 4.0, 6.0]]
var = tf.get_variable(name='myvar',
shape=[1, 3],
dtype=tf.float32,
initializer=tf.constant_initializer(var_init_value))
print(var)
```
and check that it's been added to Tensorflow's variables list:
```
print(tf.global_variables())
```
We can do operations with the variable just like any other tensor:
```
# can do operations
c = b + var
print(b)
print(var)
print(c)
```
Before we can run any of these operations, we must first initialize the variables
```
init_op = tf.global_variables_initializer()
sess.run(init_op)
```
and then we can run the operations just as we normally would.
```
c_run = sess.run(c)
print('b:\n{0}'.format(b_run))
print('var:\n{0}'.format(var_init_value))
print('c:\n{0}'.format(c_run))
```
So far we haven't said how to optimize these variables. We'll cover that next in the context of an example.
# 4. How to train a neural network for a simple regression problem
We've discussed how to input data, perform operations, and create variables. We'll now show how to combine all of these---with some minor additions---to train a neural network on a simple regression problem.
First, we'll create data for a 1-dimensional regression problem:
```
# generate the data
inputs = np.linspace(-2*np.pi, 2*np.pi, 10000)[:, None]
outputs = np.sin(inputs) + 0.05 * np.random.normal(size=[len(inputs),1])
plt.scatter(inputs[:, 0], outputs[:, 0], s=0.1, color='k', marker='o')
```
The code below creates the inputs, variables, neural network operations, mean-squared-error loss, and Adam optimizer, and then runs the optimizer using minibatches of the data.
```
sess = tf_reset()
def create_model():
# create inputs
input_ph = tf.placeholder(dtype=tf.float32, shape=[None, 1])
output_ph = tf.placeholder(dtype=tf.float32, shape=[None, 1])
# create variables
W0 = tf.get_variable(name='W0', shape=[1, 20], initializer=tf.contrib.layers.xavier_initializer())
W1 = tf.get_variable(name='W1', shape=[20, 20], initializer=tf.contrib.layers.xavier_initializer())
W2 = tf.get_variable(name='W2', shape=[20, 1], initializer=tf.contrib.layers.xavier_initializer())
b0 = tf.get_variable(name='b0', shape=[20], initializer=tf.constant_initializer(0.))
b1 = tf.get_variable(name='b1', shape=[20], initializer=tf.constant_initializer(0.))
b2 = tf.get_variable(name='b2', shape=[1], initializer=tf.constant_initializer(0.))
weights = [W0, W1, W2]
biases = [b0, b1, b2]
activations = [tf.nn.relu, tf.nn.relu, None]
# create computation graph
layer = input_ph
for W, b, activation in zip(weights, biases, activations):
layer = tf.matmul(layer, W) + b
if activation is not None:
layer = activation(layer)
output_pred = layer
return input_ph, output_ph, output_pred
input_ph, output_ph, output_pred = create_model()
# create loss
mse = tf.reduce_mean(0.5 * tf.square(output_pred - output_ph))
# create optimizer
opt = tf.train.AdamOptimizer().minimize(mse)
# initialize variables
sess.run(tf.global_variables_initializer())
# create saver to save model variables
saver = tf.train.Saver()
# run training
batch_size = 32
for training_step in range(10001):
# get a random subset of the training data
indices = np.random.randint(low=0, high=len(inputs), size=batch_size)
input_batch = inputs[indices]
output_batch = outputs[indices]
# run the optimizer and get the mse
_, mse_run = sess.run([opt, mse], feed_dict={input_ph: input_batch, output_ph: output_batch})
# print the mse every so often
if training_step % 1000 == 0:
print('{0:04d} mse: {1:.3f}'.format(training_step, mse_run))
saver.save(sess, '/tmp/model.ckpt')
```
Now that the neural network is trained, we can use it to make predictions:
```
sess = tf_reset()
# create the model
input_ph, output_ph, output_pred = create_model()
# restore the saved model
saver = tf.train.Saver()
saver.restore(sess, "/tmp/model.ckpt")
output_pred_run = sess.run(output_pred, feed_dict={input_ph: inputs})
plt.scatter(inputs[:, 0], outputs[:, 0], c='k', marker='o', s=0.1)
plt.scatter(inputs[:, 0], output_pred_run[:, 0], c='r', marker='o', s=0.1)
```
Not so hard after all! There is much more functionality to Tensorflow besides what we've covered, but you now know the basics.
# 5. Tips and tricks
##### (a) Check your dimensions
```
# example of "surprising" resulting dimensions due to broadcasting
a = tf.constant(np.random.random((4, 1)))
b = tf.constant(np.random.random((1, 4)))
c = a * b
assert c.get_shape() == (4, 4)
```
##### (b) Check what variables have been created
```
sess = tf_reset()
a = tf.get_variable('I_am_a_variable', shape=[4, 6])
b = tf.get_variable('I_am_a_variable_too', shape=[2, 7])
for var in tf.global_variables():
print(var.name)
```
##### (c) Look at the [tensorflow API](https://www.tensorflow.org/api_docs/python/), or open up a python terminal and investigate!
```
help(tf.reduce_mean)
```
##### (d) Tensorflow has some built-in layers to simplify your code.
```
help(tf.contrib.layers.fully_connected)
```
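For example, the three-layer network from the regression example above could be sketched with this helper (using the TF 1.x contrib API; `input_ph` as defined earlier, initializer and scoping details omitted):
```
hidden = tf.contrib.layers.fully_connected(input_ph, 20, activation_fn=tf.nn.relu)
hidden = tf.contrib.layers.fully_connected(hidden, 20, activation_fn=tf.nn.relu)
output_pred = tf.contrib.layers.fully_connected(hidden, 1, activation_fn=None)
```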
##### (e) Use [variable scope](https://www.tensorflow.org/guide/variables#sharing_variables) to keep your variables organized.
```
sess = tf_reset()
# create variables
with tf.variable_scope('layer_0'):
W0 = tf.get_variable(name='W0', shape=[1, 20], initializer=tf.contrib.layers.xavier_initializer())
b0 = tf.get_variable(name='b0', shape=[20], initializer=tf.constant_initializer(0.))
with tf.variable_scope('layer_1'):
W1 = tf.get_variable(name='W1', shape=[20, 20], initializer=tf.contrib.layers.xavier_initializer())
b1 = tf.get_variable(name='b1', shape=[20], initializer=tf.constant_initializer(0.))
with tf.variable_scope('layer_2'):
W2 = tf.get_variable(name='W2', shape=[20, 1], initializer=tf.contrib.layers.xavier_initializer())
b2 = tf.get_variable(name='b2', shape=[1], initializer=tf.constant_initializer(0.))
# print the variables
var_names = sorted([v.name for v in tf.global_variables()])
print('\n'.join(var_names))
```
##### (f) You can specify which GPU you want to use and how much memory you want to use
```
gpu_device = 0
gpu_frac = 0.5
# make only one of the GPUs visible
import os
os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_device)
# only use part of the GPU memory
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=gpu_frac)
config = tf.ConfigProto(gpu_options=gpu_options)
# create the session
tf_sess = tf.Session(graph=tf.Graph(), config=config)
```
##### (g) You can use [tensorboard](https://www.tensorflow.org/guide/summaries_and_tensorboard) to visualize and monitor the training process.
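A minimal sketch of how this could plug into the training loop above (the log directory is a placeholder): create scalar summaries, write them with a `FileWriter`, and then launch `tensorboard --logdir /tmp/tf_logs`.
```
# record the training loss as a scalar summary
mse_summary = tf.summary.scalar('mse', mse)
writer = tf.summary.FileWriter('/tmp/tf_logs', sess.graph)

# inside the training loop:
# _, mse_run, summary_run = sess.run([opt, mse, mse_summary],
#                                    feed_dict={input_ph: input_batch, output_ph: output_batch})
# writer.add_summary(summary_run, training_step)
```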
```
from collections import defaultdict
import csv
import pandas as pd
import numpy as np
map_list = defaultdict(list)
mrr_list = defaultdict(list)
recall_list = defaultdict(list)
ndcg_list = defaultdict(list)
# Read all the files and put the metrics in a defaultdict
with open('D:\\Coursework\\Thesis\\OfflineEval\\Acl\\meanmetrics_acl.tsv') as modsfile:
csv_reader = csv.reader((line for line in modsfile), delimiter='\t', quoting=csv.QUOTE_NONE)
for metricname, metricvalue in csv_reader:
if metricname.startswith('average_precision_hd2v'):
map_list['hd2vINOUT'].append(metricvalue)
if metricname.startswith('average_precision_lda'):
map_list['ldamallet'].append(metricvalue)
if metricname.startswith('average_precision_bm25'):
map_list['bm25'].append(metricvalue)
if metricname.startswith('recall_hd2v'):
recall_list['hd2vINOUT'].append(metricvalue)
if metricname.startswith('recall_lda'):
recall_list['ldamallet'].append(metricvalue)
if metricname.startswith('recall_bm25'):
recall_list['bm25'].append(metricvalue)
if metricname.startswith('reciprocal_rank_hd2v'):
mrr_list['hd2vINOUT'].append(metricvalue)
if metricname.startswith('reciprocal_rank_lda'):
mrr_list['ldamallet'].append(metricvalue)
if metricname.startswith('reciprocal_rank_bm25'):
mrr_list['bm25'].append(metricvalue)
if metricname.startswith('ndcg_hd2v'):
ndcg_list['hd2vINOUT'].append(metricvalue)
if metricname.startswith('ndcg_lda'):
ndcg_list['ldamallet'].append(metricvalue)
if metricname.startswith('ndcg_bm25'):
ndcg_list['bm25'].append(metricvalue)
with open('D:\\Coursework\\Thesis\\OfflineEval\\Acl\\meanmetrics_expanded_may18.tsv') as modsfile:
csv_reader = csv.reader((line for line in modsfile), delimiter='\t', quoting=csv.QUOTE_NONE)
for metricname, metricvalue in csv_reader:
if metricname.startswith('average_precision_hd2v_wv0dv1'):
map_list['hd2vOUT'].append(metricvalue)
if metricname.startswith('recall_hd2v_wv0dv1'):
recall_list['hd2vOUT'].append(metricvalue)
if metricname.startswith('reciprocal_rank_hd2v_wv0dv1'):
mrr_list['hd2vOUT'].append(metricvalue)
if metricname.startswith('ndcg_hd2v_wv0dv1'):
ndcg_list['hd2vOUT'].append(metricvalue)
with open('D:\\Coursework\\Thesis\\OfflineEval\\Acl\\meanmetrics_acl_may26_hybrid5050.tsv') as modsfile:
csv_reader = csv.reader((line for line in modsfile), delimiter='\t', quoting=csv.QUOTE_NONE)
for metricname, metricvalue in csv_reader:
if metricname.startswith('average_precision'):
map_list['hybrid'].append(metricvalue)
if metricname.startswith('recall'):
recall_list['hybrid'].append(metricvalue)
if metricname.startswith('reciprocal_rank'):
mrr_list['hybrid'].append(metricvalue)
if metricname.startswith('ndcg'):
ndcg_list['hybrid'].append(metricvalue)
with open('D:\\Coursework\\Thesis\\OfflineEval\\Acl\\meanmetrics_normallda.tsv') as modsfile:
csv_reader = csv.reader((line for line in modsfile), delimiter='\t', quoting=csv.QUOTE_NONE)
for metricname, metricvalue in csv_reader:
if metricname.startswith('average_precision_lda'):
map_list['lda'].append(metricvalue)
if metricname.startswith('recall_lda'):
recall_list['lda'].append(metricvalue)
if metricname.startswith('reciprocal_rank_lda'):
mrr_list['lda'].append(metricvalue)
if metricname.startswith('ndcg_lda'):
ndcg_list['lda'].append(metricvalue)
with open('D:\\Coursework\\Thesis\\OfflineEval\\Acl\\meanmetrics_p2v_and_d2v_may21.tsv') as modsfile:
csv_reader = csv.reader((line for line in modsfile), delimiter='\t', quoting=csv.QUOTE_NONE)
for metricname, metricvalue in csv_reader:
if metricname.startswith('average_precision_p2v'):
map_list['paper2vec'].append(metricvalue)
if metricname.startswith('average_precision_d2v'):
map_list['doc2vec'].append(metricvalue)
if metricname.startswith('recall_p2v'):
recall_list['paper2vec'].append(metricvalue)
if metricname.startswith('recall_d2v'):
recall_list['doc2vec'].append(metricvalue)
if metricname.startswith('reciprocal_rank_p2v'):
mrr_list['paper2vec'].append(metricvalue)
if metricname.startswith('reciprocal_rank_d2v'):
mrr_list['doc2vec'].append(metricvalue)
if metricname.startswith('ndcg_p2v'):
ndcg_list['paper2vec'].append(metricvalue)
if metricname.startswith('ndcg_d2v'):
ndcg_list['doc2vec'].append(metricvalue)
for key, value in map_list.items():
print(key, len(value))
map_list
# Convert the default dicts into pd Dataframes
def create_df_from_defdict(dictlist_name, allk=True):
df = pd.DataFrame(dictlist_name)
# The columns are all strings, make them floats
df = df[[col for col in df.columns]].astype('float')
if not allk:
df = df.head(10)
df['k'] = np.arange(1, 11)
else:
df['k'] = [1,2,3,4,5,6,7,8,9,10,20,30,40,50,100,200,300,500]
df = df.set_index('k', drop=True)
return df
map_df = create_df_from_defdict(map_list)
recall_df = create_df_from_defdict(recall_list)
mrr_df = create_df_from_defdict(mrr_list)
ndcg_df = create_df_from_defdict(ndcg_list)
with pd.ExcelWriter('D:\\Coursework\\Thesis\\OfflineEval\\Acl\\acl_metrics.xlsx') as writer:
map_df.to_excel(writer, sheet_name='map')
recall_df.to_excel(writer, sheet_name='recall')
mrr_df.to_excel(writer, sheet_name='mrr')
ndcg_df.to_excel(writer, sheet_name='ndcg')
map_list
```
# Apply a trained land classifier model in ArcGIS Pro
This tutorial will assume that you have already provisioned a [Geo AI Data Science Virtual Machine](http://aka.ms/dsvm/GeoAI) and are using this Jupyter notebook while connected via remote desktop on that VM. If not, please see our guide to [provisioning and connecting to a Geo AI DSVM](https://github.com/Azure/pixel_level_land_classification/blob/master/geoaidsvm/setup.md).
By default, this tutorial will make use of a model we have pre-trained for 250 epochs. If you have completed the associated notebook on [training a land classifier from scratch](./02_Train_a_land_classification_model_from_scratch.ipynb), you will have the option of using your own model file.
## Setup instructions
### Log into ArcGIS Pro
[ArcGIS Pro](https://pro.arcgis.com) 2.1.1 is pre-installed on the Geo AI DSVM. If you are running this tutorial on another machine, you may need to perform these additional steps: install ArcGIS Pro, [install CNTK](https://docs.microsoft.com/cognitive-toolkit/setup-windows-python) in the Python environment ArcGIS Pro creates, and ensure that [ArcGIS Pro's Python environment](http://pro.arcgis.com/en/pro-app/arcpy/get-started/installing-python-for-arcgis-pro.htm) is on your system path.
To log into ArcGIS Pro, follow these steps:
1. Search for and launch the ArcGIS Pro program.
1. When prompted, enter your username and password.
- If you don't have an ArcGIS Pro license, see the instructions for getting a trial license in the [intro notebook](./01_Intro_to_pixel-level_land_classification.ipynb).
### Install the supporting files
If you have not already completed the associated notebook on [training a land classifier from scratch](./02_Train_a_land_classification_model_from_scratch.ipynb), execute the following cell to download supporting files to your Geo AI DSVM's D: drive.
```
!AzCopy /Source:https://aiforearthcollateral.blob.core.windows.net/imagesegmentationtutorial /SourceSAS:"?st=2018-01-16T10%3A40%3A00Z&se=2028-01-17T10%3A40%3A00Z&sp=rl&sv=2017-04-17&sr=c&sig=KeEzmTaFvVo2ptu2GZQqv5mJ8saaPpeNRNPoasRS0RE%3D" /Dest:D:\pixellevellandclassification /S
print('Done.')
```
### Install the custom raster function
We will use Python scripts to apply a trained model to aerial imagery in real-time as the user scrolls through a region of interest in ArcGIS Pro. These Python scripts are surfaced in ArcGIS Pro as a [custom raster function](https://github.com/Esri/raster-functions). The three files needed for the raster function (the main Python script, helper functions for e.g. colorizing the model's results, and an XML description file) must be copied into the ArcGIS Pro subdirectory as follows:
1. In Windows Explorer, navigate to `C:\Program Files\ArcGIS\Pro\Resources\Raster\Functions` and create a subdirectory named `Custom`.
1. Copy the `ClassifyCNTK` folder in `D:\pixellevellandclassification\arcgispro` into your new folder named `Custom`.
When this is complete, you should have a folder named `C:\Program Files\ArcGIS\Pro\Resources\Raster\Functions\Custom\ClassifyCNTK` that contains two Python scripts and an XML file.
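If you prefer to script this copy rather than use Windows Explorer, a short Python sketch along these lines should also work (run it from a prompt with administrator rights, since `C:\Program Files` is write-protected; the paths are the ones described above):
```
import shutil
from pathlib import Path
src = Path(r'D:\pixellevellandclassification\arcgispro\ClassifyCNTK')
dst = Path(r'C:\Program Files\ArcGIS\Pro\Resources\Raster\Functions\Custom\ClassifyCNTK')
dst.parent.mkdir(parents=True, exist_ok=True)   # create the Custom folder if it doesn't exist
shutil.copytree(str(src), str(dst))             # copies the two Python scripts and the XML file
```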
## Evaluate the model in real-time using ArcGIS Pro
### Load the sample project in ArcGIS Pro
Begin by loading the sample ArcGIS Pro project we have provided:
1. Search for and launch the ArcGIS Pro program.
- If ArcGIS Pro was open, restart it to ensure that all changes above are reflected when the program loads.
1. On the ArcGIS Pro start screen, click on "Open an Existing Project".
1. Navigate to the folder where you extracted the sample project, and select the `D:\pixellevellandclassification\arcgispro\sampleproject.aprx` file. Click "OK."
Once the project has loaded (allow ~30 seconds), you should see a screen split into three windows. After a moment, NAIP aerial imagery should become visible in the upper window; slightly later, the model's soft and hard predictions for land cover will populate in the lower-left and lower-right windows, respectively.
<img src="https://github.com/Azure/pixel_level_land_classification/raw/master/outputs/arcgispro_three_windows.PNG">
The bottom windows will show the model's best-guess labels (bottom right) and an average of label colors weighted by predicted probability (bottom left, providing an indication of uncertainty). If you wish to use your own trained model, or the bottom windows do not populate with results, you may need to add their layers manually using the following steps:
1. Begin by selecting the "AI Mixed Probabilities" window at bottom-left.
1. Add and modify an aerial imagery layer:
1. In the Catalog Pane (accessible from the View menu), click on Portal, then the cloud icon (labeled "All Portal" on hover).
1. In the search field, type NAIP.
1. Drag and drop the "USA NAIP Imagery: Natural Color" option into the window at bottom-left. You should see a new layer with this name appear in the Contents Pane at left.
1. Right-click on "USA NAIP Imagery: Natural Color" in the Contents Pane and select "Properties".
1. In the "Processing Templates" tab of the layer properties, change the Processing Template from "Natural Color" to "None," then click OK.
1. Add a model predictions layer:
1. In the Raster Functions Pane (accessible from the Analysis menu), click on the "Custom" option along the top.
1. You should see a "[ClassifyCNTK]" heading in the Custom section. Collapse and re-expand it to reveal an option named "Classify". Click this button to bring up the raster function's options.
1. Set the input raster to "USA NAIP Imagery: Natural Color".
1. Set the trained model location to `D:\pixellevellandclassification\models\250epochs.model`.
- Note: if you trained your own model using our companion notebook, you can use it instead by choosing `D:\pixellevellandclassification\models\trained.model` as the location.
1. Set the output type to "Softmax", indicating that each pixel's color will be an average of the class label colors, weighted by their relative probabilities.
- Note: selecting "Hardmax" will assign each pixel its most likely label's color instead.
1. Click "Create new layer". After a few seconds, the model's predictions should appear in the bottom-left quadrant.
1. Repeat these steps with the bottom-right window, selecting "Hardmax" as the output type.
Now that your project is complete, you can navigate and zoom in any window to compare imagery and predicted labels throughout the U.S.
## Next steps
In this notebook series, we trained and deployed a model on a Geo AI Data Science VM. To improve model accuracy, we recommend training for more epochs on a larger dataset. Please see [our GitHub repository](https://github.com/Azure/pixel_level_land_classification) for more details on scaling up training using Batch AI.
When you are done using your Geo AI Data Science VM, we recommend that you stop or delete it to prevent further charges.
For comments and suggestions regarding this notebook, please post a [Git issue](https://github.com/Azure/pixel_level_land_classification/issues/new) or submit a pull request in the [pixel-level land classification repository](https://github.com/Azure/pixel_level_land_classification).
# Training a Neural Network
As we've seen, the network we built in the previous notebook wasn't very smart; it was quite naive. To fix this, we feed real data into the network and adjust its parameters so that, given another input, it approximately predicts the right answer.
To do this we first need to measure how badly the current network predicts the answers. For this we calculate something called the **Loss Function**, which measures the prediction error of our network. For example, the mean squared loss is commonly used in regression and binary classification problems.
$$
\large \ell = \frac{1}{2n}\sum_i^n{\left(y_i - \hat{y}_i\right)^2}
$$
where $n$ is the number of training examples, $y_i$ are the true labels, and $\hat{y}_i$ are the predicted labels.
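As a quick sketch of this formula (the labels and predictions below are made up), the same quantity can be computed directly in PyTorch:
```
import torch
y_true = torch.tensor([1.0, 0.0, 1.0, 1.0])   # made-up labels
y_pred = torch.tensor([0.9, 0.2, 0.8, 0.4])   # made-up predictions
# 1/(2n) * sum((y - y_hat)^2), matching the formula above
loss = 0.5 * torch.mean((y_true - y_pred) ** 2)
print(loss)
```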
By minimizing this loss, we find the parameters for which the network makes the fewest errors, i.e. predicts the correct label for a given input with the highest accuracy. We reach this minimum through a process called ***Gradient Descent***. The gradient is the slope of the loss function and points in the direction of fastest change. Think of gradient descent as someone trying to climb down a mountain in the least amount of time by following the steepest slope to the base.
<img src="assets/gradient_descent.png" width=400px>
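Concretely, each gradient descent step nudges a parameter a small amount against the gradient. Here is a toy sketch that minimizes a simple quadratic (the starting point and learning rate are arbitrary):
```
# a toy example: minimize (w - 2)^2 starting from w = 4
w = 4.0
learning_rate = 0.1
for step in range(50):
    gradient = 2 * (w - 2.0)          # derivative of (w - 2)^2 with respect to w
    w = w - learning_rate * gradient  # step downhill
print(w)                              # ends up very close to 2.0, the minimum
```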
## Backpropagation
Implementing gradient descent on single layer neural networks is easy, but it gets complicated on multilayer neural networks, like the one we built.
Backpropagation is essentially just an application of the chain rule from calculus.
1. We perform a forward pass and calculate the loss (also called the cost)
<img src="assets/forwardpass.png" width=600px>
2. Once the loss is calculated we use the chain rule to find the gradient.
$$
\large \frac{\partial \ell}{\partial W_1} = \frac{\partial L_1}{\partial W_1} \frac{\partial S}{\partial L_1} \frac{\partial L_2}{\partial S} \frac{\partial \ell}{\partial L_2}
$$
3. Now we backpropagate, and update the weights.
<img src="assets/backpropagation1.png" width=600px>
What about layers further back? To calculate the gradient for $W_1$ we use the same method as before, walking backwards through the graph, so the derivative calculations look like this:
<img src="assets/backpropagation2.png" width=600px>
## Losses in PyTorch
PyTorch offers us the `nn` module to calculate the loss. It contains losses such as the cross-entropy loss (`nn.CrossEntropyLoss`) which we assign to `criterion`.
Something really important to note here. Looking at [the documentation for `nn.CrossEntropyLoss`](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss),
> This criterion combines `nn.LogSoftmax()` and `nn.NLLLoss()` in one single class.
>
> The input is expected to contain scores for each class.
This means we need to pass in the raw output of our network into the loss, not the output of the softmax function. This raw output is usually called the *logits* or *scores*. We use the logits because softmax gives you probabilities which will often be very close to zero or one but floating-point numbers can't accurately represent values near zero or one ([read more here](https://docs.python.org/3/tutorial/floatingpoint.html)). It's usually best to avoid doing calculations with probabilities, typically we use log-probabilities.
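As a small illustration (the scores below are made up), softmax probabilities saturate very close to 0 and 1, while log-probabilities stay in a comfortable numeric range:
```
import torch
import torch.nn.functional as F
scores = torch.tensor([[15.0, -8.0, 2.0]])    # made-up logits
print(F.softmax(scores, dim=1))               # probabilities squashed very close to 0 and 1
print(F.log_softmax(scores, dim=1))           # the same information as well-scaled log-probabilities
```
The cell below builds a small network and computes the cross-entropy loss directly from the raw logits.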
```
#Import Modules and the Data
import torch
from torch import nn
import torch.nn.functional as F
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10))
# Define the loss
criterion = nn.CrossEntropyLoss()
# Get our data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)
# Forward pass, get our logits
logits = model(images)
# Calculate the loss with the logits and the labels
loss = criterion(logits, labels)
print(loss)
```
In most cases it is more convenient to build a network with a log-softmax output using `nn.LogSoftmax` or `F.log_softmax` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LogSoftmax)). Then you can get the actual probabilities by taking the exponential `torch.exp(output)`. With a log-softmax output, you want to use the negative log likelihood loss, `nn.NLLLoss` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.NLLLoss)).
We will build this network shortly.
## How do we use this knowledge to perform backpropagation?
Torch provides a module, `autograd`, for automatically calculating the gradients of tensors. We can use it to calculate the gradients of all our parameters with respect to the loss. Autograd works by keeping track of operations performed on tensors, then going backwards through those operations, calculating gradients along the way. To make sure PyTorch keeps track of operations on a tensor and calculates the gradients, you need to set `requires_grad = True` on a tensor. You can do this at creation with the `requires_grad` keyword, or at any time with `x.requires_grad_(True)`.
The gradients are computed with respect to some variable `z` with `z.backward()`. This does a backward pass through the operations that created `z`.
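Here is a minimal sketch of that workflow (the input tensor is arbitrary):
```
import torch
x = torch.randn(2, 2, requires_grad=True)   # track operations on x
y = x ** 2
z = y.mean()
z.backward()        # backward pass through the operations that created z
print(x.grad)       # dz/dx, here equal to 2*x / 4
```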
## Updating the weights
Well we now know how to find the loss and to calculate the gradients, but how do we update the weights?
For this we require an optimizer that we'll use to update the weights with the gradients. We get these from PyTorch's [`optim` package](https://pytorch.org/docs/stable/optim.html). For example we can use stochastic gradient descent with `optim.SGD`. You can see how to define an optimizer below.
```
from torch import optim
# Optimizers require the parameters to optimize and a learning rate
optimizer = optim.SGD(model.parameters(), lr=0.01)
```
# Let's Train our Network !!!
Now that we've learnt how to train a network, let's put it to use.
```
# Write a model and train it
model = nn.Sequential(nn.Linear(784,128),
nn.ReLU(),
nn.Linear(128,64),
nn.ReLU(),
nn.Linear(64,10),
                      nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)
epochs = 5
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
images = images.view(images.shape[0],-1)
#training pass
optimizer.zero_grad()
output=model(images)
loss=criterion(output,labels)
loss.backward()
running_loss+=loss.item()
        optimizer.step()
    print(f"Training loss: {running_loss/len(trainloader)}")
#Viewing results
%matplotlib inline
import helper
images, labels=next(iter(trainloader))
img= images[0].view(1,784)
with torch.no_grad():
logps = model(img)
ps= torch.exp(logps)
helper.view_classify(img.view(1,28,28),ps)
```
```
# direct to proper path
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
import seaborn as sns
from scipy.stats import norm
from scipy.special import erf
import numpy as np
import matplotlib.pyplot as plt
from codes.Environment import Mixture_AbsGau, setup_env, Exp, generate_samples
import matplotlib as mpl
mpl.rcParams['axes.spines.left'] = False
mpl.rcParams['axes.spines.right'] = False
mpl.rcParams['axes.spines.top'] = False
mpl.rcParams['axes.spines.top'] = False
# mpl.rcParams['axes.spines.bottom'] = False
def Abs_Gau_pdf(x, mu, sigma):
return 1.0/np.sqrt(2 * np.pi * sigma ** 2) * (np.exp(- 1.0/(2 * sigma**2) * (x - mu)** 2) + np.exp(- 1.0/(2 * sigma**2) * (x + mu)** 2 ))
def Abs_Gau_cdf(x, mu, sigma):
return 1.0/2 * (erf((x-mu)/ np.sqrt(2 * sigma ** 2)) + erf((x+ mu)/ np.sqrt(2 * sigma ** 2)))
def Phi(x):
return 1.0/2 * (1+ erf(x/np.sqrt(2)))
def Abs_Gau_mean(mu, sigma):
return sigma * np.sqrt(2.0/np.pi) * np.exp(- mu ** 2 / (2 * sigma ** 2)) +\
+ mu * (1 - 2 * Phi(- mu/sigma))
def Abs_Gau_quant_est(p, mu, sigma, size = 10000):
samples = np.abs(np.random.normal(mu, sigma, size))
return np.sort(samples)[int(p * size)], samples
def Exp_pdf(x, para):
return para * np.exp(- para * x)
def Exp_mean(para):
return 1.0/para
def Exp_quant(p, para):
return - np.log(1- p)/para
plot_mean_flag = False
plot_quant_flag = True
plot_quant_est_flag = True
p = 0.2
np.random.seed(24)
save_path = "../plots/slide_plots/"
mu = 1.0
sigma = 1.0
x = np.linspace(0, 5, 100)
plt.fill_between(x, Abs_Gau_pdf(x, mu, sigma), color = 'grey', alpha = 0.3)
plt.ylim(0, 0.6)
plt.xlabel('Reward')
plt.yticks([])
if plot_mean_flag:
mean = round(Abs_Gau_mean(mu, sigma),2)
plt.vlines(mean, 0, 0.5, linestyles = 'dashed')
plt.annotate(str(mean), (mean, 0.52))
plt.savefig(save_path + 'arm1_mean.pdf', bbox_inches='tight', transparent=True)
if plot_quant_flag:
quant,_ = Abs_Gau_quant_est(p, mu, sigma)
quant = round(quant,2)
plt.vlines(quant, 0, 0.5, linestyles = 'dashed')
plt.annotate(str(quant), (quant, 0.52))
plt.savefig(save_path + 'arm1_quant.pdf', bbox_inches='tight', transparent=True)
plt.savefig(save_path + 'arm1.pdf', bbox_inches='tight', transparent=True)
mu = 1.5
sigma = 1.0
x = np.linspace(0, 5, 100)
plt.fill_between(x, Abs_Gau_pdf(x, mu, sigma), color = 'grey', alpha = 0.3)
plt.ylim(0, 0.6)
plt.xlabel('Reward')
plt.yticks([])
if plot_mean_flag:
mean = round(Abs_Gau_mean(mu, sigma),2)
plt.vlines(mean, 0, 0.5, linestyles = 'dashed')
plt.annotate(str(mean), (mean, 0.52))
plt.savefig(save_path + 'arm2_mean.pdf', bbox_inches='tight', transparent=True)
if plot_quant_flag:
quant, samples = Abs_Gau_quant_est(p, mu, sigma)
quant = round(quant,2)
if plot_quant_est_flag:
# plt.xlim(0,3)
for num_sample in [10]: # 5, 10, 100
plt.figure()
x = np.linspace(0, 5, 100)
plt.fill_between(x, Abs_Gau_pdf(x, mu, sigma), color = 'grey', alpha = 0.3)
plt.ylim(0, 0.6)
plt.xlabel('Reward')
plt.yticks([])
plt.vlines(quant, 0, 0.5, linestyles = 'dashed')
plt.scatter(samples[:num_sample], np.ones(num_sample) * 0.01, alpha = 0.5)
quant_est = np.sort(samples[:num_sample])[int(p * num_sample)]
quant_est = round(quant_est,2)
plt.vlines(quant_est, 0, 0.5, linestyles = 'dashed', color = 'orange', alpha = 0.5)
# plt.annotate(str(quant_est), (quant_est, 0.52))
plt.savefig(save_path + 'arm2_quant_' + str(num_sample) + '.pdf', bbox_inches='tight', transparent=True)
else:
plt.vlines(quant, 0, 0.5, linestyles = 'dashed')
plt.annotate(str(quant), (quant, 0.52))
plt.savefig(save_path + 'arm2_quant.pdf', bbox_inches='tight', transparent=True)
plt.savefig(save_path + 'arm2.pdf', bbox_inches='tight', transparent=True)
para = 0.5
x = np.linspace(0, 5, 100)
plt.fill_between(x, Exp_pdf(x, para), color = 'grey', alpha = 0.3)
plt.ylim(0, 0.6)
plt.xlabel('Reward')
plt.yticks([])
if plot_mean_flag:
mean = round(Exp_mean(para),2)
plt.vlines(mean, 0, 0.5, linestyles = 'dashed')
plt.annotate(str(mean), (mean, 0.52))
plt.savefig(save_path + 'arm3_mean.pdf', bbox_inches='tight', transparent=True)
if plot_quant_flag:
    quant = round(Exp_quant(p, para), 2)  # p-quantile of the exponential arm
plt.vlines(quant, 0, 0.5, linestyles = 'dashed')
plt.annotate(str(quant), (quant, 0.52))
plt.savefig(save_path + 'arm3_quant.pdf', bbox_inches='tight', transparent=True)
plt.savefig(save_path + 'arm3.pdf', bbox_inches='tight', transparent=True)
```
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from tqdm import tnrange, tqdm_notebook
import gc
import operator
sns.set_context('talk')
pd.set_option('display.max_columns', 500)
import warnings
warnings.filterwarnings('ignore', message='Changing the shape of non-C contiguous array')
```
# Read the data
```
dfXtrain = pd.read_csv('preprocessed_csv/train_4.csv', index_col='id', sep=';')
dfXtest = pd.read_csv('preprocessed_csv/test_4.csv', index_col='id', sep=';')
dfYtrain = pd.read_csv('preprocessed_csv/y_train_4.csv', header=None, names=['ID', 'COTIS'], sep=';')
dfYtrain = dfYtrain.set_index('ID')
```
# Preprocessing
Set aside var14, department, and subreg.
```
dropped_col_names = ['department', 'subreg', 'ext_dep']
def drop_cols(df):
return df.drop(dropped_col_names, axis=1), df[dropped_col_names]
train, dropped_train = drop_cols(dfXtrain)
test, dropped_test = drop_cols(dfXtest)
```
Add info about the city size from the subreg.
```
def add_big_city_cols(df, dropped_df):
df['big'] = np.where(dropped_df['subreg'] % 100 == 0, 1, 0)
df['average'] = np.where(dropped_df['subreg'] % 10 == 0, 1, 0)
df['average'] = df['average'] - df['big']
df['small'] = 1 - df['big'] - df['average']
return df
train = add_big_city_cols(train, dropped_train)
test = add_big_city_cols(test, dropped_test)
```
Encode the remaining categorical features.
```
numerical = list(train.select_dtypes(include=[np.number]).columns)
numerical
categorical = list(train.select_dtypes(exclude=[np.number]).columns)
categorical
list(test.select_dtypes(exclude=[np.number]).columns)
for col in categorical:
print(col, train[col].nunique(), test[col].nunique())
```
energie_veh and var6 via get_dummies
```
train.energie_veh.unique()
test.energie_veh.unique()
small_cat = ['energie_veh', 'var6']
train = pd.get_dummies(train, columns=small_cat)
test = pd.get_dummies(test, columns=small_cat)
```
Now let's look at the rest.
```
len(set(train.profession.values) - set(test.profession.values))
len(set(train.var8.values) - set(test.var8.values))
len(set(test.var8.values) - set(train.var8.values))
len(set(train.marque.values) - set(test.marque.values))
len(set(test.marque.values) - set(train.marque.values))
set(test.marque.values) - set(train.marque.values)
test[test.marque == 'GEELY']
test[test.marque == 'SOVAM']
```
profession and var8 also become dummies
```
middle_cat = ['profession', 'var8', 'marque', 'var14']
bigX = pd.concat([train, test])
bigX.shape
bigX = pd.get_dummies(bigX, columns=middle_cat)
bigX.shape
```
Arrange the columns in the required order and add a constant column.
```
bigX.crm /= 100
first_col_list = ['crm', 'puis_fiscale']
col_list = first_col_list + sorted(list(set(bigX.columns) - set(first_col_list)))
bigX = bigX[col_list]
```
Now handle the numeric features.
```
numerical = set(numerical)
numerical -= set(['big', 'average', 'small'])
for col in numerical:
treshold = 10
if bigX[col].nunique() <= treshold:
print(col, bigX[col].nunique())
```
These (above) can be one-hot encoded.
```
for col in numerical:
treshold = 10
if bigX[col].nunique() > treshold:
print(col, bigX[col].nunique())
```
* crm is dropped
* var1: threshold 3
* age: threshold 22
```
intercept = 50
base = 400
target = (dfYtrain.COTIS - intercept)/ train.crm * 100 / base
target.describe()
```
The values 50 and 400 fit well.
```
bigX.fillna(-9999, inplace=True)
y_train = np.array(dfYtrain)
train = bigX.loc[train.index]
x_train = np.array(train)
test = bigX.loc[test.index]
x_test = np.array(test)
x_train.shape
x_test.shape
```
# Save routines
```
dfYtest = pd.DataFrame({'ID': dfXtest.index, 'COTIS': np.zeros(test.shape[0])})
dfYtest = dfYtest[['ID', 'COTIS']]
dfYtest.head()
def save_to_file(y, file_name):
dfYtest['COTIS'] = y
dfYtest.to_csv('results/{}'.format(file_name), index=False, sep=';')
model_name = 'divided'
dfYtest_stacking = pd.DataFrame({'ID': dfXtrain.index, model_name: np.zeros(train.shape[0])})
dfYtest_stacking = dfYtest_stacking[['ID', model_name]]
dfYtest_stacking.head()
def save_to_file_stacking(y, file_name):
dfYtest_stacking[model_name] = y
dfYtest_stacking.to_csv('stacking/{}'.format(file_name), index=False, sep=';')
```
# Train model
```
def plot_quality(grid_searcher, param_name):
means = []
stds = []
for elem in grid_searcher.grid_scores_:
means.append(np.mean(elem.cv_validation_scores))
stds.append(np.sqrt(np.var(elem.cv_validation_scores)))
means = np.array(means)
stds = np.array(stds)
params = grid_searcher.param_grid
plt.figure(figsize=(10, 6))
plt.plot(params[param_name], means)
plt.fill_between(params[param_name], \
means + stds, means - stds, alpha = 0.3, facecolor='blue')
plt.xlabel(param_name)
plt.ylabel('MAPE')
def mape(y_true, y_pred):
return -np.mean(np.abs((y_true - y_pred) / y_true)) * 100
def mape_scorer(est, X, y):
gc.collect()
return mape(y, est.predict(X))
class MyGS():
class Element():
def __init__(self):
self.cv_validation_scores = []
def add(self, score):
self.cv_validation_scores.append(score)
def __init__(self, param_grid, name, n_folds):
self.param_grid = {name: param_grid}
self.grid_scores_ = [MyGS.Element() for item in param_grid]
self.est = None
def add(self, score, param_num):
self.grid_scores_[param_num].add(score)
intercept = 50
base = 400
def scorer(y_true, y_pred, crm):
y_true = inv_func(y_true, crm)
y_pred = inv_func(y_pred, crm)
return mape(y_true, y_pred)
def func(y, crm):
return (y - intercept) / crm / base
def inv_func(y, crm):
return y * crm * base + intercept
validation_index = (dropped_train.ext_dep == 10) | (dropped_train.ext_dep > 900)
train_index = ~validation_index
subtrain, validation = train[train_index], train[validation_index]
x_subtrain = np.array(subtrain)
x_validation = np.array(validation)
ysubtrain, yvalidation = dfYtrain[train_index], dfYtrain[validation_index]
y_subtrain = np.array(ysubtrain).flatten()
y_validation = np.array(yvalidation).flatten()
validation.shape
from sklearn.tree import LinearDecisionTreeRegressor as LDTR
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import ExtraTreesRegressor
%%time
est = ExtraTreesRegressor(n_estimators=10, max_features=None,
max_depth=None, n_jobs=-1, random_state=42).fit(
X=x_subtrain, y=func(y_subtrain, x_subtrain[:, 0]), sample_weight=None)
y_pred = inv_func(est.predict(x_validation), x_validation[:, 0])
mape(y_validation, y_pred)
gc.collect()
sample_weight_subtrain = np.power(y_subtrain, -1)
%%time
est = DecisionTreeRegressor(max_features=None,
max_depth=None, random_state=42).fit(
X=x_subtrain, y=func(y_subtrain, x_subtrain[:, 0]), sample_weight=sample_weight_subtrain)
y_pred = inv_func(est.predict(x_validation), x_validation[:, 0])
mape(y_validation, y_pred)
gc.collect()
import xgboost as xgb
def grid_search(x_train, y_train, x_validation, y_validation, scorer, weights=None):
param = {'base_score':0.5, 'colsample_bylevel':1, 'colsample_bytree':1, 'gamma':0,
'eta':0.15, 'max_delta_step':0, 'max_depth':15,
'min_child_weight':20, 'nthread':-1,
'objective':'reg:linear', 'alpha':0, 'lambda':1,
'scale_pos_weight':1, 'seed':56, 'silent':True, 'subsample':1}
diff_num_round_list = [4 for i in range(5)]
diff_num_round_list[0] = 60
num_round_list = np.cumsum(diff_num_round_list)
n_folds = 1
mygs = MyGS(num_round_list, 'num_round', n_folds=n_folds)
#label_kfold = LabelKFold(np.array(dropped_train['department']), n_folds=n_folds)
dtrain = xgb.DMatrix(x_train,
label=y_train,
missing=-9999,
weight=weights)
dvalidation = xgb.DMatrix(x_validation, missing=-9999)
param['base_score'] = np.mean(y_train)
bst = None
for index, diff_num_round in enumerate(diff_num_round_list):
bst = xgb.train(param, dtrain, diff_num_round, xgb_model=bst)
y_pred = bst.predict(dvalidation)
score = scorer(y_validation, y_pred, x_validation[:, 0])
mygs.add(score, index)
mygs.est = bst
gc.collect()
return mygs
%%time
mygs = grid_search(x_subtrain, func(y_subtrain, x_subtrain[:, 0]),
x_validation, func(y_validation, x_validation[:, 0]),
scorer, None)
plot_quality(mygs, 'num_round')
```
min_child_weight = 5
```
plot_quality(mygs, 'num_round')
plot_quality(mygs, 'num_round')
gc.collect()
dvalidation = xgb.DMatrix(x_validation, missing=-9999)
y_pred = inv_func(mygs.est.predict(dvalidation), x_validation[:, 0])
plt.scatter(x_validation[:, 0], y_validation)
plt.scatter(x_validation[:, 0], y_pred, color='g')
plt.show()
```
# Save
```
%%time
est = LDTR(max_features=None, max_depth=15, random_state=42,
n_coefficients=2, n_first_dropped=2, const_term=True, min_samples_leaf=40).fit(
X=x_train, y=y_train, sample_weight=np.power(y_train.flatten(), -1))
y_pred = est.predict(x_test)
save_to_file(y_pred, 'ldtr.csv')
```
|
github_jupyter
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from tqdm import tnrange, tqdm_notebook
import gc
import operator
sns.set_context('talk')
pd.set_option('display.max_columns', 500)
import warnings
warnings.filterwarnings('ignore', message='Changing the shape of non-C contiguous array')
dfXtrain = pd.read_csv('preprocessed_csv/train_4.csv', index_col='id', sep=';')
dfXtest = pd.read_csv('preprocessed_csv/test_4.csv', index_col='id', sep=';')
dfYtrain = pd.read_csv('preprocessed_csv/y_train_4.csv', header=None, names=['ID', 'COTIS'], sep=';')
dfYtrain = dfYtrain.set_index('ID')
dropped_col_names = ['department', 'subreg', 'ext_dep']
def drop_cols(df):
return df.drop(dropped_col_names, axis=1), df[dropped_col_names]
train, dropped_train = drop_cols(dfXtrain)
test, dropped_test = drop_cols(dfXtest)
def add_big_city_cols(df, dropped_df):
df['big'] = np.where(dropped_df['subreg'] % 100 == 0, 1, 0)
df['average'] = np.where(dropped_df['subreg'] % 10 == 0, 1, 0)
df['average'] = df['average'] - df['big']
df['small'] = 1 - df['big'] - df['average']
return df
train = add_big_city_cols(train, dropped_train)
test = add_big_city_cols(test, dropped_test)
numerical = list(train.select_dtypes(include=[np.number]).columns)
numerical
categorical = list(train.select_dtypes(exclude=[np.number]).columns)
categorical
list(test.select_dtypes(exclude=[np.number]).columns)
for col in categorical:
print(col, train[col].nunique(), test[col].nunique())
train.energie_veh.unique()
test.energie_veh.unique()
small_cat = ['energie_veh', 'var6']
train = pd.get_dummies(train, columns=small_cat)
test = pd.get_dummies(test, columns=small_cat)
len(set(train.profession.values) - set(test.profession.values))
len(set(train.var8.values) - set(test.var8.values))
len(set(test.var8.values) - set(train.var8.values))
len(set(train.marque.values) - set(test.marque.values))
len(set(test.marque.values) - set(train.marque.values))
set(test.marque.values) - set(train.marque.values)
test[test.marque == 'GEELY']
test[test.marque == 'SOVAM']
middle_cat = ['profession', 'var8', 'marque', 'var14']
bigX = pd.concat([train, test])
bigX.shape
bigX = pd.get_dummies(bigX, columns=middle_cat)
bigX.shape
bigX.crm /= 100
first_col_list = ['crm', 'puis_fiscale']
col_list = first_col_list + sorted(list(set(bigX.columns) - set(first_col_list)))
bigX = bigX[col_list]
numerical = set(numerical)
numerical -= set(['big', 'average', 'small'])
for col in numerical:
treshold = 10
if bigX[col].nunique() <= treshold:
print(col, bigX[col].nunique())
for col in numerical:
treshold = 10
if bigX[col].nunique() > treshold:
print(col, bigX[col].nunique())
intercept = 50
base = 400
target = (dfYtrain.COTIS - intercept)/ train.crm * 100 / base
target.describe()
bigX.fillna(-9999, inplace=True)
y_train = np.array(dfYtrain)
train = bigX.loc[train.index]
x_train = np.array(train)
test = bigX.loc[test.index]
x_test = np.array(test)
x_train.shape
x_test.shape
dfYtest = pd.DataFrame({'ID': dfXtest.index, 'COTIS': np.zeros(test.shape[0])})
dfYtest = dfYtest[['ID', 'COTIS']]
dfYtest.head()
def save_to_file(y, file_name):
dfYtest['COTIS'] = y
dfYtest.to_csv('results/{}'.format(file_name), index=False, sep=';')
model_name = 'divided'
dfYtest_stacking = pd.DataFrame({'ID': dfXtrain.index, model_name: np.zeros(train.shape[0])})
dfYtest_stacking = dfYtest_stacking[['ID', model_name]]
dfYtest_stacking.head()
def save_to_file_stacking(y, file_name):
dfYtest_stacking[model_name] = y
dfYtest_stacking.to_csv('stacking/{}'.format(file_name), index=False, sep=';')
def plot_quality(grid_searcher, param_name):
means = []
stds = []
for elem in grid_searcher.grid_scores_:
means.append(np.mean(elem.cv_validation_scores))
stds.append(np.sqrt(np.var(elem.cv_validation_scores)))
means = np.array(means)
stds = np.array(stds)
params = grid_searcher.param_grid
plt.figure(figsize=(10, 6))
plt.plot(params[param_name], means)
plt.fill_between(params[param_name], \
means + stds, means - stds, alpha = 0.3, facecolor='blue')
plt.xlabel(param_name)
plt.ylabel('MAPE')
def mape(y_true, y_pred):
return -np.mean(np.abs((y_true - y_pred) / y_true)) * 100
def mape_scorer(est, X, y):
gc.collect()
return mape(y, est.predict(X))
class MyGS():
class Element():
def __init__(self):
self.cv_validation_scores = []
def add(self, score):
self.cv_validation_scores.append(score)
def __init__(self, param_grid, name, n_folds):
self.param_grid = {name: param_grid}
self.grid_scores_ = [MyGS.Element() for item in param_grid]
self.est = None
def add(self, score, param_num):
self.grid_scores_[param_num].add(score)
intercept = 50
base = 400
def scorer(y_true, y_pred, crm):
y_true = inv_func(y_true, crm)
y_pred = inv_func(y_pred, crm)
return mape(y_true, y_pred)
def func(y, crm):
return (y - intercept) / crm / base
def inv_func(y, crm):
return y * crm * base + intercept
validation_index = (dropped_train.ext_dep == 10) | (dropped_train.ext_dep > 900)
train_index = ~validation_index
subtrain, validation = train[train_index], train[validation_index]
x_subtrain = np.array(subtrain)
x_validation = np.array(validation)
ysubtrain, yvalidation = dfYtrain[train_index], dfYtrain[validation_index]
y_subtrain = np.array(ysubtrain).flatten()
y_validation = np.array(yvalidation).flatten()
validation.shape
from sklearn.tree import LinearDecisionTreeRegressor as LDTR  # not part of stock scikit-learn; assumed to come from a custom fork
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import ExtraTreesRegressor
%%time
est = ExtraTreesRegressor(n_estimators=10, max_features=None,
max_depth=None, n_jobs=-1, random_state=42).fit(
X=x_subtrain, y=func(y_subtrain, x_subtrain[:, 0]), sample_weight=None)
y_pred = inv_func(est.predict(x_validation), x_validation[:, 0])
mape(y_validation, y_pred)
gc.collect()
sample_weight_subtrain = np.power(y_subtrain, -1)
%%time
est = DecisionTreeRegressor(max_features=None,
max_depth=None, random_state=42).fit(
X=x_subtrain, y=func(y_subtrain, x_subtrain[:, 0]), sample_weight=sample_weight_subtrain)
y_pred = inv_func(est.predict(x_validation), x_validation[:, 0])
mape(y_validation, y_pred)
gc.collect()
import xgboost as xgb
def grid_search(x_train, y_train, x_validation, y_validation, scorer, weights=None):
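    # Grows a single xgboost booster incrementally (xgb_model=bst) and records the
    # validation score at each cumulative num_round checkpoint in a MyGS object.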
param = {'base_score':0.5, 'colsample_bylevel':1, 'colsample_bytree':1, 'gamma':0,
'eta':0.15, 'max_delta_step':0, 'max_depth':15,
'min_child_weight':20, 'nthread':-1,
'objective':'reg:linear', 'alpha':0, 'lambda':1,
'scale_pos_weight':1, 'seed':56, 'silent':True, 'subsample':1}
diff_num_round_list = [4 for i in range(5)]
diff_num_round_list[0] = 60
num_round_list = np.cumsum(diff_num_round_list)
n_folds = 1
mygs = MyGS(num_round_list, 'num_round', n_folds=n_folds)
#label_kfold = LabelKFold(np.array(dropped_train['department']), n_folds=n_folds)
dtrain = xgb.DMatrix(x_train,
label=y_train,
missing=-9999,
weight=weights)
dvalidation = xgb.DMatrix(x_validation, missing=-9999)
param['base_score'] = np.mean(y_train)
bst = None
for index, diff_num_round in enumerate(diff_num_round_list):
bst = xgb.train(param, dtrain, diff_num_round, xgb_model=bst)
y_pred = bst.predict(dvalidation)
score = scorer(y_validation, y_pred, x_validation[:, 0])
mygs.add(score, index)
mygs.est = bst
gc.collect()
return mygs
%%time
mygs = grid_search(x_subtrain, func(y_subtrain, x_subtrain[:, 0]),
x_validation, func(y_validation, x_validation[:, 0]),
scorer, None)
plot_quality(mygs, 'num_round')
plot_quality(mygs, 'num_round')
plot_quality(mygs, 'num_round')
gc.collect()
dvalidation = xgb.DMatrix(x_validation, missing=-9999)
y_pred = inv_func(mygs.est.predict(dvalidation), x_validation[:, 0])
plt.scatter(x_validation[:, 0], y_validation)
plt.scatter(x_validation[:, 0], y_pred, color='g')
plt.show()
%%time
est = LDTR(max_features=None, max_depth=15, random_state=42,
n_coefficients=2, n_first_dropped=2, const_term=True, min_samples_leaf=40).fit(
X=x_train, y=y_train, sample_weight=np.power(y_train.flatten(), -1))
y_pred = est.predict(x_test)
save_to_file(y_pred, 'ldtr.csv')
| 0.338077 | 0.750667 |
# Benchmarking gRPC vs REST API
```
import sys
sys.path.append('../')
import warnings
warnings.filterwarnings('ignore')
import tensorflow as tf
import tensorflow.keras as keras
import requests
import numpy as np
import json
from tqdm import tqdm
from time import time
from apis import *
```
#### Data functions
```
def get_fashion_mnist():
_, (test_images, test_labels) = keras.datasets.fashion_mnist.load_data()
# reshape data
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1)
# scale the values to 0.0 to 1.0
test_images = test_images / 255.0
return test_images, test_labels
def get_mnist():
_, (test_images, test_labels) = keras.datasets.mnist.load_data()
# reshape data
test_images = test_images.reshape(test_images.shape[0], -1)
# scale the values to 0.0 to 1.0
test_images = test_images / 255.0
return test_images, test_labels
```
#### gRPC and HTTP requests functions
```
def time_for_grpc_requests(proto_request_list, server='0.0.0.0:8500'):
prediction_service = PredictionService(server)
proto_response_list = []
st = time()
for req in tqdm(proto_request_list):
response = PredictResponse().copy(prediction_service.predict(req._protobuf, 5))
proto_response_list.append(response)
et = time()
return et-st, proto_response_list
def time_for_http_requests(json_request_list, server_url='http://localhost:8501/v1/models/fashion_model:predict'):
json_response_list = []
headers = {"content-type": "application/json"}
st = time()
for req in tqdm(json_request_list):
json_response = requests.post(server_url, data=req, headers=headers)
json_response.raise_for_status()
json_response_list.append(json_response)
et = time()
return et-st, json_response_list
def create_request(dataset, test_images, batch_size):
proto_request_list = []
json_request_list = []
if dataset == 'fashion_mnist':
proto_pred_request = PredictRequest(model_spec=ModelSpec(name='fashion_model', version=1, signature_name='serving_default'))
json_pred_request = {"signature_name": "serving_default"}
elif dataset == 'mnist':
proto_pred_request = PredictRequest(model_spec=ModelSpec(name='mnist', version=1, signature_name='predict_images'))
json_pred_request = {"signature_name": "predict_images"}
for i in range(0, len(test_images), batch_size):
# protobuf message
if dataset == 'fashion_mnist':
proto_pred_request.inputs = {'input_image': {'values' : test_images[i:i+batch_size,:].astype(np.float32)}}
elif dataset == 'mnist':
proto_pred_request.inputs = {'images' : {'values' : test_images[i:i+batch_size,:].astype(np.float32)}}
proto_request_list.append(proto_pred_request)
# json message
json_pred_request.update({'instances' : test_images[i:i+batch_size,:].tolist()})
json_request_list.append(json.dumps(json_pred_request))
return proto_request_list, json_request_list
def latency_profile_http_vs_grpc(dataset, batch_size, num_samples=10000):
if dataset == 'fashion_mnist':
test_images, test_labels = get_fashion_mnist()
grpc_server = '0.0.0.0:8500'
http_rest_server = 'http://localhost:8501/v1/models/fashion_model:predict'
elif dataset == 'mnist':
test_images, test_labels = get_mnist()
grpc_server = '0.0.0.0:8500'
        http_rest_server = 'http://localhost:8501/v1/models/mnist:predict'
else:
raise ValueError('Wrong dataset type!')
test_images = test_images[:num_samples]
test_labels = test_labels[:num_samples]
proto_request_list, json_request_list = create_request(dataset, test_images, batch_size)
grpc_time = time_for_grpc_requests(proto_request_list, server=grpc_server)
print('GRPC-DONE: Batch Size: {}, Time: {}'.format(batch_size, grpc_time))
rest_time = time_for_http_requests(json_request_list, server_url=http_rest_server)
print('HTTP-DONE: Batch Size: {}, Time: {}'.format(batch_size, rest_time))
return grpc_time, rest_time
batch_sizes = [1, 4, 8, 16]
batch_sizes
latency_profile_against_batches = []
for batch_size in batch_sizes:
latency_profile_against_batches.append(latency_profile_http_vs_grpc('mnist', batch_size))
import matplotlib.pyplot as plt
%matplotlib inline
def plot_latencies(latency_profile_against_batches, dataset):
N = len(latency_profile_against_batches)
    # each profile entry is ((grpc_seconds, responses), (rest_seconds, responses)); keep only the seconds
    grpc_latencies = [x[0][0] for x in latency_profile_against_batches]
    http_latencies = [x[1][0] for x in latency_profile_against_batches]
ind = np.arange(N) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots(figsize=(10,8))
rects1 = ax.bar(ind, grpc_latencies, width, color='royalblue')
rects2 = ax.bar(ind+width, http_latencies, width, color='seagreen')
# add some
ax.set_ylabel('Latency (s)')
ax.set_xlabel('Batch Sizes')
ax.set_title('''Latency comparison between gRPC vs REST `predict` requests
on {} dataset'''.format(dataset))
ax.set_xticks(ind + width / 2)
ax.set_xticklabels( ('1', '4', '8', '16') )
ax.legend( (rects1[0], rects2[0]), ('GRPC/PB', 'HTTP/JSON') )
plt.savefig('latency-comp-{}.png'.format(dataset))
plt.show()
plot_latencies(latency_profile_against_batches, 'mnist')
```
The gRPC requests are processed on average about 6 times faster than the HTTP requests.
Future tests: Need to add
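A quick way to back that figure out of the collected profiles (a small sketch; each profile entry is `((grpc_seconds, grpc_responses), (rest_seconds, rest_responses))` as returned by the functions above):
```
# Per-batch-size speedup of gRPC over REST, derived from the profiles collected above
for bs, (grpc_res, rest_res) in zip(batch_sizes, latency_profile_against_batches):
    print(f"batch_size={bs}: REST/gRPC latency ratio = {rest_res[0] / grpc_res[0]:.1f}x")
```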
|
github_jupyter
|
import sys
sys.path.append('../')
import warnings
warnings.filterwarnings('ignore')
import tensorflow as tf
import tensorflow.keras as keras
import requests
import numpy as np
import json
from tqdm import tqdm
from time import time
from apis import *
def get_fashion_mnist():
_, (test_images, test_labels) = keras.datasets.fashion_mnist.load_data()
# reshape data
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1)
# scale the values to 0.0 to 1.0
test_images = test_images / 255.0
return test_images, test_labels
def get_mnist():
_, (test_images, test_labels) = keras.datasets.mnist.load_data()
# reshape data
test_images = test_images.reshape(test_images.shape[0], -1)
# scale the values to 0.0 to 1.0
test_images = test_images / 255.0
return test_images, test_labels
def time_for_grpc_requests(proto_request_list, server='0.0.0.0:8500'):
prediction_service = PredictionService(server)
proto_response_list = []
st = time()
for req in tqdm(proto_request_list):
response = PredictResponse().copy(prediction_service.predict(req._protobuf, 5))
proto_response_list.append(response)
et = time()
return et-st, proto_response_list
def time_for_http_requests(json_request_list, server_url='http://localhost:8501/v1/models/fashion_model:predict'):
json_response_list = []
headers = {"content-type": "application/json"}
st = time()
for req in tqdm(json_request_list):
json_response = requests.post(server_url, data=req, headers=headers)
json_response.raise_for_status()
json_response_list.append(json_response)
et = time()
return et-st, json_response_list
def create_request(dataset, test_images, batch_size):
proto_request_list = []
json_request_list = []
if dataset == 'fashion_mnist':
proto_pred_request = PredictRequest(model_spec=ModelSpec(name='fashion_model', version=1, signature_name='serving_default'))
json_pred_request = {"signature_name": "serving_default"}
elif dataset == 'mnist':
proto_pred_request = PredictRequest(model_spec=ModelSpec(name='mnist', version=1, signature_name='predict_images'))
json_pred_request = {"signature_name": "predict_images"}
for i in range(0, len(test_images), batch_size):
# protobuf message
if dataset == 'fashion_mnist':
proto_pred_request.inputs = {'input_image': {'values' : test_images[i:i+batch_size,:].astype(np.float32)}}
elif dataset == 'mnist':
proto_pred_request.inputs = {'images' : {'values' : test_images[i:i+batch_size,:].astype(np.float32)}}
proto_request_list.append(proto_pred_request)
# json message
json_pred_request.update({'instances' : test_images[i:i+batch_size,:].tolist()})
json_request_list.append(json.dumps(json_pred_request))
return proto_request_list, json_request_list
def latency_profile_http_vs_grpc(dataset, batch_size, num_samples=10000):
if dataset == 'fashion_mnist':
test_images, test_labels = get_fashion_mnist()
grpc_server = '0.0.0.0:8500'
http_rest_server = 'http://localhost:8501/v1/models/fashion_model:predict'
elif dataset == 'mnist':
test_images, test_labels = get_mnist()
grpc_server = '0.0.0.0:8500'
        http_rest_server = 'http://localhost:8501/v1/models/mnist:predict'
else:
raise ValueError('Wrong dataset type!')
test_images = test_images[:num_samples]
test_labels = test_labels[:num_samples]
proto_request_list, json_request_list = create_request(dataset, test_images, batch_size)
grpc_time = time_for_grpc_requests(proto_request_list, server=grpc_server)
print('GRPC-DONE: Batch Size: {}, Time: {}'.format(batch_size, grpc_time))
rest_time = time_for_http_requests(json_request_list, server_url=http_rest_server)
print('HTTP-DONE: Batch Size: {}, Time: {}'.format(batch_size, rest_time))
return grpc_time, rest_time
batch_sizes = [1, 4, 8, 16]
batch_sizes
latency_profile_against_batches = []
for batch_size in batch_sizes:
latency_profile_against_batches.append(latency_profile_http_vs_grpc('mnist', batch_size))
import matplotlib.pyplot as plt
%matplotlib inline
def plot_latencies(latency_profile_against_batches, dataset):
N = len(latency_profile_against_batches)
    # each profile entry is ((grpc_seconds, responses), (rest_seconds, responses)); keep only the seconds
    grpc_latencies = [x[0][0] for x in latency_profile_against_batches]
    http_latencies = [x[1][0] for x in latency_profile_against_batches]
ind = np.arange(N) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots(figsize=(10,8))
rects1 = ax.bar(ind, grpc_latencies, width, color='royalblue')
rects2 = ax.bar(ind+width, http_latencies, width, color='seagreen')
# add some
ax.set_ylabel('Latency (s)')
ax.set_xlabel('Batch Sizes')
ax.set_title('''Latency comparison between gRPC vs REST `predict` requests
on {} dataset'''.format(dataset))
ax.set_xticks(ind + width / 2)
ax.set_xticklabels( ('1', '4', '8', '16') )
ax.legend( (rects1[0], rects2[0]), ('GRPC/PB', 'HTTP/JSON') )
plt.savefig('latency-comp-{}.png'.format(dataset))
plt.show()
plot_latencies(latency_profile_against_batches, 'mnist')
| 0.442155 | 0.713182 |
```
import numpy as np
import tensorflow as tf
%%time
X_train = np.load("../data/array_data_mini/X_train.npz")["arr_0"]
y_train = np.load("../data/array_data_mini/y_train.npz")["arr_0"]
X_val = np.load("../data/array_data_mini/X_val.npz")["arr_0"]
y_val = np.load("../data/array_data_mini/y_val.npz")["arr_0"]
print(X_train.shape)
print(y_train.shape)
print(X_val.shape)
print(y_val.shape)
seq_size = (X_train[0].shape[0],1)
n_classes = 6
batch_size = 32
print(seq_size)
class Gen_Data(tf.keras.utils.Sequence):
# Generate data batches for model
def __init__(self,batch_size,seq_size,X_data,y_data):
# Set required attributes
self.batch_size = batch_size
self.t_size = seq_size
self.X_data = X_data
self.y_data = y_data
self.n_batches = len(self.X_data)//self.batch_size
self.batch_idx = np.array_split(range(len(self.X_data)), self.n_batches)
def __len__(self):
# Set number of batches per epoch
return self.n_batches
def __getitem__(self,idx):
# Fetch one batch
batch_X = self.X_data[self.batch_idx[idx]]
batch_y = self.y_data[self.batch_idx[idx]]
return batch_X, batch_y
train_gen = Gen_Data(batch_size, seq_size, X_train, y_train)
val_gen = Gen_Data(batch_size, seq_size, X_val, y_val)
def get_model(seq_size, n_classes):
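    # 1-D U-Net-style encoder/decoder with residual skip connections,
    # ending in a per-timestep softmax over n_classes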
inputs = tf.keras.Input(shape=seq_size)
# entry block
x = tf.keras.layers.Conv1D(32, 5, strides=2, padding="same")(inputs)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation("relu")(x)
block_out = x
# downsample blocks
for f in [64,128,256]:
x = tf.keras.layers.Conv1D(f, 5, strides=1, padding="same")(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation("relu")(x)
x = tf.keras.layers.Conv1D(f, 5, strides=1, padding="same")(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation("relu")(x)
x = tf.keras.layers.MaxPooling1D(3, strides=2, padding="same")(x)
residual = tf.keras.layers.Conv1D(f, 1, strides=2, padding="same")(block_out)
x = tf.keras.layers.add([x, residual])
block_out = x
# upsample blocks
for f in [256, 128, 64, 32]:
x = tf.keras.layers.Conv1DTranspose(f,5,padding="same")(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation("relu")(x)
x = tf.keras.layers.Conv1DTranspose(f,5,padding="same")(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation("relu")(x)
x = tf.keras.layers.UpSampling1D(2)(x)
residual = tf.keras.layers.UpSampling1D(2)(block_out)
residual = tf.keras.layers.Conv1D(f, 1, padding="same")(residual)
x = tf.keras.layers.add([x, residual])
block_out = x
# Add a per-pixel classification layer
outputs = tf.keras.layers.Conv1D(n_classes, 3, activation="softmax", padding="same")(x)
# Define the model
model = tf.keras.Model(inputs, outputs)
return model
tf.keras.backend.clear_session()
model = get_model(seq_size,n_classes)
model.summary()
loss = tf.keras.losses.SparseCategoricalCrossentropy()
opt = tf.keras.optimizers.Adam()
model.compile(optimizer=opt,loss=loss)
epochs = 5
model.fit(train_gen, epochs=epochs, validation_data=val_gen)
model.save("../models/u_net_mini/")
```
|
github_jupyter
|
import numpy as np
import tensorflow as tf
%%time
X_train = np.load("../data/array_data_mini/X_train.npz")["arr_0"]
y_train = np.load("../data/array_data_mini/y_train.npz")["arr_0"]
X_val = np.load("../data/array_data_mini/X_val.npz")["arr_0"]
y_val = np.load("../data/array_data_mini/y_val.npz")["arr_0"]
print(X_train.shape)
print(y_train.shape)
print(X_val.shape)
print(y_val.shape)
seq_size = (X_train[0].shape[0],1)
n_classes = 6
batch_size = 32
print(seq_size)
class Gen_Data(tf.keras.utils.Sequence):
# Generate data batches for model
def __init__(self,batch_size,seq_size,X_data,y_data):
# Set required attributes
self.batch_size = batch_size
self.t_size = seq_size
self.X_data = X_data
self.y_data = y_data
self.n_batches = len(self.X_data)//self.batch_size
self.batch_idx = np.array_split(range(len(self.X_data)), self.n_batches)
def __len__(self):
# Set number of batches per epoch
return self.n_batches
def __getitem__(self,idx):
# Fetch one batch
batch_X = self.X_data[self.batch_idx[idx]]
batch_y = self.y_data[self.batch_idx[idx]]
return batch_X, batch_y
train_gen = Gen_Data(batch_size, seq_size, X_train, y_train)
val_gen = Gen_Data(batch_size, seq_size, X_val, y_val)
def get_model(seq_size, n_classes):
inputs = tf.keras.Input(shape=seq_size)
# entry block
x = tf.keras.layers.Conv1D(32, 5, strides=2, padding="same")(inputs)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation("relu")(x)
block_out = x
# downsample blocks
for f in [64,128,256]:
x = tf.keras.layers.Conv1D(f, 5, strides=1, padding="same")(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation("relu")(x)
x = tf.keras.layers.Conv1D(f, 5, strides=1, padding="same")(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation("relu")(x)
x = tf.keras.layers.MaxPooling1D(3, strides=2, padding="same")(x)
residual = tf.keras.layers.Conv1D(f, 1, strides=2, padding="same")(block_out)
x = tf.keras.layers.add([x, residual])
block_out = x
# upsample blocks
for f in [256, 128, 64, 32]:
x = tf.keras.layers.Conv1DTranspose(f,5,padding="same")(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation("relu")(x)
x = tf.keras.layers.Conv1DTranspose(f,5,padding="same")(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation("relu")(x)
x = tf.keras.layers.UpSampling1D(2)(x)
residual = tf.keras.layers.UpSampling1D(2)(block_out)
residual = tf.keras.layers.Conv1D(f, 1, padding="same")(residual)
x = tf.keras.layers.add([x, residual])
block_out = x
# Add a per-pixel classification layer
outputs = tf.keras.layers.Conv1D(n_classes, 3, activation="softmax", padding="same")(x)
# Define the model
model = tf.keras.Model(inputs, outputs)
return model
tf.keras.backend.clear_session()
model = get_model(seq_size,n_classes)
model.summary()
loss = tf.keras.losses.SparseCategoricalCrossentropy()
opt = tf.keras.optimizers.Adam()
model.compile(optimizer=opt,loss=loss)
epochs = 5
model.fit(train_gen, epochs=epochs, validation_data=val_gen)
model.save("../models/u_net_mini/")
| 0.720565 | 0.365627 |
# Description
This notebook is used to request a raster clip of an area from a WaPOR data layer, using the WaPOR API.
You will need a WaPOR API key to use this notebook.
# Step 1: Read the API key
Get your API key from https://wapor.apps.fao.org/profile
```
import requests
import pandas as pd
path_query=r'https://io.apps.fao.org/gismgr/api/v1/query/'
path_sign_in=r'https://io.apps.fao.org/gismgr/api/v1/iam/sign-in/'
APIToken=input('Your API key: ')
```
# Step 2: Obtain an access token
Use the API key entered above to obtain an access authorization token.
```
resp_signin=requests.post(path_sign_in,headers={'X-GISMGR-API-KEY':APIToken})
resp_signin = resp_signin.json()
AccessToken=resp_signin['response']['accessToken']
AccessToken
```
# Step 3: Post the query payload
For more examples of query payloads for area time series, visit https://io.apps.fao.org/gismgr/api/v1/swagger-ui/examples/AreaStatsTimeSeries.txt
```
cube_code='L2_PHE_S'
workspace='WAPOR_2'
outputFileName='L2_PHE_17s1_s_clipped.tif'
# get the data cube measure
cube_url=f'https://io.apps.fao.org/gismgr/api/v1/catalog/workspaces/{workspace}/cubes/{cube_code}/measures'
resp=requests.get(cube_url).json()
measure=resp['response']['items'][0]['code']
print('MEASURE: ',measure)
# get the data cube dimensions
cube_url=f'https://io.apps.fao.org/gismgr/api/v1/catalog/workspaces/{workspace}/cubes/{cube_code}/dimensions'
resp=requests.get(cube_url).json()
items=pd.DataFrame.from_dict(resp['response']['items'])
items
```
Define the dimension values that identify the desired raster data
```
year="[2017-01-01,2018-01-01)"
stage="SOS"
season="S1"
```
## Define the area by coordinate extent
```
bbox= [37.95883206252312, 7.89534, 43.32093, 12.3873979377346] #latlon
xmin,ymin,xmax,ymax=bbox[0],bbox[1],bbox[2],bbox[3]
Polygon=[
[xmin,ymin],
[xmin,ymax],
[xmax,ymax],
[xmax,ymin],
[xmin,ymin]
]
query={
"type": "CropRaster",
"params": {
"properties": {
"outputFileName": outputFileName,
"cutline": True,
"tiled": True,
"compressed": True,
"overviews": True
},
"cube": {
"code": cube_code,
"workspaceCode": workspace,
"language": "en"
},
"dimensions": [
{
"code": "SEASON",
"values": [
season
]
},
{
"code": "STAGE",
"values": [
stage
]
},
{
"code": "YEAR",
"values": [
year
]
}
],
"measures": [
measure
],
"shape": {
"type": "Polygon",
"coordinates": [Polygon]
}
}
}
```
## Or define the area using a GeoJSON shape
```
import ogr
shp_fh=r".\data\Awash_shapefile.shp"
shpfile=ogr.Open(shp_fh)
layer=shpfile.GetLayer()
epsg_code=layer.GetSpatialRef().GetAuthorityCode(None)
shape=layer.GetFeature(0).ExportToJson(as_object=True)['geometry']
shape["properties"]={"name": "EPSG:{0}".format(epsg_code)}#latlon projection
query={
"type": "CropRaster",
"params": {
"properties": {
"outputFileName": outputFileName,
"cutline": True,
"tiled": True,
"compressed": True,
"overviews": True
},
"cube": {
"code": cube_code,
"workspaceCode": workspace,
"language": "en"
},
"dimensions": [
{
"code": "SEASON",
"values": [
season
]
},
{
"code": "STAGE",
"values": [
stage
]
},
{
"code": "YEAR",
"values": [
year
]
}
],
"measures": [
measure
],
"shape": shape
}
}
```
Post the query payload (QueryPayload) with the access token in the header. The response contains a URL for querying the job.
```
resp_query=requests.post(path_query,headers={'Authorization':'Bearer {0}'.format(AccessToken)},
json=query)
resp_query = resp_query.json()
job_url=resp_query['response']['links'][0]['href']
job_url
```
# Step 4: Get the job results
It will take some time for the job to finish. Once it is done, its status changes from RUNNING to COMPLETED or COMPLETED WITH ERRORS. If it is completed, the results can be found in the response 'output'.
```
i=0
print('RUNNING',end=" ")
while i==0:
resp = requests.get(job_url)
resp=resp.json()
if resp['response']['status']=='RUNNING':
print('.',end =" ")
if resp['response']['status']=='COMPLETED':
results=resp['response']['output']
print(resp['response']['output'])
i=1
if resp['response']['status']=='COMPLETED WITH ERRORS':
print(resp['response']['log'])
i=1
```
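Optionally (an assumption rather than an API requirement), a short pause between status checks keeps the polling loop from hammering the endpoint:
```
import time

# Gentler variant of the polling loop above: same status handling, with a pause between checks
while True:
    resp = requests.get(job_url).json()
    status = resp['response']['status']
    if status == 'RUNNING':
        print('.', end=' ')
        time.sleep(10)  # arbitrary 10-second polling interval
        continue
    if status == 'COMPLETED':
        results = resp['response']['output']
        print(results)
    elif status == 'COMPLETED WITH ERRORS':
        print(resp['response']['log'])
    break
```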
|
github_jupyter
|
import requests
import pandas as pd
path_query=r'https://io.apps.fao.org/gismgr/api/v1/query/'
path_sign_in=r'https://io.apps.fao.org/gismgr/api/v1/iam/sign-in/'
APIToken=input('Your API key: ')
resp_signin=requests.post(path_sign_in,headers={'X-GISMGR-API-KEY':APIToken})
resp_signin = resp_signin.json()
AccessToken=resp_signin['response']['accessToken']
AccessToken
cube_code='L2_PHE_S'
workspace='WAPOR_2'
outputFileName='L2_PHE_17s1_s_clipped.tif'
# get the data cube measure
cube_url=f'https://io.apps.fao.org/gismgr/api/v1/catalog/workspaces/{workspace}/cubes/{cube_code}/measures'
resp=requests.get(cube_url).json()
measure=resp['response']['items'][0]['code']
print('MEASURE: ',measure)
# get the data cube dimensions
cube_url=f'https://io.apps.fao.org/gismgr/api/v1/catalog/workspaces/{workspace}/cubes/{cube_code}/dimensions'
resp=requests.get(cube_url).json()
items=pd.DataFrame.from_dict(resp['response']['items'])
items
year="[2017-01-01,2018-01-01)"
stage="SOS"
season="S1"
bbox= [37.95883206252312, 7.89534, 43.32093, 12.3873979377346] #latlon
xmin,ymin,xmax,ymax=bbox[0],bbox[1],bbox[2],bbox[3]
Polygon=[
[xmin,ymin],
[xmin,ymax],
[xmax,ymax],
[xmax,ymin],
[xmin,ymin]
]
query={
"type": "CropRaster",
"params": {
"properties": {
"outputFileName": outputFileName,
"cutline": True,
"tiled": True,
"compressed": True,
"overviews": True
},
"cube": {
"code": cube_code,
"workspaceCode": workspace,
"language": "en"
},
"dimensions": [
{
"code": "SEASON",
"values": [
season
]
},
{
"code": "STAGE",
"values": [
stage
]
},
{
"code": "YEAR",
"values": [
year
]
}
],
"measures": [
measure
],
"shape": {
"type": "Polygon",
"coordinates": [Polygon]
}
}
}
import ogr
shp_fh=r".\data\Awash_shapefile.shp"
shpfile=ogr.Open(shp_fh)
layer=shpfile.GetLayer()
epsg_code=layer.GetSpatialRef().GetAuthorityCode(None)
shape=layer.GetFeature(0).ExportToJson(as_object=True)['geometry']
shape["properties"]={"name": "EPSG:{0}".format(epsg_code)}#latlon projection
query={
"type": "CropRaster",
"params": {
"properties": {
"outputFileName": outputFileName,
"cutline": True,
"tiled": True,
"compressed": True,
"overviews": True
},
"cube": {
"code": cube_code,
"workspaceCode": workspace,
"language": "en"
},
"dimensions": [
{
"code": "SEASON",
"values": [
season
]
},
{
"code": "STAGE",
"values": [
stage
]
},
{
"code": "YEAR",
"values": [
year
]
}
],
"measures": [
measure
],
"shape": shape
}
}
resp_query=requests.post(path_query,headers={'Authorization':'Bearer {0}'.format(AccessToken)},
json=query)
resp_query = resp_query.json()
job_url=resp_query['response']['links'][0]['href']
job_url
i=0
print('RUNNING',end=" ")
while i==0:
resp = requests.get(job_url)
resp=resp.json()
if resp['response']['status']=='RUNNING':
print('.',end =" ")
if resp['response']['status']=='COMPLETED':
results=resp['response']['output']
print(resp['response']['output'])
i=1
if resp['response']['status']=='COMPLETED WITH ERRORS':
print(resp['response']['log'])
i=1
| 0.24608 | 0.696133 |
# N-BEATS
In this notebook, we show an example of how **N-BEATS** can be used with darts. If you are new to darts, we recommend you first follow the `01-darts-intro.ipynb` notebook.
**N-BEATS** is a state-of-the-art model that shows the potential of **pure DL architectures** in the context of time-series forecasting. It outperforms well-established statistical approaches on the *M3* and *M4* competitions. For more details on the model, see: https://arxiv.org/pdf/1905.10437.pdf.
```
# fix python path if working locally
from utils import fix_pythonpath_if_working_locally
fix_pythonpath_if_working_locally()
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from darts import TimeSeries
from darts.models import NBEATSModel
from darts.dataprocessing.transformers import Scaler, MissingValuesFiller
from darts.metrics import mape, r2_score
def display_forecast(pred_series, ts_transformed, forecast_type, start_date=None):
plt.figure(figsize=(8,5))
if (start_date):
ts_transformed = ts_transformed.drop_before(start_date)
ts_transformed.univariate_component(0).plot(label='actual')
pred_series.plot(label=('historic ' + forecast_type + ' forecasts'))
plt.title('R2: {}'.format(r2_score(ts_transformed.univariate_component(0), pred_series)))
plt.legend();
```
## Daily energy generation example
We test NBEATS on a daily energy generation dataset from a Run-of-river power plant, as it exhibits various levels of seasonalities
```
df = pd.read_csv('energy_dataset.csv', delimiter=",")
df['time'] = pd.to_datetime(df['time'], utc=True)
df['time']= df.time.dt.tz_localize(None)
df.set_index('time')['generation hydro run-of-river and poundage'].plot()
plt.title('Hourly generation hydro run-of-river and poundage');
```
To simplify things, we work with the daily generation, and we fill the missing values present in the data by using the `MissingValuesFiller`:
```
df_day_avg = df.groupby(df['time'].astype(str).str.split(" ").str[0]).mean().reset_index()
filler = MissingValuesFiller()
scaler = Scaler()
series = scaler.fit_transform(
filler.transform(
TimeSeries.from_dataframe(
df_day_avg, 'time', ['generation hydro run-of-river and poundage'])
)
)
series.plot()
plt.title('Daily generation hydro run-of-river and poundage');
```
We split the data into train and validation sets. Normally we would need to use an additional test set to validate the model on unseen data, but we will skip it for this example.
```
train, val = series.split_after(pd.Timestamp('20170901'))
```
### Generic architecture
N-BEATS is a univariate model architecture that offers two configurations: a *generic* one and an *interpretable* one. The **generic architecture** uses as little prior knowledge as possible, with no feature engineering, no scaling and no internal architectural components that may be considered time-series-specific.
To start off, we use a model with the generic architecture of N-BEATS:
```
model_nbeats = NBEATSModel(
input_chunk_length=30,
output_chunk_length=7,
generic_architecture=True,
num_stacks=10,
num_blocks=1,
num_layers=4,
layer_widths=512,
n_epochs=100,
nr_epochs_val_period=1,
batch_size=800,
model_name='nbeats_run'
)
model_nbeats.fit(train, val_series=val, verbose=True)
```
Let's see the historical forecasts the model would have produced with an expanding training window, and a forecasting horizon of 7:
```
pred_series = model_nbeats.historical_forecasts(
series,
start=pd.Timestamp('20170901'),
forecast_horizon=7,
stride=5,
retrain=False,
verbose=True
)
display_forecast(pred_series, series['0'], '7 day', start_date=pd.Timestamp('20170901'))
```
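The same comparison that `display_forecast` makes with R² can also be expressed as a MAPE, using the `mape` metric imported above (a small sketch with the same arguments as the R² call):
```
# Mean absolute percentage error of the 7-day backtests (lower is better)
print('Backtest MAPE: {:.2f}%'.format(mape(series['0'], pred_series)))
```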
### Interpretable model
N-BEATS offers an *interpretable architecture* consisting of two stacks: A **trend** stack and a **seasonality** stack. The architecture is designed so that:
- The trend component is removed from the input before it is fed into the seasonality stack
- The **partial forecasts of trend and seasonality are available** as separate interpretable outputs
```
model_nbeats = NBEATSModel(
input_chunk_length=30,
output_chunk_length=7,
generic_architecture=False,
num_blocks=3,
num_layers=4,
layer_widths=512,
n_epochs=100,
nr_epochs_val_period=1,
batch_size=800,
model_name='nbeats_interpretable_run'
)
model_nbeats.fit(series=train, val_series=val, verbose=True)
```
Let's see the historical forecasts the model would have produced with an expanding training window, and a forecasting horizon of 7:
```
pred_series = model_nbeats.historical_forecasts(
series,
start=pd.Timestamp('20170901'),
forecast_horizon=7,
stride=5,
retrain=False,
verbose=True
)
display_forecast(pred_series, series['0'], '7 day', start_date=pd.Timestamp('20170901'))
```
|
github_jupyter
|
# fix python path if working locally
from utils import fix_pythonpath_if_working_locally
fix_pythonpath_if_working_locally()
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from darts import TimeSeries
from darts.models import NBEATSModel
from darts.dataprocessing.transformers import Scaler, MissingValuesFiller
from darts.metrics import mape, r2_score
def display_forecast(pred_series, ts_transformed, forecast_type, start_date=None):
plt.figure(figsize=(8,5))
if (start_date):
ts_transformed = ts_transformed.drop_before(start_date)
ts_transformed.univariate_component(0).plot(label='actual')
pred_series.plot(label=('historic ' + forecast_type + ' forecasts'))
plt.title('R2: {}'.format(r2_score(ts_transformed.univariate_component(0), pred_series)))
plt.legend();
df = pd.read_csv('energy_dataset.csv', delimiter=",")
df['time'] = pd.to_datetime(df['time'], utc=True)
df['time']= df.time.dt.tz_localize(None)
df.set_index('time')['generation hydro run-of-river and poundage'].plot()
plt.title('Hourly generation hydro run-of-river and poundage');
df_day_avg = df.groupby(df['time'].astype(str).str.split(" ").str[0]).mean().reset_index()
filler = MissingValuesFiller()
scaler = Scaler()
series = scaler.fit_transform(
filler.transform(
TimeSeries.from_dataframe(
df_day_avg, 'time', ['generation hydro run-of-river and poundage'])
)
)
series.plot()
plt.title('Daily generation hydro run-of-river and poundage');
train, val = series.split_after(pd.Timestamp('20170901'))
model_nbeats = NBEATSModel(
input_chunk_length=30,
output_chunk_length=7,
generic_architecture=True,
num_stacks=10,
num_blocks=1,
num_layers=4,
layer_widths=512,
n_epochs=100,
nr_epochs_val_period=1,
batch_size=800,
model_name='nbeats_run'
)
model_nbeats.fit(train, val_series=val, verbose=True)
pred_series = model_nbeats.historical_forecasts(
series,
start=pd.Timestamp('20170901'),
forecast_horizon=7,
stride=5,
retrain=False,
verbose=True
)
display_forecast(pred_series, series['0'], '7 day', start_date=pd.Timestamp('20170901'))
model_nbeats = NBEATSModel(
input_chunk_length=30,
output_chunk_length=7,
generic_architecture=False,
num_blocks=3,
num_layers=4,
layer_widths=512,
n_epochs=100,
nr_epochs_val_period=1,
batch_size=800,
model_name='nbeats_interpretable_run'
)
model_nbeats.fit(series=train, val_series=val, verbose=True)
pred_series = model_nbeats.historical_forecasts(
series,
start=pd.Timestamp('20170901'),
forecast_horizon=7,
stride=5,
retrain=False,
verbose=True
)
display_forecast(pred_series, series['0'], '7 day', start_date=pd.Timestamp('20170901'))
| 0.452536 | 0.986572 |
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from utils import data_loading
```
# Clone the repo
```
# Clone the entire repo.
# !git clone -b data-inspecting --single-branch https://github.com/NewLuminous/Zalo-Vietnamese-Wiki-QA.git zaloqa
# %cd zaloqa
# !ls
```
# Load data
```
zalo_loader = data_loading.ZaloLoader()
zalo_data = zalo_loader.read_csv("data/zaloai/train.csv")
zalo_data
squad_vn_loader = data_loading.SquadLoader()
squad_vn_data = squad_vn_loader.read_csv("data/mailong25/squad-v2.0-mailong25.csv")
squad_vn_data = squad_vn_loader.read_csv("data/facebook/test-context-vi-question-vi_fb.csv")
squad_vn_data = squad_vn_loader.read_csv("data/facebook/dev-context-vi-question-vi_fb.csv")
squad_vn_data
squad_loader = data_loading.SquadLoader()
squad_data = squad_loader.read_csv("data/squad/train-v2.0_part_1.csv")
squad_data = squad_loader.read_csv("data/squad/train-v2.0_part_2.csv")
squad_data = squad_loader.read_csv("data/squad/dev-v2.0.csv")
squad_data
combined_data = data_loading.load(src=['zaloai', 'mailong25', 'facebook'])
combined_data
```
# Inspect data
```
zalo_data.isna().sum()
label_count = pd.concat([
zalo_data.join(pd.Series(np.full(len(zalo_data), 'Zalo'), name='dataset'))[['dataset', 'label']],
squad_vn_data.join(pd.Series(np.full(len(squad_vn_data), 'SquadVN'), name='dataset'))[['dataset', 'label']],
squad_data.join(pd.Series(np.full(len(squad_data), 'Squad'), name='dataset'))[['dataset', 'label']]
], axis = 0)
sns.countplot(x='dataset', hue='label', data=label_count)
question_count = pd.Series({'Zalo': zalo_data['question'].nunique(),
'SquadVN': squad_vn_data['question'].nunique(),
'Squad': squad_data['question'].nunique()})
_, ax = plt.subplots()
for i, v in enumerate(question_count):
ax.text(v + 2000, i, str(v), color='blue', fontweight='bold')
plt.title('Number of questions')
question_count.plot(kind='barh', xlim=(0, 170000)).invert_yaxis()
paragraph_count = pd.Series({'Zalo': zalo_data['text'].nunique(),
'SquadVN': squad_vn_data['text'].nunique(),
'Squad': squad_data['text'].nunique()})
_, ax = plt.subplots()
for i, v in enumerate(paragraph_count):
ax.text(v + 1000, i, str(v), color='blue', fontweight='bold')
plt.title('Number of paragraphs')
paragraph_count.plot(kind='barh', xlim=(0, 25000)).invert_yaxis()
title_count = pd.Series({'Zalo': zalo_data['title'].nunique(),
'SquadVN': squad_vn_data['title'].nunique(),
'Squad': squad_data['title'].nunique()})
_, ax = plt.subplots()
for i, v in enumerate(title_count):
ax.text(v + 200, i, str(v), color='blue', fontweight='bold')
plt.title('Number of titles')
title_count.plot(kind='barh', xlim=(0, 11000)).invert_yaxis()
plt.title('Top 10 popular titles in Zalo dataset')
plt.xlabel('Frequency')
zalo_data['title'].value_counts()[1:11].plot(kind='barh').invert_yaxis()
plt.title('Top 10 popular titles in SquadVN dataset')
plt.xlabel('Frequency')
squad_vn_data['title'].value_counts()[1:11].plot(kind='barh').invert_yaxis()
plt.title('Top 10 popular titles in Squad dataset')
plt.xlabel('Frequency')
squad_data['title'].value_counts()[1:11].plot(kind='barh').invert_yaxis()
plt.title('Length of question (Zalo)')
plt.xlabel('Length')
zalo_data['question'].str.split(' ').str.len().plot.hist(xlim=(0, 25), bins=20)
import tokenization
plt.title('Length of question in tokens (Zalo)')
plt.xlabel('Length')
zalo_data['question'].apply(tokenization.tokenize).str.len().plot.hist(xlim=(0, 25), bins=20)
plt.title('Length of question (SquadVN)')
plt.xlabel('Length')
squad_vn_data['question'].str.split(' ').str.len().plot.hist(xlim=(0, 30), bins=20)
plt.title('Length of question (Squad)')
plt.xlabel('Length')
squad_data['question'].str.split(' ').str.len().plot.hist(xlim=(0, 40), bins=20)
question_len = pd.Series({'Zalo': zalo_data['question'].str.split(' ').str.len().mean(),
'SquadVN': squad_vn_data['question'].str.split(' ').str.len().mean(),
'Squad': squad_data['question'].str.split(' ').str.len().mean()})
plt.title('Length of question')
question_len.plot(kind='barh').invert_yaxis()
plt.title('Length of paragraph (Zalo)')
plt.xlabel('Length')
zalo_data['text'].str.split(' ').str.len().plot.hist(xlim=(0, 300), bins=100)
plt.title('Length of paragraph in tokens')
plt.xlabel('Length')
zalo_data['text'].apply(tokenization.tokenize).str.len().plot.hist(xlim=(0, 300), bins=100)
plt.title('Length of paragraph (SquadVN)')
plt.xlabel('Length')
squad_vn_data['text'].str.split(' ').str.len().plot.hist(xlim=(0, 700), bins=100)
plt.title('Length of paragraph (Squad)')
plt.xlabel('Length')
squad_data['text'].str.split(' ').str.len().plot.hist(xlim=(0, 500), bins=100)
paragraph_len = pd.Series({'Zalo': zalo_data['text'].str.split(' ').str.len().mean(),
'SquadVN': squad_vn_data['text'].str.split(' ').str.len().mean(),
'Squad': squad_data['text'].str.split(' ').str.len().mean()})
plt.title('Length of paragraph')
paragraph_len.plot(kind='barh').invert_yaxis()
plt.title('Length of question (Zalo + SquadVN)')
plt.xlabel('Length')
combined_data['question'].str.split(' ').str.len().plot.hist(xlim=(0, 30), bins=20)
plt.title('Length of paragraph (Zalo + SquadVN)')
plt.xlabel('Length')
combined_data['text'].str.split(' ').str.len().plot.hist(xlim=(0, 700), bins=100)
```
|
github_jupyter
|
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from utils import data_loading
# Clone the entire repo.
# !git clone -b data-inspecting --single-branch https://github.com/NewLuminous/Zalo-Vietnamese-Wiki-QA.git zaloqa
# %cd zaloqa
# !ls
zalo_loader = data_loading.ZaloLoader()
zalo_data = zalo_loader.read_csv("data/zaloai/train.csv")
zalo_data
squad_vn_loader = data_loading.SquadLoader()
squad_vn_data = squad_vn_loader.read_csv("data/mailong25/squad-v2.0-mailong25.csv")
squad_vn_data = squad_vn_loader.read_csv("data/facebook/test-context-vi-question-vi_fb.csv")
squad_vn_data = squad_vn_loader.read_csv("data/facebook/dev-context-vi-question-vi_fb.csv")
squad_vn_data
squad_loader = data_loading.SquadLoader()
squad_data = squad_loader.read_csv("data/squad/train-v2.0_part_1.csv")
squad_data = squad_loader.read_csv("data/squad/train-v2.0_part_2.csv")
squad_data = squad_loader.read_csv("data/squad/dev-v2.0.csv")
squad_data
combined_data = data_loading.load(src=['zaloai', 'mailong25', 'facebook'])
combined_data
zalo_data.isna().sum()
label_count = pd.concat([
zalo_data.join(pd.Series(np.full(len(zalo_data), 'Zalo'), name='dataset'))[['dataset', 'label']],
squad_vn_data.join(pd.Series(np.full(len(squad_vn_data), 'SquadVN'), name='dataset'))[['dataset', 'label']],
squad_data.join(pd.Series(np.full(len(squad_data), 'Squad'), name='dataset'))[['dataset', 'label']]
], axis = 0)
sns.countplot(x='dataset', hue='label', data=label_count)
question_count = pd.Series({'Zalo': zalo_data['question'].nunique(),
'SquadVN': squad_vn_data['question'].nunique(),
'Squad': squad_data['question'].nunique()})
_, ax = plt.subplots()
for i, v in enumerate(question_count):
ax.text(v + 2000, i, str(v), color='blue', fontweight='bold')
plt.title('Number of questions')
question_count.plot(kind='barh', xlim=(0, 170000)).invert_yaxis()
paragraph_count = pd.Series({'Zalo': zalo_data['text'].nunique(),
'SquadVN': squad_vn_data['text'].nunique(),
'Squad': squad_data['text'].nunique()})
_, ax = plt.subplots()
for i, v in enumerate(paragraph_count):
ax.text(v + 1000, i, str(v), color='blue', fontweight='bold')
plt.title('Number of paragraphs')
paragraph_count.plot(kind='barh', xlim=(0, 25000)).invert_yaxis()
title_count = pd.Series({'Zalo': zalo_data['title'].nunique(),
'SquadVN': squad_vn_data['title'].nunique(),
'Squad': squad_data['title'].nunique()})
_, ax = plt.subplots()
for i, v in enumerate(title_count):
ax.text(v + 200, i, str(v), color='blue', fontweight='bold')
plt.title('Number of titles')
title_count.plot(kind='barh', xlim=(0, 11000)).invert_yaxis()
plt.title('Top 10 popular titles in Zalo dataset')
plt.xlabel('Frequency')
zalo_data['title'].value_counts()[1:11].plot(kind='barh').invert_yaxis()
plt.title('Top 10 popular titles in SquadVN dataset')
plt.xlabel('Frequency')
squad_vn_data['title'].value_counts()[1:11].plot(kind='barh').invert_yaxis()
plt.title('Top 10 popular titles in Squad dataset')
plt.xlabel('Frequency')
squad_data['title'].value_counts()[1:11].plot(kind='barh').invert_yaxis()
plt.title('Length of question (Zalo)')
plt.xlabel('Length')
zalo_data['question'].str.split(' ').str.len().plot.hist(xlim=(0, 25), bins=20)
import tokenization
plt.title('Length of question in tokens (Zalo)')
plt.xlabel('Length')
zalo_data['question'].apply(tokenization.tokenize).str.len().plot.hist(xlim=(0, 25), bins=20)
plt.title('Length of question (SquadVN)')
plt.xlabel('Length')
squad_vn_data['question'].str.split(' ').str.len().plot.hist(xlim=(0, 30), bins=20)
plt.title('Length of question (Squad)')
plt.xlabel('Length')
squad_data['question'].str.split(' ').str.len().plot.hist(xlim=(0, 40), bins=20)
question_len = pd.Series({'Zalo': zalo_data['question'].str.split(' ').str.len().mean(),
'SquadVN': squad_vn_data['question'].str.split(' ').str.len().mean(),
'Squad': squad_data['question'].str.split(' ').str.len().mean()})
plt.title('Length of question')
question_len.plot(kind='barh').invert_yaxis()
plt.title('Length of paragraph (Zalo)')
plt.xlabel('Length')
zalo_data['text'].str.split(' ').str.len().plot.hist(xlim=(0, 300), bins=100)
plt.title('Length of paragraph in tokens')
plt.xlabel('Length')
zalo_data['text'].apply(tokenization.tokenize).str.len().plot.hist(xlim=(0, 300), bins=100)
plt.title('Length of paragraph (SquadVN)')
plt.xlabel('Length')
squad_vn_data['text'].str.split(' ').str.len().plot.hist(xlim=(0, 700), bins=100)
plt.title('Length of paragraph (Squad)')
plt.xlabel('Length')
squad_data['text'].str.split(' ').str.len().plot.hist(xlim=(0, 500), bins=100)
paragraph_len = pd.Series({'Zalo': zalo_data['text'].str.split(' ').str.len().mean(),
'SquadVN': squad_vn_data['text'].str.split(' ').str.len().mean(),
'Squad': squad_data['text'].str.split(' ').str.len().mean()})
plt.title('Length of paragraph')
paragraph_len.plot(kind='barh').invert_yaxis()
plt.title('Length of question (Zalo + SquadVN)')
plt.xlabel('Length')
combined_data['question'].str.split(' ').str.len().plot.hist(xlim=(0, 30), bins=20)
plt.title('Length of paragraph (Zalo + SquadVN)')
plt.xlabel('Length')
combined_data['text'].str.split(' ').str.len().plot.hist(xlim=(0, 700), bins=100)
| 0.319865 | 0.889 |
# Creating your own datasets on your private server
In addition to querying existing datasets, the QCArchive software can be used to easily generate new ones. In this example, we will create a new dataset of small molecules with up to 3 heavy atoms and compute their DFT and AN1-1x energies.
For this example, we will use a demonstration "Snowflake" server which runs calculations locally in this Jupyter notebook session. In general, QCArchive can be used with thousands of distributed compute nodes at once.
```
import numpy as np
import pandas as pd
import qcportal as ptl
from qcfractal import FractalSnowflakeHandler
server = FractalSnowflakeHandler()
local_client = server.client()
```
Our new dataset will be called "QM3":
```
qm3 = ptl.collections.Dataset(name="QM3", client=local_client, default_units="hartree")
```
### Adding molecules to a dataset
The following function counts heavy atoms in a molecule:
```
def count_heavy_atoms(molecule):
return len(list(filter(lambda a: a != 'H', molecule.symbols)))
```
The `add_entry` function adds a molecule to a dataset. Below, we add all molecules in QM7b with 3 or fewer heavy atoms. The `save` function commits these changes to the server. First, pull QM7b down from the MolSSI server:
```
client = ptl.FractalClient()
qm7b = client.get_collection("dataset", "QM7b")
qm7b_mols = qm7b.get_molecules()
for molecule in qm7b_mols["molecule"]:
if count_heavy_atoms(molecule) <= 3:
qm3.add_entry(f"{molecule.name}_{molecule.get_hash()[:2]}", molecule)
qm3.save()
```
We can now query the server for the molecules in our dataset:
```
qm3.get_molecules()
```
And look at one of them:
```
qm3.get_molecules(subset="C2H6O_18")
```
### Running calculations
Our QM3 dataset now has all of its molecules, but no properties have been computed for them.
```
qm3.list_values()
```
----
<img src="https://raw.githubusercontent.com/psi4/psi4media/master/logos-psi4/psi4square.png" alt="psi4" align="left" style="width: 80px;"/>
<img src="https://raw.githubusercontent.com/aiqm/torchani/master/logo1.png" alt="torchani" align="right" style="width: 120px;"/>
The `compute` function is used to submit calculations for every molecule in a dataset.
We will compute the ωB97x/6-31g(d) energy for each molecule using the Psi4 program, and the ANI-1x energy using the TorchANI program. (Other supported programs include CFOUR, entos, GAMESS, Q-Chem, Molpro, MOPAC, NWChem, RDKit, TeraChem, and Turbomole.)
```
qm3.compute(program='psi4', method='wB97x', basis='6-31g(d)')
qm3.compute(program="torchani", method="ANI1x")
```
The calculations are submitted and run asynchronously.
As before, values are described with `list_values` and queried with `get_values`. Incomplete calculations show up as `NaN`, pandas placeholder for missing data.
```
qm3.list_values()
dft_data = qm3.get_values(program='psi4')
dft_data
ml_model_data = qm3.get_values(program='torchani')
ml_model_data
```
We can compare the ANI-1x predictions to the DFT values:
```
import plotly.express as px
data = pd.merge(dft_data, ml_model_data, left_index=True, right_index=True)
data["Unsigned Difference (Hartree)"] = np.abs(data["WB97X/6-31g(d)-Psi4"] - data["ANI1X-Torchani"])
fig = px.violin(data, y="Unsigned Difference (Hartree)", box=True,
                title="Difference Distribution between ANI-1x and ωB97x/6-31g(d)")
fig.show()
```
## Other calculations you can do
- `Dataset`: Single point energies, gradients, and frequencies
- `ReactionDataset`: Reactions and interaction energies, with many-body counterpoise corrections
- `OptimizationDataset`: Geometry optimization
- `TorsionDriveDataset`: PES scans over torsion angles, for force field fitting.
Talk to us about adding more!
- (AI)MD trajectories?
- Normal mode sampling?
## The QCArchive stack enables large-scale, multi-resource calculations

## Ongoing data enrichment efforts
With the QCArchive framework, it is very easy to submit new calculations on existing datasets. In the MolSSI database, we are augmenting existing datasets with a common set of calculations at various levels of DFT:
- HF / Def2-TZVP
- LDA / Def2-TZVP
- PBE-D3M(BJ) / Def2-TZVP
- B3LYP-D3M(BJ) / Def2-TZVP
- ωB97x-D3(BJ) / Def2-TZVP
and, where feasible:
- MP2 / cc-pVTZ
- CCSD(T) / cc-pVTZ
Let us know if there are properties/calculations that you would like!
## Extras
```
from IPython.core.display import HTML
def print_info(dataset):
print(f"Name: {dataset.data.name}")
print()
print(f"Data Points: {dataset.data.metadata['data_points']}")
print(f"Elements: {dataset.data.metadata['elements']}")
print(f"Labels: {dataset.data.metadata['labels']}")
display(HTML("<u>Description:</u> " + dataset.data.description))
for cite in dataset.data.metadata["citations"]:
display(HTML(cite['acs_citation']))
```
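As a usage example (assuming the collection carries the metadata fields referenced above), the helper can be pointed at the QM7b collection pulled earlier:
```
print_info(qm7b)
```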
|
github_jupyter
|
import numpy as np
import pandas as pd
import qcportal as ptl
from qcfractal import FractalSnowflakeHandler
server = FractalSnowflakeHandler()
local_client = server.client()
qm3 = ptl.collections.Dataset(name="QM3", client=local_client, default_units="hartree")
def count_heavy_atoms(molecule):
return len(list(filter(lambda a: a != 'H', molecule.symbols)))
client = ptl.FractalClient()
qm7b = client.get_collection("dataset", "QM7b")
qm7b_mols = qm7b.get_molecules()
for molecule in qm7b_mols["molecule"]:
if count_heavy_atoms(molecule) <= 3:
qm3.add_entry(f"{molecule.name}_{molecule.get_hash()[:2]}", molecule)
qm3.save()
qm3.get_molecules()
qm3.get_molecules(subset="C2H6O_18")
qm3.list_values()
qm3.compute(program='psi4', method='wB97x', basis='6-31g(d)')
qm3.compute(program="torchani", method="ANI1x")
qm3.list_values()
dft_data = qm3.get_values(program='psi4')
dft_data
ml_model_data = qm3.get_values(program='torchani')
ml_model_data
import plotly.express as px
data = pd.merge(dft_data, ml_model_data, left_index=True, right_index=True)
data["Unsigned Difference (Hartree)"] = np.abs(data["WB97X/6-31g(d)-Psi4"] - data["ANI1X-Torchani"])
fig = px.violin(data, y="Unsigned Difference (Hartree)", box=True,
                title="Difference Distribution between ANI-1x and ωB97x/6-31g(d)")
fig.show()
from IPython.core.display import HTML
def print_info(dataset):
print(f"Name: {dataset.data.name}")
print()
print(f"Data Points: {dataset.data.metadata['data_points']}")
print(f"Elements: {dataset.data.metadata['elements']}")
print(f"Labels: {dataset.data.metadata['labels']}")
display(HTML("<u>Description:</u> " + dataset.data.description))
for cite in dataset.data.metadata["citations"]:
display(HTML(cite['acs_citation']))
| 0.369543 | 0.958886 |
## _*H2 ground state energy computation using Quantum Phase Estimation*_
This notebook demonstrates using Qiskit Aqua Chemistry to compute the ground state energy of the Hydrogen (H2) molecule using the QPE (Quantum Phase Estimation) algorithm. The result is compared to the same energy as computed by the ExactEigensolver.
This notebook populates a dictionary, which is a programmatic representation of an input file, in order to drive the qiskit_aqua_chemistry stack. Such a dictionary can be manipulated programmatically. A sibling notebook `h2_iqpe` is also provided, which showcases how the ground state energies over a range of inter-atomic distances can be computed and then plotted as well.
This notebook has been written to use the PYSCF chemistry driver. See the PYSCF chemistry driver readme if you need to install the external PySCF library that this driver requires.
```
from qiskit_aqua_chemistry import AquaChemistry
import time
distance = 0.735
molecule = 'H .0 .0 0; H .0 .0 {}'.format(distance)
# Input dictionary to configure Qiskit Aqua Chemistry for the chemistry problem.
aqua_chemistry_qpe_dict = {
'driver': {'name': 'PYSCF'},
'PYSCF': {
'atom': molecule,
'basis': 'sto3g'
},
'operator': {'name': 'hamiltonian', 'transformation': 'full', 'qubit_mapping': 'parity'},
'algorithm': {
'name': 'QPE',
'num_ancillae': 9,
'num_time_slices': 50,
'expansion_mode': 'suzuki',
'expansion_order': 2,
},
'initial_state': {'name': 'HartreeFock'},
'backend': {
'name': 'local_qasm_simulator',
'shots': 100,
}
}
aqua_chemistry_ees_dict = {
'driver': {'name': 'PYSCF'},
'PYSCF': {'atom': molecule, 'basis': 'sto3g'},
'operator': {'name': 'hamiltonian', 'transformation': 'full', 'qubit_mapping': 'parity'},
'algorithm': {
'name': 'ExactEigensolver',
},
}
```
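As a sketch of the "manipulated programmatically" point above (the distances below are arbitrary), the same dictionary can be rebuilt for several inter-atomic distances, which is what the sibling `h2_iqpe` notebook does at scale:
```
import copy

# Rebuild the QPE input dictionary for a few hypothetical H-H distances
for d in [0.5, 0.735, 1.0]:
    qpe_dict = copy.deepcopy(aqua_chemistry_qpe_dict)
    qpe_dict['PYSCF']['atom'] = 'H .0 .0 0; H .0 .0 {}'.format(d)
    # AquaChemistry().run(qpe_dict) would then return the ground state energy at distance d
```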
With the two algorithms configured, we can then run them and check the results, as follows.
```
start_time = time.time()
result_qpe = AquaChemistry().run(aqua_chemistry_qpe_dict)
result_ees = AquaChemistry().run(aqua_chemistry_ees_dict)
print("--- computation completed in %s seconds ---" % (time.time() - start_time))
print('The groundtruth total ground state energy is {}.'.format(
result_ees['energy']
))
print('The total ground state energy as computed by QPE is {}.'.format(
result_qpe['energy']
))
print('In comparison, the Hartree-Fock ground state energy is {}.'.format(
result_ees['hf_energy']
))
```
|
github_jupyter
|
from qiskit_aqua_chemistry import AquaChemistry
import time
distance = 0.735
molecule = 'H .0 .0 0; H .0 .0 {}'.format(distance)
# Input dictionary to configure Qiskit Aqua Chemistry for the chemistry problem.
aqua_chemistry_qpe_dict = {
'driver': {'name': 'PYSCF'},
'PYSCF': {
'atom': molecule,
'basis': 'sto3g'
},
'operator': {'name': 'hamiltonian', 'transformation': 'full', 'qubit_mapping': 'parity'},
'algorithm': {
'name': 'QPE',
'num_ancillae': 9,
'num_time_slices': 50,
'expansion_mode': 'suzuki',
'expansion_order': 2,
},
'initial_state': {'name': 'HartreeFock'},
'backend': {
'name': 'local_qasm_simulator',
'shots': 100,
}
}
aqua_chemistry_ees_dict = {
'driver': {'name': 'PYSCF'},
'PYSCF': {'atom': molecule, 'basis': 'sto3g'},
'operator': {'name': 'hamiltonian', 'transformation': 'full', 'qubit_mapping': 'parity'},
'algorithm': {
'name': 'ExactEigensolver',
},
}
start_time = time.time()
result_qpe = AquaChemistry().run(aqua_chemistry_qpe_dict)
result_ees = AquaChemistry().run(aqua_chemistry_ees_dict)
print("--- computation completed in %s seconds ---" % (time.time() - start_time))
print('The groundtruth total ground state energy is {}.'.format(
result_ees['energy']
))
print('The total ground state energy as computed by QPE is {}.'.format(
result_qpe['energy']
))
print('In comparison, the Hartree-Fock ground state energy is {}.'.format(
result_ees['hf_energy']
))
| 0.571886 | 0.986942 |
```
# Regular EDA (exploratory data analysis) and plotting libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# we want our plots to appear inside the notebook
%matplotlib inline
from google.colab import drive
drive.mount('/content/drive')
# Import training and validation sets
df = pd.read_csv("/content/drive/MyDrive/ml/data/weather.csv",
low_memory=False)
df.info()
# Finding how many missing values are there
df.isna().sum()
```
**NB**: NO MISSING VALUES
```
# Columns of the data set
df.columns
# Plotting DATE vs PRCP for the first 1000 samples
fig, ax = plt.subplots(figsize=(10,8))
ax.scatter(df["DATE"][:1000], df["PRCP"][:1000]);
df.DATE[:1000]
# Plotting PRCP in histogram
df.PRCP.plot.hist();
```
### Parsing dates
When we work with time series data, we want to enrich the time & date component as much as possible.
We can do that by telling pandas which of our columns has dates in it using the parse_dates parameter.
```
# Import data again but this time parse dates
df = pd.read_csv("/content/drive/MyDrive/ml/data/weather.csv",
low_memory=False,
parse_dates=["DATE"])
df.DATE.dtype
df.DATE[:1000]
fig, ax = plt.subplots(figsize=(10,8))
ax.scatter(df["DATE"][:1000], df["PRCP"][:1000]);
df.head()
df.head().T
df.DATE.head(20)
```
### Sort DataFrame by Date
When working with time series data, it's good to sort it by date
```
# Sort DataFrame in date order
df.sort_values(by=["DATE"], inplace=True, ascending=True)
df.DATE.head(20)
```
### Make a copy of the original DataFrame
We make a copy of the original dataframe so when we manipulate the copy, we've still got our original data.
```
# Make a copy of the original DataFrame to perform edits on
df_tmax = df.copy()
```
# Predicting TMAX
### Add datetime parameters for date column
```
df_tmax["Year"] = df_tmax.DATE.dt.year
df_tmax["Month"] = df_tmax.DATE.dt.month
df_tmax["Day"] = df_tmax.DATE.dt.day
df_tmax.head().T
# Now that we've enriched our DataFrame with date time features, we can remove DATE and RAIN column
df_tmax.drop("DATE", axis=1, inplace=True)
df_tmax.drop("RAIN", axis=1, inplace=True)
# Check the values of different columns
df_tmax
df_tmax.head()
len(df_tmax)
df_tmax.columns
df_tmax = df_tmax.drop(['TMIN', 'HUMIDITY', 'PRCP', 'PRESSURE'], axis=1)
df_tmax
```
# Model Creation
```
# Let's build a machine learning model
%%time
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_jobs=-1,
random_state= 42)
model.fit(df_tmax.drop("TMAX", axis=1), df_tmax["TMAX"])
# Score the model
model_score= model.score(df_tmax.drop("TMAX", axis=1), df_tmax["TMAX"])
print(f'Model score is: {model_score*100:.2f}')
```
**Splitting data into train and validation sets**
```
df_tmax.Year
df_tmax.Year.value_counts()
# Split data into train and validation
df_val = df_tmax[df_tmax.Year == 2008 ]
df_train = df_tmax[df_tmax.Year != 2008]
len(df_val), len(df_train)
# Splitting data into X and y
X_train, y_train = df_train.drop("TMAX", axis=1), df_train.TMAX
X_valid, y_valid = df_val.drop("TMAX", axis=1), df_val.TMAX
X_train.shape, y_train.shape, X_valid.shape, y_valid.shape
y_train
```
**Building an evaluation function**
```
# Create evaluation function (the competition uses RMSLE)
from sklearn.metrics import mean_squared_log_error, mean_absolute_error, r2_score
def rmsle(y_test, y_preds):
"""
Calculates root mean squared log error between prediction and true labels
"""
return np.sqrt(mean_squared_log_error(y_test, y_preds))
# Create function to evaluate model on a few different Levels
def show_score(model):
train_preds = model.predict(X_train)
val_preds = model.predict(X_valid)
scores = {"Training MAE": mean_absolute_error(y_train, train_preds),
"Valid MAE": mean_absolute_error(y_valid, val_preds),
"Training RMSLE": rmsle(y_train, train_preds),
"Valid RMSLE": rmsle(y_valid, val_preds),
"Training R^2-Score": r2_score(y_train, train_preds),
"Valid R^2-Score": r2_score(y_valid, val_preds)}
return scores
```
---
**Testing our model on a subset (to tune the hyperparameters)**
This process takes a long time to complete:
* %%time
* model = RandomForestRegressor(n_jobs=-1,
random_state=42)
* model.fit(X_train, y_train)
---
---
```
# Because the length of the X_train is really high
print(f'Length of the X_train set: {len(X_train)}')
# Change max_samples value to make the process faster.
model = RandomForestRegressor(n_jobs=-1,
random_state=42,
max_samples=1000)
%%time
# Cutting down on the max number of samples each estimator can see improves training time
model.fit(X_train, y_train)
print(f'the model is {(X_train.shape[0]) * 100 / 1000000} times faster')
show_score(model)
```
### Hyperparameter tuning with RandomizedSearchCV
```
%%time
from sklearn.model_selection import RandomizedSearchCV
# Different RandomForestRegressor hyperparameters
rf_gird = {"n_estimators": np.arange(10, 100, 10),
"max_depth": [None, 3, 5, 10],
"min_samples_split": np.arange(2, 20, 2),
"min_samples_leaf": np.arange(1, 20, 2),
"max_features": [0.5, 1, "sqrt", "auto"],
"max_samples": [1000]}
# Instantiate RandomizedSearchCV model
rs_model = RandomizedSearchCV(RandomForestRegressor(n_jobs=-1,
random_state=42),
param_distributions=rf_gird,
n_iter=2,
cv=5,
verbose=True)
# Fit the RandomizedSearchCV model
rs_model.fit(X_train, y_train)
# Find the best model's hyperparameters
rs_model.best_params_
# Evaluate the RandomizedSearchCV model
show_score(rs_model)
```
### Train a model with the best hyperparameters
**Note:** These were found after 100 iterations of RandomizedSearchCV
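A sketch of how such a longer search could be reproduced with the grid defined above (`rf_gird`); `n_iter=100` samples 100 candidate combinations and therefore takes much longer than the quick `n_iter=2` run:
```
# Hypothetical longer search (slow); same grid and training data as above
rs_model_100 = RandomizedSearchCV(RandomForestRegressor(n_jobs=-1, random_state=42),
                                  param_distributions=rf_gird,
                                  n_iter=100,
                                  cv=5,
                                  verbose=True)
rs_model_100.fit(X_train, y_train)
rs_model_100.best_params_
```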
```
%%time
# Model with ideal hyperparameter tuning
ideal_model = RandomForestRegressor(n_estimators=40,
min_samples_leaf=1,
min_samples_split=14,
max_features=0.5,
n_jobs=-1,
max_samples=None,
random_state=42)
# Fit the model
ideal_model.fit(X_train, y_train)
# Score for ideal_model (trained on all the data)
show_score(ideal_model)
# Scores on rs_model (Only trained on ~1,000 samples)
show_score(rs_model)
```
### Make Prediction on test data
**Note**: the model only accepts numeric input with no missing values; if the data had missing values or columns needing numerical conversion, those issues would have to be fixed before making predictions (this dataset has neither problem).
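If such issues did appear in new data, a minimal clean-up sketch might look like this (hypothetical; not needed for this dataset):
```
# Hypothetical clean-up before prediction: fill numeric gaps with column medians
X_valid_clean = X_valid.fillna(X_valid.median())
# and make sure every feature column is numeric
X_valid_clean = X_valid_clean.apply(pd.to_numeric, errors="coerce")
```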
```
test_preds = ideal_model.predict(X_valid)
test_preds
plt.figure(figsize=(12,6))
plt.scatter(y_valid, test_preds)
#Make a line
plt.figure(figsize=(12,6))
plt.scatter(y_valid, test_preds)
plt.plot(y_valid, y_valid, 'r')
```
# **Custom Data Predictions**
* We will predict TMAX for a new date
Put Year, Month and Day below
```
# Here 2020 is the year, 12 is the month and 11 is the day of the date
New_year_to_predict_tmax = [[2020,12,11]]
Custom_tmax_preds = ideal_model.predict(New_year_to_predict_tmax)
print(f' Predicted Maximum Temperature (TMAX) is: {Custom_tmax_preds[0]:.2f}°C' )
```
# Predicting TMIN
```
df_tmin = df.copy()
df_tmin
```
## Add datetime parameters for date column
```
df_tmin["Year"] = df_tmin.DATE.dt.year
df_tmin["Month"] = df_tmin.DATE.dt.month
df_tmin["Day"] = df_tmin.DATE.dt.day
df_tmin.head().T
# Now that we've enriched our DataFrame with date time features, we can remove DATE and RAIN column
df_tmin.drop("DATE", axis=1, inplace=True)
df_tmin.drop("RAIN", axis=1, inplace=True)
# Check the values of different columns
df_tmin
df_tmin.head()
len(df_tmin)
df_tmin.columns
df_tmin = df_tmin.drop(['TMAX', 'HUMIDITY', 'PRCP', 'PRESSURE'], axis=1)
df_tmin
```
# Model Creation
```
# Let's build a machine learning model
%%time
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_jobs=-1,
random_state= 42)
model.fit(df_tmin.drop("TMIN", axis=1), df_tmin["TMIN"])
# Score the model
model_score= model.score(df_tmin.drop("TMIN", axis=1), df_tmin["TMIN"])
print(f'Model score is: {model_score*100:.2f}')
df_tmin.Year
df_tmin.Year.value_counts()
# Split data into train and validation
df_val = df_tmin[df_tmin.Year == 2008 ]
df_train = df_tmin[df_tmin.Year != 2008]
len(df_val), len(df_train)
# Splitting data into X and y
X_train, y_train = df_train.drop("TMIN", axis=1), df_train.TMIN
X_valid, y_valid = df_val.drop("TMIN", axis=1), df_val.TMIN
X_train.shape, y_train.shape, X_valid.shape, y_valid.shape
y_train
```
**Building an evaluation function**
```
# Create evaluation function (the competition uses RMSLE)
from sklearn.metrics import mean_squared_log_error, mean_absolute_error, r2_score
def rmsle(y_test, y_preds):
"""
Calculates root mean squared log error between prediction and true labels
"""
return np.sqrt(mean_squared_log_error(y_test, y_preds))
# Create function to evaluate model on a few different Levels
def show_score(model):
train_preds = model.predict(X_train)
val_preds = model.predict(X_valid)
scores = {"Training MAE": mean_absolute_error(y_train, train_preds),
"Valid MAE": mean_absolute_error(y_valid, val_preds),
"Training RMSLE": rmsle(y_train, train_preds),
"Valid RMSLE": rmsle(y_valid, val_preds),
"Training R^2-Score": r2_score(y_train, train_preds),
"Valid R^2-Score": r2_score(y_valid, val_preds)}
return scores
# Because the length of the X_train is really high
print(f'Length of the X_train set: {len(X_train)}')
# Change max_samples value to make the process faster.
model = RandomForestRegressor(n_jobs=-1,
random_state=42,
max_samples=1000)
%%time
# Cutting down on the max number of samples each estimator can see improves training time
model.fit(X_train, y_train)
print(f'the model is {(X_train.shape[0]) * 100 / 1000000} times faster')
%%time
# Model with ideal hyperparameter tuning
ideal_model_tmin = RandomForestRegressor(n_estimators=40,
min_samples_leaf=1,
min_samples_split=14,
max_features=0.5,
n_jobs=-1,
max_samples=None,
random_state=42)
# Fit the model
ideal_model_tmin.fit(X_train, y_train)
# Score for ideal_model (trained on all the data)
show_score(ideal_model_tmin)
```
### Make Prediction on test data
```
test_preds_tmin = ideal_model_tmin.predict(X_valid)
test_preds_tmin
plt.figure(figsize=(12,6))
plt.scatter(y_valid, test_preds_tmin)
#Make a line
plt.figure(figsize=(12,6))
plt.scatter(y_valid, test_preds_tmin)
plt.plot(y_valid, y_valid, 'r')
```
# **Custom Data Predictions**
* We will predict TMIN for a new date
Put Year, Month and Day below
```
# Here 2020 is the year, 12 is the month and 11 is the day of the date
New_year_to_predict_tmin = [[2020,12,11]]
Custom_tmin_preds = ideal_model_tmin.predict(New_year_to_predict_tmin)
print(f' Predicted Minimum Temperature (TMIN) is: {Custom_tmin_preds[0]:.2f}°C' )
```
# Predicting HUMIDITY
```
df_humid = df.copy()
df_humid
```
## Add datetime parameters for date column
```
df_humid["Year"] = df_humid.DATE.dt.year
df_humid["Month"] = df_humid.DATE.dt.month
df_humid["Day"] = df_humid.DATE.dt.day
df_humid.head().T
df_humid.columns
df_humid = df_humid.drop(['DATE', 'TMAX', 'TMIN', 'PRCP', 'PRESSURE', 'RAIN'], axis=1)
df_humid
```
# Model Creation
```
# Let's build a machine learning model
%%time
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_jobs=-1,
random_state= 42)
model.fit(df_humid.drop("HUMIDITY", axis=1), df_humid["HUMIDITY"])
# Score the model
model_score= model.score(df_humid.drop("HUMIDITY", axis=1), df_humid["HUMIDITY"])
print(f'Model score is: {model_score*100:.2f}')
# Split data into train and validation
df_val = df_humid[df_humid.Year == 2008 ]
df_train = df_humid[df_humid.Year != 2008]
len(df_val), len(df_train)
# Splitting data into X and y
X_train, y_train = df_train.drop("HUMIDITY", axis=1), df_train.HUMIDITY
X_valid, y_valid = df_val.drop("HUMIDITY", axis=1), df_val.HUMIDITY
X_train.shape, y_train.shape, X_valid.shape, y_valid.shape
```
### Building an evaluation function
```
# Create evaluation function (the competition uses RMSLE)
from sklearn.metrics import mean_squared_log_error, mean_absolute_error, r2_score
def rmsle(y_test, y_preds):
"""
Calculates root mean squared log error between prediction and true labels
"""
return np.sqrt(mean_squared_log_error(y_test, y_preds))
# Create function to evaluate model on a few different Levels
def show_score(model):
train_preds = model.predict(X_train)
val_preds = model.predict(X_valid)
scores = {"Training MAE": mean_absolute_error(y_train, train_preds),
"Valid MAE": mean_absolute_error(y_valid, val_preds),
"Training RMSLE": rmsle(y_train, train_preds),
"Valid RMSLE": rmsle(y_valid, val_preds),
"Training R^2-Score": r2_score(y_train, train_preds),
"Valid R^2-Score": r2_score(y_valid, val_preds)}
return scores
# Because the length of the X_train is really high
print(f'Length of the X_train set: {len(X_train)}')
# Change max_samples value to make the process faster.
model = RandomForestRegressor(n_jobs=-1,
random_state=42,
max_samples=1000)
%%time
# Cutting down on the max number of samples each estimator can see improves training time
model.fit(X_train, y_train)
print(f'the model is {(X_train.shape[0]) * 100 / 1000000} times faster')
%%time
# Model with ideal hyperparameter tuning
ideal_model_humid = RandomForestRegressor(n_estimators=40,
min_samples_leaf=1,
min_samples_split=14,
max_features=0.5,
n_jobs=-1,
max_samples=None,
random_state=42)
# Fit the model
ideal_model_humid.fit(X_train, y_train)
# Score for ideal_model (trained on all the data)
show_score(ideal_model_humid)
```
### Make Prediction on test data
```
test_preds_humid = ideal_model_humid.predict(X_valid)
test_preds_humid
plt.figure(figsize=(12,6))
plt.scatter(y_valid, test_preds_humid)
#Make a line
plt.figure(figsize=(12,6))
plt.scatter(y_valid, test_preds_humid)
plt.plot(y_valid, y_valid, 'r')
```
# **Custom Data Predictions**
* We will predict HUMIDITY for a new date
> Put Year, Month and Day below
```
# Here 2020 is the year, 12 is the month and 11 is the day of the date
New_year_to_predict_humid = [[2020,12,11]]
Custom_humid_preds = ideal_model_humid.predict(New_year_to_predict_humid)
print(f' Predicted HUMIDITY is: {Custom_humid_preds[0]:.2f} %')
```
# Predicting precipitation
```
df_prcp = df.copy()
df_prcp
```
## Add datetime parameters for date column
```
df_prcp["Year"] = df_prcp.DATE.dt.year
df_prcp["Month"] = df_prcp.DATE.dt.month
df_prcp["Day"] = df_prcp.DATE.dt.day
df_prcp
df_prcp.columns
df_prcp = df_prcp.drop(['DATE', 'PRESSURE', 'RAIN'], axis=1)
df_prcp
```
Let's reorder the columns to make the DataFrame easier to read
```
# Reorder the columns so the target 'PRCP' appears at the end of the DataFrame
df_prcp_ref = df_prcp[['Day','Month','Year','TMAX','TMIN','HUMIDITY','PRCP']]
df_prcp_ref.head()
```
# Model Creation
```
# Let's build a machine learning model
%%time
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_jobs=-1,
random_state= 42)
model.fit(df_prcp_ref.drop("PRCP", axis=1), df_prcp_ref["PRCP"])
# Score the model
model_score= model.score(df_prcp_ref.drop("PRCP", axis=1), df_prcp_ref["PRCP"])
print(f'Model score is: {model_score*100:.2f}')
# Split data into train and validation
df_val = df_prcp_ref[df_prcp_ref.Year == 2008 ]
df_train = df_prcp_ref[df_prcp_ref.Year != 2008]
len(df_val), len(df_train)
# Splitting data into X and y
X_train, y_train = df_train.drop("PRCP", axis=1), df_train.PRCP
X_valid, y_valid = df_val.drop("PRCP", axis=1), df_val.PRCP
X_train.shape, y_train.shape, X_valid.shape, y_valid.shape
```
### Building an evaluation function
```
# Create evaluation function (the competition uses RMSLE)
from sklearn.metrics import mean_squared_log_error, mean_absolute_error, r2_score
def rmsle(y_test, y_preds):
"""
Calculates root mean squared log error between prediction and true labels
"""
return np.sqrt(mean_squared_log_error(y_test, y_preds))
# Create function to evaluate model on a few different Levels
def show_score(model):
train_preds = model.predict(X_train)
val_preds = model.predict(X_valid)
scores = {"Training MAE": mean_absolute_error(y_train, train_preds),
"Valid MAE": mean_absolute_error(y_valid, val_preds),
"Training RMSLE": rmsle(y_train, train_preds),
"Valid RMSLE": rmsle(y_valid, val_preds),
"Training R^2-Score": r2_score(y_train, train_preds),
"Valid R^2-Score": r2_score(y_valid, val_preds)}
return scores
# Because the length of the X_train is really high
print(f'Length of the X_train set: {len(X_train)}')
# Change max_samples value to make the process faster.
model = RandomForestRegressor(n_jobs=-1,
random_state=42,
max_samples=1000)
%%time
# Cutting down on the max number of samples each estimator can see improves training time
model.fit(X_train, y_train)
%%time
# Model with ideal hyperparameter tuning
ideal_model_prcp = RandomForestRegressor(n_estimators=40,
min_samples_leaf=1,
min_samples_split=14,
max_features=0.5,
n_jobs=-1,
max_samples=None,
random_state=42)
# Fit the model
ideal_model_prcp.fit(X_train, y_train)
# Score for ideal_model (trained on all the data)
show_score(ideal_model_prcp)
```
### Make Prediction on test data
```
test_preds_prcp = ideal_model_prcp.predict(X_valid)
test_preds_prcp
plt.figure(figsize=(12,6))
plt.scatter(y_valid, test_preds_prcp)
#Make a line
plt.figure(figsize=(12,6))
plt.scatter(y_valid, test_preds_prcp)
plt.plot(y_valid, y_valid, 'r')
```
# **Custom Data Predictions**
* We will predict precipitation for a new date
> Put Day, Month, Year, TMAX, TMIN, and HUMIDITY below
```
# Input order matches the training columns: Day=11, Month=12, Year=2020, TMAX=31.25, TMIN=22.81, HUMIDITY=53.44
New_year_to_predict_prcp = [[11,12,2020,31.25,22.81,53.44]]
Custom_prcp_preds = ideal_model_prcp.predict(New_year_to_predict_prcp)
print(f' Predicted precipitation is: {Custom_prcp_preds[0]:.2f} ')
```
|
github_jupyter
|
# Regular EDA (exploratory data analysis) and plotting libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# we want our plots to appear inside the notebook
%matplotlib inline
from google.colab import drive
drive.mount('/content/drive')
# Import training and validation sets
df = pd.read_csv("/content/drive/MyDrive/ml/data/weather.csv",
low_memory=False)
df.info()
# Finding how many missing values are there
df.isna().sum()
# Columns of the data set
df.columns
# Plotting saledate vs SalePrice for the first 100 samples
fig, ax = plt.subplots(figsize=(10,8))
ax.scatter(df["DATE"][:1000], df["PRCP"][:1000]);
df.DATE[:1000]
# Plotting PRCP in histogram
df.PRCP.plot.hist();
# Import data again but this time parse dates
df = pd.read_csv("/content/drive/MyDrive/ml/data/weather.csv",
low_memory=False,
parse_dates=["DATE"])
df.DATE.dtype
df.DATE[:1000]
fig, ax = plt.subplots(figsize=(10,8))
ax.scatter(df["DATE"][:1000], df["PRCP"][:1000]);
df.head()
df.head().T
df.DATE.head(20)
# Sort DataFrame in date order
df.sort_values(by=["DATE"], inplace=True, ascending=True)
df.DATE.head(20)
# Make a copy of the original DataFrame to perform edits on
df_tmax = df.copy()
df_tmax["Year"] = df_tmax.DATE.dt.year
df_tmax["Month"] = df_tmax.DATE.dt.month
df_tmax["Day"] = df_tmax.DATE.dt.day
df_tmax.head().T
# Now that we've enriched our DataFrame with date time features, we can remove DATE and RAIN column
df_tmax.drop("DATE", axis=1, inplace=True)
df_tmax.drop("RAIN", axis=1, inplace=True)
# Check the values of different columns
df_tmax
df_tmax.head()
len(df_tmax)
df_tmax.columns
df_tmax = df_tmax.drop(['TMIN', 'HUMIDITY', 'PRCP', 'PRESSURE'], axis=1)
df_tmax
# Let's build a machine learning model
%%time
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_jobs=-1,
random_state= 42)
model.fit(df_tmax.drop("TMAX", axis=1), df_tmax["TMAX"])
# Score the model
model_score= model.score(df_tmax.drop("TMAX", axis=1), df_tmax["TMAX"])
print(f'Model score is: {model_score*100:.2f}')
df_tmax.Year
df_tmax.Year.value_counts()
# Split data into train and validation
df_val = df_tmax[df_tmax.Year == 2008 ]
df_train = df_tmax[df_tmax.Year != 2008]
len(df_val), len(df_train)
# Splitting data into X and y
X_train, y_train = df_train.drop("TMAX", axis=1), df_train.TMAX
X_valid, y_valid = df_val.drop("TMAX", axis=1), df_val.TMAX
X_train.shape, y_train.shape, X_valid.shape, y_valid.shape
y_train
# Create evaluation function (the competition uses RMSLE)
from sklearn.metrics import mean_squared_log_error, mean_absolute_error, r2_score
def rmsle(y_test, y_preds):
"""
Calculates root mean squared log error between prediction and true labels
"""
return np.sqrt(mean_squared_log_error(y_test, y_preds))
# Create function to evaluate model on a few different Levels
def show_score(model):
train_preds = model.predict(X_train)
val_preds = model.predict(X_valid)
scores = {"Training MAE": mean_absolute_error(y_train, train_preds),
"Valid MAE": mean_absolute_error(y_valid, val_preds),
"Training RMSLE": rmsle(y_train, train_preds),
"Valid RMSLE": rmsle(y_valid, val_preds),
"Training R^2-Score": r2_score(y_train, train_preds),
"Valid R^2-Score": r2_score(y_valid, val_preds)}
return scores
# Because the length of the X_train is really high
print(f'Length of the X_train set: {len(X_train)}')
# Change max_samples value to make the process faster.
model = RandomForestRegressor(n_jobs=-1,
random_state=42,
max_samples=1000)
%%time
# Cutting down on the maxx number of samples each estimator can see improves training time
model.fit(X_train, y_train)
print(f'the model is {(X_train.shape[0]) * 100 / 1000000} times faster')
show_score(model)
%%time
from sklearn.model_selection import RandomizedSearchCV
# Different RandomForestRegressor hyperparameters
rf_gird = {"n_estimators": np.arange(10, 100, 10),
"max_depth": [None, 3, 5, 10],
"min_samples_split": np.arange(2, 20, 2),
"min_samples_leaf": np.arange(1, 20, 2),
"max_features": [0.5, 1, "sqrt", "auto"],
"max_samples": [1000]}
# Instantiate RandomizedSearchCV model
rs_model = RandomizedSearchCV(RandomForestRegressor(n_jobs=-1,
random_state=42),
param_distributions=rf_gird,
n_iter=2,
cv=5,
verbose=True)
# Fit the RandomizedSearchCV model
rs_model.fit(X_train, y_train)
# Find the best model's hyperparameters
rs_model.best_params_
# Evaluate the RandomizedSearchCV model
show_score(rs_model)
%%time
# Model with ideal hyperparameter tuning
ideal_model = RandomForestRegressor(n_estimators=40,
min_samples_leaf=1,
min_samples_split=14,
max_features=0.5,
n_jobs=-1,
max_samples=None,
random_state=42)
# Fit the model
ideal_model.fit(X_train, y_train)
# Score for idea_model (trained on all the data)
show_score(ideal_model)
# Scores on rs_model (Only trained on ~1,000 samples)
show_score(rs_model)
test_preds = ideal_model.predict(X_valid)
test_preds
plt.figure(figsize=(12,6))
plt.scatter(y_valid, test_preds)
#Make a line
plt.figure(figsize=(12,6))
plt.scatter(y_valid, test_preds)
plt.plot(y_valid, y_valid, 'r')
# Here 2020 is the year, 12 is the month and 11 is the day of the date
New_year_to_predict_tmax = [[2020,12,11]]
Custom_tmax_preds = ideal_model.predict(New_year_to_predict_tmax)
print(f' Predicted Maximum Temperature (TMAX) is: {Custom_tmax_preds[0]:.2f}ยฐC' )
df_tmin = df.copy()
df_tmin
df_tmin["Year"] = df_tmin.DATE.dt.year
df_tmin["Month"] = df_tmin.DATE.dt.month
df_tmin["Day"] = df_tmin.DATE.dt.day
df_tmin.head().T
# Now that we've enriched our DataFrame with date time features, we can remove DATE and RAIN column
df_tmin.drop("DATE", axis=1, inplace=True)
df_tmin.drop("RAIN", axis=1, inplace=True)
# Check the values of different columns
df_tmin
df_tmin.head()
len(df_tmin)
df_tmin.columns
df_tmin = df_tmin.drop(['TMAX', 'HUMIDITY', 'PRCP', 'PRESSURE'], axis=1)
df_tmin
# Let's build a machine learning model
%%time
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_jobs=-1,
random_state= 42)
model.fit(df_tmin.drop("TMIN", axis=1), df_tmin["TMIN"])
# Score the model
model_score= model.score(df_tmin.drop("TMIN", axis=1), df_tmin["TMIN"])
print(f'Model score is: {model_score*100:.2f}')
df_tmin.Year
df_tmin.Year.value_counts()
# Split data into train and validation
df_val = df_tmin[df_tmin.Year == 2008 ]
df_train = df_tmin[df_tmin.Year != 2008]
len(df_val), len(df_train)
# Splitting data into X and y
X_train, y_train = df_train.drop("TMIN", axis=1), df_train.TMIN
X_valid, y_valid = df_val.drop("TMIN", axis=1), df_val.TMIN
X_train.shape, y_train.shape, X_valid.shape, y_valid.shape
y_train
# Create evaluation function (the competition uses RMSLE)
from sklearn.metrics import mean_squared_log_error, mean_absolute_error, r2_score
def rmsle(y_test, y_preds):
"""
Calculates root mean squared log error between prediction and true labels
"""
return np.sqrt(mean_squared_log_error(y_test, y_preds))
# Create function to evaluate model on a few different Levels
def show_score(model):
train_preds = model.predict(X_train)
val_preds = model.predict(X_valid)
scores = {"Training MAE": mean_absolute_error(y_train, train_preds),
"Valid MAE": mean_absolute_error(y_valid, val_preds),
"Training RMSLE": rmsle(y_train, train_preds),
"Valid RMSLE": rmsle(y_valid, val_preds),
"Training R^2-Score": r2_score(y_train, train_preds),
"Valid R^2-Score": r2_score(y_valid, val_preds)}
return scores
# Because the length of the X_train is really high
print(f'Length of the X_train set: {len(X_train)}')
# Change max_samples value to make the process faster.
model = RandomForestRegressor(n_jobs=-1,
random_state=42,
max_samples=1000)
%%time
# Cutting down on the maxx number of samples each estimator can see improves training time
model.fit(X_train, y_train)
print(f'the model is {(X_train.shape[0]) * 100 / 1000000} times faster')
%%time
# Model with ideal hyperparameter tuning
ideal_model_tmin = RandomForestRegressor(n_estimators=40,
min_samples_leaf=1,
min_samples_split=14,
max_features=0.5,
n_jobs=-1,
max_samples=None,
random_state=42)
# Fit the model
ideal_model_tmin.fit(X_train, y_train)
# Score for idea_model (trained on all the data)
show_score(ideal_model_tmin)
test_preds_tmin = ideal_model_tmin.predict(X_valid)
test_preds_tmin
plt.figure(figsize=(12,6))
plt.scatter(y_valid, test_preds_tmin)
#Make a line
plt.figure(figsize=(12,6))
plt.scatter(y_valid, test_preds_tmin)
plt.plot(y_valid, y_valid, 'r')
# Here 2020 is the year, 12 is the month and 11 is the day of the date
New_year_to_predict_tmin = [[2020,12,11]]
Custom_tmin_preds = ideal_model_tmin.predict(New_year_to_predict_tmin)
print(f' Predicted Minimum Temperature (TMIN) is: {Custom_tmin_preds[0]:.2f}ยฐC' )
df_humid = df.copy()
df_humid
df_humid["Year"] = df_humid.DATE.dt.year
df_humid["Month"] = df_humid.DATE.dt.month
df_humid["Day"] = df_humid.DATE.dt.day
df_humid.head().T
df_humid.columns
df_humid = df_humid.drop(['DATE', 'TMAX', 'TMIN', 'PRCP', 'PRESSURE', 'RAIN'], axis=1)
df_humid
# Let's build a machine learning model
%%time
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_jobs=-1,
random_state= 42)
model.fit(df_humid.drop("HUMIDITY", axis=1), df_humid["HUMIDITY"])
# Score the model
model_score= model.score(df_humid.drop("HUMIDITY", axis=1), df_humid["HUMIDITY"])
print(f'Model score is: {model_score*100:.2f}')
# Split data into train and validation
df_val = df_humid[df_humid.Year == 2008 ]
df_train = df_humid[df_humid.Year != 2008]
len(df_val), len(df_train)
# Splitting data into X and y
X_train, y_train = df_train.drop("HUMIDITY", axis=1), df_train.HUMIDITY
X_valid, y_valid = df_val.drop("HUMIDITY", axis=1), df_val.HUMIDITY
X_train.shape, y_train.shape, X_valid.shape, y_valid.shape
# Create evaluation function (the competition uses RMSLE)
from sklearn.metrics import mean_squared_log_error, mean_absolute_error, r2_score
def rmsle(y_test, y_preds):
"""
Calculates root mean squared log error between prediction and true labels
"""
return np.sqrt(mean_squared_log_error(y_test, y_preds))
# Create function to evaluate model on a few different Levels
def show_score(model):
train_preds = model.predict(X_train)
val_preds = model.predict(X_valid)
scores = {"Training MAE": mean_absolute_error(y_train, train_preds),
"Valid MAE": mean_absolute_error(y_valid, val_preds),
"Training RMSLE": rmsle(y_train, train_preds),
"Valid RMSLE": rmsle(y_valid, val_preds),
"Training R^2-Score": r2_score(y_train, train_preds),
"Valid R^2-Score": r2_score(y_valid, val_preds)}
return scores
# Because the length of the X_train is really high
print(f'Length of the X_train set: {len(X_train)}')
# Change max_samples value to make the process faster.
model = RandomForestRegressor(n_jobs=-1,
random_state=42,
max_samples=1000)
%%time
# Cutting down on the maxx number of samples each estimator can see improves training time
model.fit(X_train, y_train)
print(f'the model is {(X_train.shape[0]) * 100 / 1000000} times faster')
%%time
# Model with ideal hyperparameter tuning
ideal_model_humid = RandomForestRegressor(n_estimators=40,
min_samples_leaf=1,
min_samples_split=14,
max_features=0.5,
n_jobs=-1,
max_samples=None,
random_state=42)
# Fit the model
ideal_model_humid.fit(X_train, y_train)
# Score for idea_model (trained on all the data)
show_score(ideal_model_humid)
test_preds_humid = ideal_model_humid.predict(X_valid)
test_preds_humid
plt.figure(figsize=(12,6))
plt.scatter(y_valid, test_preds_humid)
#Make a line
plt.figure(figsize=(12,6))
plt.scatter(y_valid, test_preds_humid)
plt.plot(y_valid, y_valid, 'r')
# Here 2020 is the year, 12 is the month and 11 is the day of the date
New_year_to_predict_humid = [[2020,12,11]]
Custom_humid_preds = ideal_model_humid.predict(New_year_to_predict_humid)
print(f' Predicted HUMIDITY is: {Custom_humid_preds[0]:.2f} %')
df_prcp = df.copy()
df_prcp
df_prcp["Year"] = df_prcp.DATE.dt.year
df_prcp["Month"] = df_prcp.DATE.dt.month
df_prcp["Day"] = df_prcp.DATE.dt.day
df_prcp
df_prcp.columns
df_prcp = df_prcp.drop(['DATE', 'PRESSURE', 'RAIN'], axis=1)
df_prcp
#now 'age' will appear at the end of our df
df_prcp_ref = df_prcp[['Day','Month','Year','TMAX','TMIN','HUMIDITY','PRCP']]
df_prcp_ref.head()
# Let's build a machine learning model
%%time
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_jobs=-1,
random_state= 42)
model.fit(df_prcp_ref.drop("PRCP", axis=1), df_prcp_ref["PRCP"])
# Score the model
model_score= model.score(df_prcp_ref.drop("PRCP", axis=1), df_prcp_ref["PRCP"])
print(f'Model score is: {model_score*100:.2f}')
# Split data into train and validation
df_val = df_prcp_ref[df_prcp_ref.Year == 2008 ]
df_train = df_prcp_ref[df_prcp_ref.Year != 2008]
len(df_val), len(df_train)
# Splitting data into X and y
X_train, y_train = df_train.drop("PRCP", axis=1), df_train.PRCP
X_valid, y_valid = df_val.drop("PRCP", axis=1), df_val.PRCP
X_train.shape, y_train.shape, X_valid.shape, y_valid.shape
# Create evaluation function (the competition uses RMSLE)
from sklearn.metrics import mean_squared_log_error, mean_absolute_error, r2_score
def rmsle(y_test, y_preds):
"""
Calculates root mean squared log error between prediction and true labels
"""
return np.sqrt(mean_squared_log_error(y_test, y_preds))
# Create function to evaluate model on a few different Levels
def show_score(model):
train_preds = model.predict(X_train)
val_preds = model.predict(X_valid)
scores = {"Training MAE": mean_absolute_error(y_train, train_preds),
"Valid MAE": mean_absolute_error(y_valid, val_preds),
"Training RMSLE": rmsle(y_train, train_preds),
"Valid RMSLE": rmsle(y_valid, val_preds),
"Training R^2-Score": r2_score(y_train, train_preds),
"Valid R^2-Score": r2_score(y_valid, val_preds)}
return scores
# Because the length of the X_train is really high
print(f'Length of the X_train set: {len(X_train)}')
# Change max_samples value to make the process faster.
model = RandomForestRegressor(n_jobs=-1,
random_state=42,
max_samples=1000)
%%time
# Cutting down on the maxx number of samples each estimator can see improves training time
model.fit(X_train, y_train)
%%time
# Model with ideal hyperparameter tuning
ideal_model_prcp = RandomForestRegressor(n_estimators=40,
min_samples_leaf=1,
min_samples_split=14,
max_features=0.5,
n_jobs=-1,
max_samples=None,
random_state=42)
# Fit the model
ideal_model_prcp.fit(X_train, y_train)
# Score for idea_model (trained on all the data)
show_score(ideal_model_prcp)
test_preds_prcp = ideal_model_prcp.predict(X_valid)
test_preds_prcp
plt.figure(figsize=(12,6))
plt.scatter(y_valid, test_preds_prcp)
#Make a line
plt.figure(figsize=(12,6))
plt.scatter(y_valid, test_preds_prcp)
plt.plot(y_valid, y_valid, 'r')
# Here 2020 is the year, 12 is the month and 11 is the day of the date
New_year_to_predict_prcp = [[11,12,2020,31.25,22.81,53.44]]
Custom_prcp_preds = ideal_model_prcp.predict(New_year_to_predict_prcp)
print(f' Predicted precipitation is: {Custom_prcp_preds[0]:.2f} ')
| 0.747063 | 0.892093 |
# Predictive Modelling for Classification
## Import libraries
```
import sys
# adding to the path variables the one folder higher (locally, not changing system variables)
sys.path.append("..")
from datetime import datetime
import pandas as pd
import numpy as np
from scripts.helper import reduce_mem_usage
from pycaret.classification import *
from pycaret.utils import check_metric
from pycaret.datasets import get_data
import pickle
pd.set_option("display.max_columns", 120)
```
## Import data
```
# Create datetime
today = datetime.today()
d1 = today.strftime("%d%m%Y")
# Test dataset from pycaret library for classification modelling
# Comment it out if you're using the Capstone dataset!
#dataset_pycaret = get_data('credit')
dataset = pd.read_csv('data/feat_train_v2.csv')
dataset_test = pd.read_csv('data/feat_test_v2.csv')
#dataset_pycaret.info()
```
## Preparation for Modelling
```
numerical_cols = np.load("data/Numerical_Columns.npy")
categorical_cols = np.load("data/Categorical_Columns.npy")
type(numerical_cols)
numerical_cols = numerical_cols.tolist()
categorical_cols = categorical_cols.tolist()
type(numerical_cols)
# Create target for classification model
class_train = dataset[categorical_cols+numerical_cols]
class_train['Target'] = dataset['totals.transactionRevenue'].apply(lambda x: 0 if x == 0 else 1)
class_test = dataset_test[categorical_cols+numerical_cols]
class_test['Target'] = dataset_test['totals.transactionRevenue'].apply(lambda x: 0 if x == 0 else 1)
```
### Removing some zeros!
```
totals_transactionRevenue_zero = class_train[class_train['Target'] == 0].sample(frac=0.25, random_state=123)
totals_transactionRevenue_nonzero = class_train[class_train['Target'] != 0]
class_train = pd.concat([totals_transactionRevenue_zero, totals_transactionRevenue_nonzero], axis=0)
class_train.head()
```
## Binary Classification
Binary classification is a supervised machine learning technique where the goal is to predict categorical class labels which are discrete and unordered, such as Pass/Fail, Positive/Negative, Default/Not-Default. A few real-world use cases for classification are listed below:
- Medical testing to determine if a patient has a certain disease or not - the classification property is the presence of the disease.
- A "pass or fail" test method or quality control in factories, i.e. deciding if a specification has or has not been met - a go/no-go classification.
- Information retrieval, namely deciding whether a page or an article should be in the result set of a search or not - the classification property is the relevance of the article, or the usefulness to the user.
In order to demonstrate the predict_model() function on unseen data, a sample of xxxxx records has been withheld from the original dataset to be used for predictions. This should not be confused with a train/test split as this particular split is performed to simulate a real life scenario. Another way to think about this is that these xxxxx records are not available at the time when the machine learning experiment was performed.
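In this notebook the unseen data is simply the separate test file; if only a single dataset were available, a withheld sample could be created along these lines (a commented-out sketch with a hypothetical 95/5 split):
```
# Hypothetical 95/5 split to simulate unseen data from a single dataset
# data = class_train.sample(frac=0.95, random_state=123)
# data_unseen = class_train.drop(data.index)
# data.reset_index(inplace=True, drop=True)
# data_unseen.reset_index(inplace=True, drop=True)
```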
```
data_unseen = class_test
#data.reset_index(inplace=True, drop=True)
#data_unseen.reset_index(inplace=True, drop=True)
print('Data for Modeling: ' + str(class_train.shape))
print('Unseen Data For Predictions: ' + str(data_unseen.shape))
```
## 1.0 Setting up environment in PyCaret
The setup() function initializes the environment in pycaret and creates the transformation pipeline to prepare the data for modeling and deployment. setup() must be called before executing any other function in pycaret. It takes two mandatory parameters: a pandas dataframe and the name of the target column. All other parameters are optional and are used to customize the pre-processing pipeline.
When setup() is executed, PyCaret's inference algorithm will automatically infer the data types for all features based on certain properties. The data types should be inferred correctly, but this is not always the case. To account for this, PyCaret displays a table containing the features and their inferred data types after setup() is executed. If all of the data types are correctly identified, enter can be pressed to continue, or quit can be typed to end the experiment. Ensuring that the data types are correct is of fundamental importance in PyCaret as it automatically performs a few pre-processing tasks which are imperative to any machine learning experiment. These tasks are performed differently for each data type, which means it is very important for them to be correctly configured.
```
print(class_train.info())
# class_train[categorical_cols] = class_train[categorical_cols].astype('category')
# class_test[categorical_cols] = class_test[categorical_cols].astype('category')
exp_clf101 = setup(data = class_train, target = 'Target', session_id=123, data_split_stratify = True, fold_strategy = 'stratifiedkfold', fix_imbalance = True, numeric_features = categorical_cols+numerical_cols)
```
## 2.0 Comparing all models
Comparing all models to evaluate performance is the recommended starting point for modeling once the setup is completed (unless you know exactly what kind of model you need, which is often not the case). This function trains all models in the model library and scores them using stratified cross validation for metric evaluation. The output prints a score grid that shows average Accuracy, AUC, Recall, Precision, F1, Kappa, and MCC across the folds (10 by default) along with training times.
```
models()
```
## Comparing models - Don't run this if you only have one model; skip to the next part
```
# start_time = datetime.now()
# best_model = compare_models(['lightgbm'])
# end_time = datetime.now()
# print('Duration: {}'.format(end_time - start_time))
```
Two simple words of code (not even a line) have trained and evaluated over 15 models using cross validation. The score grid printed above highlights the highest performing metric for comparison purposes only. The grid by default is sorted using 'Accuracy' (highest to lowest) which can be changed by passing the sort parameter. For example compare_models(sort = 'Recall') will sort the grid by Recall instead of Accuracy. If you want to change the fold parameter from the default value of 10 to a different value then you can use the fold parameter. For example compare_models(fold = 5) will compare all models on 5 fold cross validation. Reducing the number of folds will improve the training time. By default, compare_models return the best performing model based on default sort order but can be used to return a list of top N models by using n_select parameter.
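A sketch of those options (left commented out because, like the compare_models cell above, it is slow to run):
```
# Hypothetical usage of the compare_models options described above
# top3 = compare_models(sort='Recall',  # rank by Recall instead of Accuracy
#                       fold=5,         # 5-fold CV instead of the default 10
#                       n_select=3)     # return the three best models
```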
```
#print(best_model)  # best_model is only defined if the compare_models() cell above has been run
```
## 3.0 Create a Model
create_model is the most granular function in PyCaret and is often the foundation behind most of the PyCaret functionalities. As the name suggests this function trains and evaluates a model using cross validation that can be set with fold parameter. The output prints a score grid that shows Accuracy, AUC, Recall, Precision, F1, Kappa and MCC by fold.
There are 18 classifiers available in the model library of PyCaret. To see list of all classifiers either check the docstring or use models function to see the library.
### 3.1 LGBM
```
lgbm = create_model('lightgbm')
```
#### Example for Hyperparameter Tuning on GPU with 3.2 XGBoost Classifier
## 4. Tune a Model
When a model is created using the create_model() function it uses the default hyperparameters to train the model. In order to tune hyperparameters, the tune_model() function is used. This function automatically tunes the hyperparameters of a model using Random Grid Search on a pre-defined search space. The output prints a score grid that shows Accuracy, AUC, Recall, Precision, F1, Kappa, and MCC by fold for the best model. To use the custom search grid, you can pass custom_grid parameter in the tune_model function (KNN tuning below).
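A sketch of tuning with a custom search grid (hypothetical LightGBM parameter ranges; commented out because, like the tune_model call below, it is slow):
```
# Hypothetical custom grid; parameter names follow the LightGBM estimator
# lgbm_grid = {'num_leaves': [31, 63, 127],
#              'learning_rate': [0.01, 0.05, 0.1],
#              'n_estimators': [100, 200, 400]}
# tuned_lgbm = tune_model(lgbm, custom_grid=lgbm_grid, optimize='Recall')
```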
### 4.1 LGBM Tuning
```
#tuned_lgbm = tune_model(lgbm)
#print(tuned_lgbm)  # tuned_lgbm is only defined if the tune_model() line above is uncommented
```
### ...
## 5. Plot a Model
Before model finalization, the plot_model() function can be used to analyze the performance across different aspects such as AUC, confusion_matrix, decision boundary etc. This function takes a trained model object and returns a plot based on the test / hold-out set.
There are 15 different plots available, please see the plot_model() docstring for the list of available plots.
### 5.1 AUC Plot
```
plot_model(lgbm, plot = 'auc')
```
### 5.2 Precision-Recall Curve
```
plot_model(lgbm, plot = 'pr')
```
### 5.3 Feature Importance Plot
```
plot_model(lgbm, plot='feature')
```
### 5.4 Confusion Matrix
```
plot_model(lgbm, plot = 'confusion_matrix')
```
Another way to analyze the performance of models is to use the evaluate_model() function which displays a user interface for all of the available plots for a given model. It internally uses the plot_model() function.
```
#evaluate_model(lgbm)
```
## 6 Predict on test / hold-out Sample
Before finalizing the model, it is advisable to perform one final check by predicting the test/hold-out set and reviewing the evaluation metrics. Now, using our trained model stored in the lgbm variable (or tuned_lgbm if tuning was run), we will predict against the hold-out sample and evaluate the metrics to see if they are materially different from the CV results.
```
predict_model(lgbm) # use tuned_lgbm
```
## 7 Finalize Model for Deployment
Model finalization is the last step in the experiment. A normal machine learning workflow in PyCaret starts with setup(), followed by comparing all models using compare_models() and shortlisting a few candidate models (based on the metric of interest) to perform several modeling techniques such as hyperparameter tuning, ensembling, stacking etc. This workflow will eventually lead you to the best model for use in making predictions on new and unseen data. The finalize_model() function fits the model onto the complete dataset including the test/hold-out sample (30% in this case). The purpose of this function is to train the model on the complete dataset before it is deployed in production.
```
final_lgbm = finalize_model(lgbm) # use tuned lgbm
# Final LightGBM model parameters for deployment
print(final_lgbm)
```
Caution: One final word of caution. Once the model is finalized using finalize_model(), the entire dataset including the test/hold-out set is used for training. As such, if the model is used for predictions on the hold-out set after finalize_model() is used, the information grid printed will be misleading as you are trying to predict on the same data that was used for modeling.
```
predict_model(final_lgbm);
```
Notice how the AUC in final_lgbm has increased to 0.xxxx from 0.xxxx, even though the model is the same. This is because the final_lgbm variable has been trained on the complete dataset including the test/hold-out set.
## 8. Predict on unseen data
The predict_model() function is also used to predict on the unseen dataset. The only difference from section 6 above is that this time we will pass the data_unseen parameter. data_unseen is the variable created at the beginning of the notebook and contains the held-out test records, which were never exposed to PyCaret during training.
```
unseen_predictions = predict_model(lgbm, data=data_unseen)
#unseen_predictions.Label.describe()
#unseen_predictions.head()
```
The Label and Score columns are added onto the data_unseen set. Label is the prediction and score is the probability of the prediction. Notice that predicted results are concatenated to the original dataset while all the transformations are automatically performed in the background. You can also check the metrics on this since you have actual target column default available. To do that we will use pycaret.utils module. See example below:
```
check_metric(unseen_predictions['Target'], unseen_predictions['Label'], metric = 'Recall')
```
## 9. Saving the model
We have now finished the experiment by finalizing the lgbm model, which is now stored in the final_lgbm variable. We have also used the trained model to predict data_unseen. This brings us to the end of our experiment, but one question is still to be asked: What happens when you have more new data to predict? Do you have to go through the entire experiment again? The answer is no, PyCaret's inbuilt function save_model() allows you to save the model along with the entire transformation pipeline for later use.
```
save_model(final_lgbm,'model/Class_lgbm_Model_{}'.format(d1))
```
## 10. Loading the saved model
To load a saved model at a future date in the same or an alternative environment, we would use PyCaret's load_model() function and then easily apply the saved model on new unseen data for prediction.
```
saved_final_nb = load_model('model/Class_lgbm_Model_{}'.format(d1))
plot_model(final_lgbm, plot='confusion_matrix')
```
## Finally - Exporting the Classification Label
```
sub_class = unseen_predictions['Label']
sub_class.head()
sub_class.to_csv("model/sub_class.csv",index=False)
pd.read_csv("model/sub_class.csv")
```
Once the model is loaded in the environment, you can simply use it to predict on any new data using the same predict_model() function. Below we have applied the loaded model to predict the same data_unseen that we used in section 8 above.
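A minimal sketch of that final step, reusing the pipeline loaded above into `saved_final_nb`:
```
# Predict with the loaded pipeline on the same unseen data
new_predictions = predict_model(saved_final_nb, data=data_unseen)
new_predictions[['Target', 'Label', 'Score']].head()
```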
|
github_jupyter
|
import sys
# adding to the path variables the one folder higher (locally, not changing system variables)
sys.path.append("..")
from datetime import datetime
import pandas as pd
from scripts.helper import reduce_mem_usage
from pycaret.classification import *
from pycaret.utils import check_metric
from pycaret.datasets import get_data
import pickle
pd.set_option("display.max_columns", 120)
# Create datetime
today = datetime.today()
d1 = today.strftime("%d%m%Y")
# Test dataset from pycaret library for classification modelling
# Comment it out if you're using the Capstone dataset!
#dataset_pycaret = get_data('credit')
dataset = pd.read_csv('data/feat_train_v2.csv')
dataset_test = pd.read_csv('data/feat_test_v2.csv')
#dataset_pycaret.info()
numerical_cols = np.load("data/Numerical_Columns.npy")
categorical_cols = np.load("data/Categorical_Columns.npy")
type(numerical_cols)
numerical_cols = numerical_cols.tolist()
categorical_cols = categorical_cols.tolist()
type(numerical_cols)
# Create target for classification model
class_train = dataset[categorical_cols+numerical_cols]
class_train['Target'] = dataset['totals.transactionRevenue'].apply(lambda x: 0 if x == 0 else 1)
class_test = dataset_test[categorical_cols+numerical_cols]
class_test['Target'] = dataset_test['totals.transactionRevenue'].apply(lambda x: 0 if x == 0 else 1)
totals_transactionRevenue_zero = class_train[class_train['Target'] == 0].sample(frac=0.25, random_state=123)
totals_transactionRevenue_nonzero = class_train[class_train['Target'] != 0]
class_train = pd.concat([totals_transactionRevenue_zero, totals_transactionRevenue_nonzero], axis=0)
class_train.head()
data_unseen = class_test
#data.reset_index(inplace=True, drop=True)
#data_unseen.reset_index(inplace=True, drop=True)
print('Data for Modeling: ' + str(class_train.shape))
print('Unseen Data For Predictions: ' + str(data_unseen.shape))
print(class_train.info())
# class_train[categorical_cols] = class_train[categorical_cols].astype('category')
# class_test[categorical_cols] = class_test[categorical_cols].astype('category')
exp_clf101 = setup(data = class_train, target = 'Target', session_id=123, data_split_stratify = True, fold_strategy = 'stratifiedkfold', fix_imbalance = True, numeric_features = categorical_cols+numerical_cols)
models()
# start_time = datetime.now()
# best_model = compare_models(['lightgbm'])
# end_time = datetime.now()
# print('Duration: {}'.format(end_time - start_time))
print(best_model)
lgbm = create_model('lightgbm')
#tuned_lgbm = tune_model(lgbm)
print(tuned_lgbm)
plot_model(lgbm, plot = 'auc')
plot_model(lgbm, plot = 'pr')
plot_model(lgbm, plot='feature')
plot_model(lgbm, plot = 'confusion_matrix')
#evaluate_model(lgbm)
predict_model(lgbm) # use tuned_lgbm
final_lgbm = finalize_model(lgbm) # use tuned lgbm
# Final Random Forest model parameters for deployment
print(final_lgbm)
predict_model(final_lgbm);
unseen_predictions = predict_model(lgbm, data=data_unseen)
#unseen_predictions.Label.describe()
#unseen_predictions.head()
check_metric(unseen_predictions['Target'], unseen_predictions['Label'], metric = 'Recall')
save_model(final_lgbm,'model/Class_lgbm_Model_{}'.format(d1))
saved_final_nb = load_model('model/Class_lgbm_Model_{}'.format(d1))
plot_model(final_lgbm, plot='confusion_matrix')
sub_class = unseen_predictions['Label']
sub_class.head()
sub_class.to_csv("model/sub_class.csv",index=False)
pd.read_csv("model/sub_class.csv")
| 0.219087 | 0.911061 |
# Variational Quantum Eigensolver (VQE) using an arbitrary ansatz
In VQE, a unitary operation is applied to an initial wave function, such as the Hartree-Fock state, and that unitary is determined by the ansatz being used. Here we calculate the electronic state of the hydrogen molecule with VQE using an ansatz we define ourselves: the Hardware Efficient Ansatz (HEA).
Install the necessary libraries. The Hamiltonian is obtained with OpenFermion.
```
!pip3 install blueqat openfermion
```
Import the necessary libraries. The optimization of VQE uses SciPy minimize.
```
from blueqat import Circuit
from openfermion.hamiltonians import MolecularData
from openfermion.transforms import get_fermion_operator, jordan_wigner, get_sparse_operator
import numpy as np
from scipy.optimize import minimize
```
## Definition of Ansatz
We choose the Hardware Efficient Ansatz (HEA). In HEA, Ry and Rz gates first act on each initialized qubit, and then CZ gates connect adjacent qubits to each other. This block consisting of Ry, Rz, and CZ gates is repeated several times. (The exact gates and connectivity differ slightly between studies.) Physically, this ansatz can be interpreted as a combination of single-qubit state changes via Bloch-sphere rotations by the Ry and Rz gates and an extension of the wave function's search space via the CZ gates.
The arguments are the number of qubits n_qubits and the gate depth n_depth. The wave function is initialized in this function.
```
def HEA(params,n_qubits,n_depth):
#Wave function initialization |1100>
circ=Circuit().x[2, 3]
#Circuit creation
params_devided=np.array_split(params,n_depth)
for params_one_depth in params_devided:
for i,param in enumerate(params_one_depth):
if i < n_qubits:
circ.ry(param)[i]
else:
circ.rz(param)[i%n_qubits]
for qbit in range(n_qubits):
if qbit < n_qubits-1:
circ.cz[qbit,qbit+1]
#Running the circuit
wf = circ.run(backend="numpy")
return wf
```
## Expectations and cost functions
Get the expected value from the obtained wave function.
```
def expect(wf,hamiltonian):
return np.vdot(wf, hamiltonian.dot(wf)).real
def cost(params,hamiltonian,n_qubits,n_depth):
wf=HEA(params,n_qubits,n_depth)
return expect(wf,hamiltonian)
```
## Obtaining the information of molecule
Specify the bond length of the hydrogen molecule and use OpenFermion to obtain information about the molecule. The basis set is STO-3G.
```
def get_molecule(length):
geometry = [('H',(0.,0.,0.)),('H',(0.,0.,length))]
try:
description = f'{length:.2f}'
molecule = MolecularData(geometry, "sto-3g",1,description=description)
molecule.load()
except:
description = f'{length:.1f}'
molecule = MolecularData(geometry, "sto-3g",1,description=description)
molecule.load()
return molecule
```
## Calculation Execution and Plotting
Run a VQE on each bond length (this takes a few minutes). We then compare the results of the VQE and Full CI (FCI) calculations with respect to energy and bond length.
```
#Recording of bond length, HEA and FCI results
bond_len_list = [];energy_list=[];fullci_list=[]
#Execute the calculation for each bond length
for bond_len in np.arange(0.2,2.5,0.1):
molecule = get_molecule(bond_len)
#Determination of the number of bits, depth and initial parameter values
n_qubits=molecule.n_qubits
n_depth=4
init_params=np.random.rand(2*n_qubits*n_depth)*0.1
#Hamiltonian Definition
hamiltonian = get_sparse_operator(jordan_wigner(get_fermion_operator(molecule.get_molecular_hamiltonian())))
#Optimization run
result=minimize(cost,x0=init_params,args=(hamiltonian,n_qubits,n_depth))
#Recording of bond length, HEA and FCI results
bond_len_list.append(bond_len)
energy_list.append(result.fun)
fullci_list.append(molecule.fci_energy)
#Plotting
import matplotlib.pyplot as plt
plt.plot(bond_len_list,fullci_list,label="FCI",color="blue")
plt.plot(bond_len_list,energy_list, marker="o",label="VQE",color="red",linestyle='None')
plt.legend()
```
Depending on the initial parameters, the VQE energy tends to deviate from the FCI energy in the large-bond-length region. The reason is that the prepared initial wave function differs more from the true solution as the bond length increases. The accuracy might be improved by changing the initial parameters, the ansatz, and so on.
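For example, a simple multi-start strategy, repeating the optimization from several random initial parameter sets and keeping the lowest energy, can reduce the dependence on the starting point. A minimal sketch, reusing the `cost` function and the Hamiltonian from the last bond length computed above (`n_trials` is an arbitrary choice here):
```
# Multi-start sketch: run the optimizer from several random starting points
# and keep the lowest energy found
n_trials = 5
best = None
for _ in range(n_trials):
    init_params = np.random.rand(2 * n_qubits * n_depth) * 0.1
    res = minimize(cost, x0=init_params, args=(hamiltonian, n_qubits, n_depth))
    if best is None or res.fun < best.fun:
        best = res
print("Lowest energy over", n_trials, "restarts:", best.fun)
```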
|
github_jupyter
|
!pip3 install blueqat openfermion
from blueqat import Circuit
from openfermion.hamiltonians import MolecularData
from openfermion.transforms import get_fermion_operator, jordan_wigner, get_sparse_operator
import numpy as np
from scipy.optimize import minimize
def HEA(params,n_qubits,n_depth):
#Wave function initialization |1100>
circ=Circuit().x[2, 3]
#Circuit creation
params_devided=np.array_split(params,n_depth)
for params_one_depth in params_devided:
for i,param in enumerate(params_one_depth):
if i < n_qubits:
circ.ry(param)[i]
else:
circ.rz(param)[i%n_qubits]
for qbit in range(n_qubits):
if qbit < n_qubits-1:
circ.cz[qbit,qbit+1]
#Running the circuit
wf = circ.run(backend="numpy")
return wf
def expect(wf,hamiltonian):
return np.vdot(wf, hamiltonian.dot(wf)).real
def cost(params,hamiltonian,n_qubits,n_depth):
wf=HEA(params,n_qubits,n_depth)
return expect(wf,hamiltonian)
def get_molecule(length):
geometry = [('H',(0.,0.,0.)),('H',(0.,0.,length))]
try:
description = f'{length:.2f}'
molecule = MolecularData(geometry, "sto-3g",1,description=description)
molecule.load()
except:
description = f'{length:.1f}'
molecule = MolecularData(geometry, "sto-3g",1,description=description)
molecule.load()
return molecule
#Recording of bond length, HEA and FCI results
bond_len_list = [];energy_list=[];fullci_list=[]
#Execute the calculation for each bond length
for bond_len in np.arange(0.2,2.5,0.1):
molecule = get_molecule(bond_len)
#Determination of the number of bits, depth and initial parameter values
n_qubits=molecule.n_qubits
n_depth=4
init_params=np.random.rand(2*n_qubits*n_depth)*0.1
#Hamiltonian Definition
hamiltonian = get_sparse_operator(jordan_wigner(get_fermion_operator(molecule.get_molecular_hamiltonian())))
#Optimization run
result=minimize(cost,x0=init_params,args=(hamiltonian,n_qubits,n_depth))
#Recording of bond length, HEA and FCI results
bond_len_list.append(bond_len)
energy_list.append(result.fun)
fullci_list.append(molecule.fci_energy)
#Plotting
import matplotlib.pyplot as plt
plt.plot(bond_len_list,fullci_list,label="FCI",color="blue")
plt.plot(bond_len_list,energy_list, marker="o",label="VQE",color="red",linestyle='None')
plt.legend()
| 0.590071 | 0.983754 |
# 1-1.1 Intro Python
## Getting started with Python in Jupyter Notebooks
- **Python 3 in Jupyter notebooks**
- **`print()`**
- **comments**
- data types basics
- variables
- addition with Strings and Integers
- Errors
- character art
-----
><font size="5" color="#00A0B2" face="verdana"> <B>Student will be able to</B></font>
- **use Python 3 in Jupyter notebooks**
- **write working code using `print()` and `#` comments**
- combine Strings using string addition (`+`)
- add numbers in code (`+`)
- troubleshoot errors
- create character art
#
<font size="6" color="#00A0B2" face="verdana"> <B>Concept</B></font>
## Hello World! - python `print()` statement
Using code to write "Hello World!" on the screen is the traditional first program when learning a new language in computer science
Python has a very simple implementation:
```python
print("Hello World!")
```
## "Hello World!"
[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/6f5784c6-eece-4dfe-a14e-9dcf6ee81a7f/Unit1_Section1.1-Hello_World.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/6f5784c6-eece-4dfe-a14e-9dcf6ee81a7f/Unit1_Section1.1-Hello_World.vtt","srclang":"en","kind":"subtitles","label":"english"}])
Our "Hello World!" program worked because this notebook hosts a python interpreter that can run python code cells.
Try showing
```python
"Hello programmer!"
```
To try it, enter new text inside the quotation marks in a code cell. Click on the cell to edit the code.
What happens if any part of `print` is capitalized, or if there are no quotation marks around the greeting?
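For instance (both lines are left as comments here, because each one raises an error when run):
```python
# Print("Hello programmer!")   -> NameError: name 'Print' is not defined (names are case-sensitive)
# print(Hello programmer!)     -> SyntaxError (the quotation marks are missing)
```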
## Methods for running the code in a cell
1. **Click in the cell below** and **press "Ctrl+Enter"** to run the code
or
2. **Click in the cell below** and **press "Shift+Enter"** to run the code and move to the next cell
3. **Menu: Cell**...
a. **> Run Cells** runs the highlighted cell(s)
b. **> Run All Above** runs the highlighted cell and above
c. **> Run All Below** runs the highlighted cell and below
<font size="4" color="#00A0B2" face="verdana"> <B>Example</B></font>
```
# [ ] Review the code, run the code
print("Hello programmer!")
```
#
<font size="6" color="#00A0B2" face="verdana"> <B>Concept</B></font>
## Comments
[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/34e2afb1-d07a-44ca-8860-bba1a5476caa/Unit1_Section1.1-Comments.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/34e2afb1-d07a-44ca-8860-bba1a5476caa/Unit1_Section1.1-Comments.vtt","srclang":"en","kind":"subtitles","label":"english"}])
When coding, programmers include comments to explain how the code works, to leave reminders, and to help others who encounter the code.
### comments start with the `#` symbol
#
<font size="6" color="#00A0B2" face="verdana"> <B>Example</B></font>
```
# this is how a comment looks in python code
# every comment line starts with the # symbol
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 1</B></font>
## Program: "Hello World!" with comment
- add a comment describing the code purpose
- create an original "Hello World" style message
```
# This is how to leave a comment in the code; this module uses [ ] in a comment to denote a task
print("Hello World!")
```
#
<font size="6" color="#00A0B2" face="verdana"> <B>Concepts</B></font>
## Notebooks and Libraries
Jupyter Notebooks provide a balance of jotting down important summary information along with providing a live code development environment where we can write and run Python code. This course uses cloud hosted Jupyter [Notebooks](https://notebooks.azure.com) on Microsoft Azure and we will walk through the basics and some best practices for notebook use.
## add a notebook library
- New: https://notebooks.azure.com/library > New Library
- Link: from a shared Azure Notebook library link > open link, sign in> clone and Run
- Add: open library > Add Notebooks > from computer > navigate to file(s)
## working in notebook cells
- **Markdown cells** display text in a web page format. Markdown is code that formats the way the cell displays (*this cell is Markdown*)
- **Code cells** contain python code and can be interpreted and run from a cell. Code cells display code and output.
- **in edit** or **previously run:** cells can display in editing mode or cells can display results of *code* having been run
[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/6b9134fc-c7d7-4d25-b0a7-bdb79d3e1a5b/Unit1_Section1.1-EditRunSave.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/6b9134fc-c7d7-4d25-b0a7-bdb79d3e1a5b/Unit1_Section1.1-EditRunSave.vtt","srclang":"en","kind":"subtitles","label":"english"}])
### edit mode
- **text** cells in editing mode show markdown code
- Markdown cells keep editing mode appearance until the cell is run
- **code** (Python 3) cells look the same before and after editing, but may show different output once run
- clicking another cell moves the green highlight that indicates which cell has active editing focus
### cells need to be saved
- the notebook will frequently auto save
- **best practice** is to manually save after editing a cell using **"Ctrl + S"** or alternatively, **Menu: File > Save and Checkpoint**
#
<font size="6" color="#00A0B2" face="verdana"> <B>Concepts</B></font>
## Altering Notebook Structure
[]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/cb195105-eee8-4068-9007-64b2392cd9ff/Unit1_Section1.1-Language_Cells.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/cb195105-eee8-4068-9007-64b2392cd9ff/Unit1_Section1.1-Language_Cells.vtt","srclang":"en","kind":"subtitles","label":"english"}])
### add a cell
- Highlight any cell and then... add a new cell using **Menu: Insert > Insert Cell Below** or **Insert Cell Above**
- Add with Keyboard Shortcut: **"ESC + A"** to insert above or **"ESC + B"** to insert below
### choose cell type
- Format cells as Markdown or Code via the toolbar dropdown or **Menu: Cell > Cell Type > Code** or **Markdown**
- Cells default to Code when created but can be reformatted from code to Markdown and vice versa
### change notebook page language
- The course uses Python 3 but Jupyter Notebooks can be in Python 2 or 3 (and a language called R)
- To change a notebook to Python 3 go to **"Menu: Kernel > Change Kernel> Python 3"**
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 2</B></font>
## Insert a new cell
- Insert a new Code cell below with a comment describing the task
- edit cell: add print() with the message "after edit, save!"
- run the cell
```
# Inserted new cell below
print("after edit, save!")
```
### Insert another new cell
- Insert a new Code cell below
- edit cell: add print() with the message showing the keyboard Shortcut to save **Ctrl + s**
- run the cell
```
print("Keybord shortcut to sace is CTL+s")
```
[Terms of use](http://go.microsoft.com/fwlink/?LinkID=206977) [Privacy & cookies](https://go.microsoft.com/fwlink/?LinkId=521839) © 2017 Microsoft
|
github_jupyter
|
print("Hello World!")
# [ ] Review the code, run the code
print("Hello programmer!")
# this is how a comment looks in python code
# every comment line starts with the # symbol
# This is how to leave a comment in the code; the module uses [ ] in a comment to denote a task
print("Hello World!")
# Inserted new cell below
print("after edit, save!")
print("Keybord shortcut to sace is CTL+s")
| 0.216757 | 0.884389 |
# Classical Computation on a Quantum Computer
## Contents
1. [Introduction](#intro)
2. [Consulting an Oracle](#oracle)
3. [Taking Out the Garbage](#garbage)
## 1. Introduction <a id='intro'></a>
One consequence of having a universal set of quantum gates is the ability to reproduce any classical computation. We simply need to compile the classical computation down into the Boolean logic gates that we saw in *The Atoms of Computation*, and then reproduce these on a quantum computer.
This demonstrates an important fact about quantum computers: they can do anything that a classical computer can do, and they can do so with at least the same computational complexity. Though it is not the aim to use quantum computers for tasks at which classical computers already excel, this is nevertheless a good demonstration that quantum computers can solve a general range of problems.
Furthermore, problems that require quantum solutions often involve components that can be tackled using classical algorithms. In some cases, these classical parts can be done on classical hardware. However, in many cases, the classical algorithm must be run on inputs that exist in a superposition state. This requires the classical algorithm to be run on quantum hardware. In this section we introduce some of the ideas used when doing this.
## 2. Consulting an Oracle <a id='oracle'></a>
Many quantum algorithms are based around the analysis of some function $f(x)$. Often these algorithms simply assume the existence of some 'black box' implementation of this function, which we can give an input $x$ and receive the corresponding output $f(x)$. This is referred to as an *oracle*.
The advantage of thinking of the oracle in this abstract way allows us to concentrate on the quantum techniques we use to analyze the function, rather than the function itself.
In order to understand how an oracle works within a quantum algorithm, we need to be specific about how they are defined. One of the main forms that oracles take is that of *Boolean oracles*. These are described by the following unitary evolution,
$$
U_f \left|x , \bar 0 \right\rangle = \left|x, f(x)\right\rangle.
$$
Here $\left|x , \bar 0 \right\rangle = \left|x \right\rangle \otimes \left|\bar 0 \right\rangle$ is used to represent a multi-qubit state consisting of two registers. The first register is in state $\left|x\right\rangle$, where $x$ is a binary representation of the input to our function. The number of qubits in this register is the number of bits required to represent the inputs.
The job of the second register is to similarly encode the output. Specifically, the state of this register after applying $U_f$ will be a binary representation of the output $\left|f(x)\right\rangle$, and this register will consist of as many qubits as are required for this. This initial state $\left|\bar 0 \right\rangle$ for this register represents the state for which all qubits are $\left|0 \right\rangle$. For other initial states, applying $U_f$ will lead to different results. The specific results that arise will depend on how we define the unitary $U_f$.
Another form of oracle is the *phase oracle*, which is defined as follows,
$$
P_f \left|x \right\rangle = (-1)^{f(x)} \left|x \right\rangle,
$$
where the output $f(x)$ is typically a simple bit value of $0$ or $1$.
Though it seems much different in form from the Boolean oracle, it is very much another expression of the same basic idea. In fact, it can be realized using the same 'phase kickback' mechanism as described in a previous section.
To see this, consider the Boolean oracle $U_f$ that would correspond to the same function. This can be implemented as something that is essentially a generalized form of the controlled-NOT. It is controlled on the input register, such that it leaves the output bit in state $\left|0 \right\rangle$ for $f(x)=0$, and applies an $X$ to flip it to $\left|1 \right\rangle$ if $f(x)=1$. If the initial state of the output register were $\left|- \right\rangle$ rather than $\left|0 \right\rangle$, the effect of $U_f$ would then be to induce exactly the phase of $(-1)^{f(x)}$ required.
$$
U_f \left( \left|x \right\rangle \otimes \left| - \right\rangle \right) = (P_f \otimes I) \left( \left|x \right\rangle \otimes \left| - \right\rangle \right)
$$
Since the $\left|- \right\rangle$ state of the output qubit is left unchanged by the whole process, it can safely be ignored. The end effect is therefore that the phase oracle is simply implemented by the corresponding Boolean oracle.
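To make this concrete, here is a minimal sketch in Qiskit, assuming the simple single-bit function $f(x)=x$, for which the Boolean oracle is just a single `cx` gate. Preparing the output qubit in $\left|-\right\rangle$ and applying the Boolean oracle leaves the output qubit unchanged while kicking the phase $(-1)^{f(x)}$ back onto the input:
```
from qiskit import QuantumCircuit

# a sketch assuming f(x) = x, so the Boolean oracle U_f is a single cx gate
qc = QuantumCircuit(2)

# prepare the output qubit (qubit 1) in the |-> state
qc.x(1)
qc.h(1)

# apply the Boolean oracle, controlled on the input register (qubit 0)
qc.cx(0, 1)

# the output qubit remains in |->, and the input register acquires (-1)^f(x)
qc.draw()
```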
## 3. Taking Out the Garbage <a id='garbage'></a>
The functions evaluated by an oracle are typically those that can be evaluated efficiently on a classical computer. However, the need to express them as a unitary in one of the forms shown above means that they must instead be implemented using quantum gates. This is not quite as simple as taking the Boolean gates that can implement the classical algorithm and replacing them with their quantum counterparts.
One issue that we must take care of is that of reversibility. A unitary of the form $U = \sum_x \left| f(x) \right\rangle \left\langle x \right|$ is only possible if every unique input $x$ results in a unique output $f(x)$, which is not true in general. However, we can force it to be true by simply including a copy of the input in the output. It is this that leads us to the form for Boolean oracles as we saw earlier
$$
U_f \left|x,\bar 0 \right\rangle = \left| x,f(x) \right\rangle
$$
With the computation written as a unitary, we are able to consider the effect of applying it to superposition states. For example, let us take the superposition over all possible inputs $x$ (unnormalized for simplicity). This will result in a superposition of all possible input/output pairs,
$$
U_f \sum_x \left|x,0\right\rangle = \sum_x \left|x,f(x)\right\rangle.
$$
When adapting classical algorithms, we also need to take care that these superpositions behave as we need them to. Classical algorithms typically do not only compute the desired output, but will also create additional information along the way. Such additional remnants of a computation do not pose a significant problem classically, and the memory they take up can easily be recovered by deleting them. From a quantum perspective, however, things are not so easy.
For example, consider the case that a classical algorithm performs the following process,
$$
V_f \left|x,\bar 0, \bar 0 \right\rangle = \left| x,f(x), g(x) \right\rangle
$$
Here we see a third register, which is used as a 'scratchpad' for the classical algorithm. We will refer to information that is left in this register at the end of the computation as the 'garbage', $g(x)$. Let us use $V_f$ to denote a unitary that implements the above.
Quantum algorithms are typically built upon interference effects. The simplest such effect is to create a superposition using some unitary, and then remove it using the inverse of that unitary. The entire effect of this is, of course, trivial. However, we must ensure that our quantum computer is at least able to do such trivial things.
For example, suppose some process within our quantum computation has given us the superposition state $\sum_x \left|x,f(x)\right\rangle$, and we are required to return this to the state $\sum_x \left|x,0\right\rangle$. For this we could simply apply $U_f^\dagger$. The ability to apply this follows directly from knowing a circuit that would apply $U_f$, since we would simply need to replace each gate in the circuit with its inverse and reverse the order.
However, suppose we don't know how to apply $U_f$, but instead know how to apply $V_f$. This means that we can't apply $U_f^\dagger$ here, but could use $V_f^\dagger$. Unfortunately, the presence of the garbage means that it won't have the same effect.
For an explicit example of this we can take a very simple case. We'll restrict $x$, $f(x)$ and $g(x)$ to all consist of just a single bit. We'll also use $f(x) = x$ and $g(x) = x$, each of which can be achieved with just a single `cx` gate controlled on the input register.
Specifically, the circuit to implement $U_f$ is just the following single `cx` between the single bit of the input and output registers.
```
from qiskit import QuantumCircuit, QuantumRegister
input_bit = QuantumRegister(1, 'input')
output_bit = QuantumRegister(1, 'output')
garbage_bit = QuantumRegister(1, 'garbage')
Uf = QuantumCircuit(input_bit, output_bit, garbage_bit)
Uf.cx(input_bit[0], output_bit[0])
Uf.draw()
```
For $V_f$, where we also need to make a copy of the input for the garbage, we can use the following two `cx` gates.
```
Vf = QuantumCircuit(input_bit, output_bit, garbage_bit)
Vf.cx(input_bit[0], garbage_bit[0])
Vf.cx(input_bit[0], output_bit[0])
Vf.draw()
```
Now we can look at the effect of first applying $U_f$, and then applying $V_f^{\dagger}$. The net effect is the following circuit.
```
qc = Uf + Vf.inverse()
qc.draw()
```
This circuit begins with two identical `cx` gates, whose effects cancel each other out. All that remains is the final `cx` between the input and garbage registers. Mathematically, this means
$$
V_f^\dagger U_f \left| x,0,0 \right\rangle = V_f^\dagger \left| x,f(x),0 \right\rangle = \left| x , 0 ,g(x) \right\rangle.
$$
Here we see that the action of $V_f^\dagger$ does not simply return us to the initial state, but instead leaves the first qubit entangled with unwanted garbage. Any subsequent steps in an algorithm will therefore not run as expected, since the state is not the one that we need.
For this reason we need a way of removing classical garbage from our quantum algorithms. This can be done by a method known as 'uncomputation'. We simply need to take another blank variable and apply $V_f$
$$
\left| x, 0, 0, 0 \right\rangle \rightarrow \left| x,f(x),g(x),0 \right\rangle.
$$
Then we apply a set of controlled-NOT gates, each controlled on one of the qubits used to encode the output, and targeted on the corresponding qubit in the extra blank variable.
Here's the circuit to do this for our example using single qubit registers.
```
final_output_bit = QuantumRegister(1, 'final-output')
copy = QuantumCircuit(output_bit, final_output_bit)
copy.cx(output_bit, final_output_bit)
copy.draw()
```
The effect of this is to copy the information over (if you have heard of the no-cloning theorem, note that this is not the same process). Specifically, it transforms the state in the following way.
$$
\left| x,f(x),g(x),0 \right\rangle \rightarrow \left| x,f(x),g(x),f(x) \right\rangle.
$$
Finally we apply $V_f^\dagger$, which undoes the original computation.
$$
\left| x,f(x),g(x),f(x) \right\rangle \rightarrow \left| x,0,0,f(x) \right\rangle.
$$
The copied output nevertheless remains. The net effect is to perform the computation without garbage, and hence to achieve our desired $U_f$.
For our example using single qubit registers, and for which $f(x) = x$, the whole process corresponds to the following circuit.
```
(Vf.inverse() + copy + Vf).draw()
```
Using what you know so far of how the `cx` gates work, you should be able to see that the two applied to the garbage register will cancel each other out. We have therefore successfully removed the garbage.
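As a quick check (a minimal sketch, assuming the `Statevector` class from `qiskit.quantum_info` is available in your version of Qiskit), we can simulate the circuit above for the input $x=1$ and confirm that only the input and final output registers end up in state $\left|1\right\rangle$:
```
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# qubit order: 0 = input, 1 = output, 2 = garbage, 3 = final output
check = QuantumCircuit(4)
check.x(0) # set the input to x = 1

check.cx(0, 1) # Vf^dagger (each cx gate is its own inverse)
check.cx(0, 2)
check.cx(1, 3) # copy the output register to the final output register
check.cx(0, 2) # Vf, which uncomputes the output and garbage registers
check.cx(0, 1)

# only the basis state '1001' (final=1, garbage=0, output=0, input=1) should appear
Statevector.from_instruction(check).probabilities_dict()
```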
### Quick Exercises
1. Show that the output is correctly written to the 'final output' register (and only to this register) when the 'output' register is initialized as $|0\rangle$.
2. Determine what happens when the 'output' register is initialized as $|1\rangle$.
With this method, and all of the others covered in this chapter, we now have all the tools we need to create quantum algorithms. Now we can move on to seeing those algorithms in action.
```
import qiskit
qiskit.__qiskit_version__
```
|
github_jupyter
|
from qiskit import QuantumCircuit, QuantumRegister
input_bit = QuantumRegister(1, 'input')
output_bit = QuantumRegister(1, 'output')
garbage_bit = QuantumRegister(1, 'garbage')
Uf = QuantumCircuit(input_bit, output_bit, garbage_bit)
Uf.cx(input_bit[0], output_bit[0])
Uf.draw()
Vf = QuantumCircuit(input_bit, output_bit, garbage_bit)
Vf.cx(input_bit[0], garbage_bit[0])
Vf.cx(input_bit[0], output_bit[0])
Vf.draw()
qc = Uf + Vf.inverse()
qc.draw()
final_output_bit = QuantumRegister(1, 'final-output')
copy = QuantumCircuit(output_bit, final_output_bit)
copy.cx(output_bit, final_output_bit)
copy.draw()
(Vf.inverse() + copy + Vf).draw()
import qiskit
qiskit.__qiskit_version__
| 0.560493 | 0.994214 |
__Fitting__
In this example, we'll fit the ccd imaging data we simulated in the previous exercise. We'll do this using model images generated via a tracer, and by comparing to the simulated image we'll get diagnostics about the quality of the fit.
```
%matplotlib inline
from autolens.data import ccd
from autolens.data.array import mask as ma
from autolens.lens import ray_tracing, lens_fit
from autolens.model.galaxy import galaxy as g
from autolens.lens import lens_data as ld
from autolens.model.profiles import light_profiles as lp
from autolens.model.profiles import mass_profiles as mp
from autolens.data.plotters import ccd_plotters
from autolens.lens.plotters import ray_tracing_plotters
from autolens.lens.plotters import lens_fit_plotters
# If you are using Docker, the path you should use to output these images is (e.g. comment out this line)
# path = '/home/user/workspace/howtolens/chapter_1_introduction'
# If you aren't using Docker, you need to change the path below to the chapter 2 directory and uncomment it
# path = '/path/to/user/workspace/howtolens/chapter_1_introduction'
ccd_data = ccd.load_ccd_data_from_fits(image_path=path + '/data/image.fits',
noise_map_path=path+'/data/noise_map.fits',
psf_path=path + '/data/psf.fits', pixel_scale=0.1)
```
The variable ccd_data is a CCDData object, which is a 'package' of all components of the CCD data of the lens, in particular:
1) The image.
2) The Point Spread Function (PSF).
3) Its noise-map.
```
print('Image:')
print(ccd_data.image)
print()
print('Noise-Map:')
print(ccd_data.noise_map)
print()
print('PSF:')
print(ccd_data.psf)
```
To fit an image, we first specify a mask. A mask describes the sections of the image that we fit.
Typically, we want to mask out regions of the image where the lens and source galaxies are not visible, for example at the edges where the signal is entirely background sky and noise.
For the image we simulated, a 3" circular mask will do the job.
```
mask = ma.Mask.circular(shape=ccd_data.shape, pixel_scale=ccd_data.pixel_scale, radius_arcsec=3.0)
print(mask) # 1 = True, which means the pixel is masked. Edge pixels are indeed masked.
print(mask[48:53, 48:53]) # Whereas central pixels are False and therefore unmasked.
```
We can use a ccd_plotter to compare the mask and the image - this is useful if we really want to 'tailor' a mask to the lensed source's light (which in this example, we won't).
```
ccd_plotters.plot_image(ccd_data=ccd_data, mask=mask)
```
We can also use the mask to 'zoom' our plot around the masked region only - meaning that if our image is very large, we can focus-in on the lens and source galaxies.
You'll see this is an option for pretty much every plotter in PyAutoLens, and is something we'll do often throughout the tutorials.
```
ccd_plotters.plot_image(ccd_data=ccd_data, mask=mask, zoom_around_mask=True)
```
We can also remove all pixels outside of the mask from the plot, which means that if bright pixels outside the mask are messing up the color scheme and plot, they'll be removed. Again, we'll do this throughout the code.
```
ccd_plotters.plot_image(ccd_data=ccd_data, mask=mask, extract_array_from_mask=True, zoom_around_mask=True)
```
Now that we've loaded the ccd data and created a mask, we use them to create a 'lens data' object, which we'll do using the lens_data module (imported as 'ld').
A lens data object is a 'package' of all parts of a data-set we need in order to fit it with a lens model:
1) The ccd-data, e.g. the image, PSF (so that when we compare a tracer's image-plane image to the image data we can include blurring due to the telescope optics) and noise-map (so our goodness-of-fit measure accounts for noise in the observations).
2) The mask, so that only the regions of the image with a signal are fitted.
3) A grid-stack aligned to the ccd-imaging data's pixels: so the tracer's image-plane image is generated on the same (masked) grid as the image.
```
lens_data = ld.LensData(ccd_data=ccd_data, mask=mask)
ccd_plotters.plot_image(ccd_data=ccd_data)
```
By printing its attribute, we can see that it does indeed contain the image, mask, psf and so on
```
print('Image:')
print(lens_data.image)
print()
print('Noise-Map:')
print(lens_data.noise_map)
print()
print('PSF:')
print(lens_data.psf)
print()
print('Mask')
print(lens_data.mask)
print()
print('Grid')
print(lens_data.grid_stack.regular)
```
The image, noise-map and grids are masked using the mask and mapped to 1D arrays for fast calculations.
```
print(lens_data.image.shape) # This is the original 2D image
print(lens_data.image_1d.shape)
print(lens_data.noise_map_1d.shape)
print(lens_data.grid_stack.regular.shape)
```
To fit an image, we need to create an image-plane image using a tracer. Let's use the same tracer we simulated the ccd data with (thus, our fit should be 'perfect').
It's worth noting that below, we use the lens_data's grid-stack to set up the tracer. This ensures that our image-plane image will be the same resolution and alignment as our image-data, as well as being masked appropriately.
```
lens_galaxy = g.Galaxy(mass=mp.EllipticalIsothermal(centre=(0.0, 0.0), einstein_radius=1.6, axis_ratio=0.7, phi=45.0))
source_galaxy = g.Galaxy(light=lp.EllipticalSersic(centre=(0.1, 0.1), axis_ratio=0.8, phi=45.0,
intensity=1.0, effective_radius=1.0, sersic_index=2.5))
tracer = ray_tracing.TracerImageSourcePlanes(lens_galaxies=[lens_galaxy], source_galaxies=[source_galaxy],
image_plane_grid_stack=lens_data.grid_stack)
ray_tracing_plotters.plot_image_plane_image(tracer=tracer)
```
To fit the image, we pass the lens data and tracer to the fitting module. This performs the following:
1) Blurs the tracer's image-plane image with the lens data's PSF, ensuring that the telescope optics are accounted for by the fit. This creates the fit's 'model_image'.
2) Computes the difference between this model_image and the observed image-data, creating the fit's 'residual_map'.
3) Divides the residuals by the noise-map and squares each value, creating the fit's 'chi_squared_map'.
4) Sums up these chi-squared values and converts them to a 'likelihood', which quantifies how good the tracer's fit to the data was (higher likelihood = better fit); a rough sketch of this arithmetic is shown after this list.
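Here is that sketch of the arithmetic behind steps 2-4, assuming a simple Gaussian noise model and using small made-up arrays rather than the actual PyAutoLens internals:
```
import numpy as np

# hypothetical 1D stand-ins for the masked image, model image and noise-map
image = np.array([1.0, 2.0, 3.0])
model_image = np.array([1.1, 1.9, 3.2])
noise_map = np.array([0.1, 0.1, 0.1])

residual_map = image - model_image
chi_squared_map = (residual_map / noise_map) ** 2.0

# a Gaussian likelihood penalizes the summed chi-squared plus a noise normalization term
chi_squared = np.sum(chi_squared_map)
noise_normalization = np.sum(np.log(2.0 * np.pi * noise_map ** 2.0))
likelihood = -0.5 * (chi_squared + noise_normalization)
print(likelihood)
```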
```
fit = lens_fit.fit_lens_data_with_tracer(lens_data=lens_data, tracer=tracer)
lens_fit_plotters.plot_fit_subplot(fit=fit, should_plot_mask=True, extract_array_from_mask=True, zoom_around_mask=True)
```
We can print the fit's attributes - if we don't specify particular pixels, we'll see all zeros at the edges, as they were masked:
```
print('Model-Image Edge Pixels:')
print(fit.model_image)
print()
print('Residuals Edge Pixels:')
print(fit.residual_map)
print()
print('Chi-Squareds Edge Pixels:')
print(fit.chi_squared_map)
```
Of course, the central unmasked pixels have non-zero values.
```
print('Model-Image Central Pixels:')
print(fit.model_image[48:53, 48:53])
print()
print('Residuals Central Pixels:')
print(fit.residual_map[48:53, 48:53])
print()
print('Chi-Squareds Central Pixels:')
print(fit.chi_squared_map[48:53, 48:53])
```
It also provides a likelihood, which is a single-figure estimate of how good the model image fitted the simulated image (in unmasked pixels only!).
```
print('Likelihood:')
print(fit.likelihood)
```
We used the same tracer to create and fit the image. Therefore, our fit to the image was excellent. For instance, by inspecting the residuals and chi-squareds, one can see no signs of the source galaxy's light present, indicating a good fit.
This solution should translate to one of the highest-likelihood solutions possible.
Let's change the tracer so that it's near the correct solution, but slightly off. Below, we slightly offset the lens galaxy by 0.005"
```
lens_galaxy = g.Galaxy(mass=mp.EllipticalIsothermal(centre=(0.005, 0.005), einstein_radius=1.6, axis_ratio=0.7, phi=45.0))
source_galaxy = g.Galaxy(light=lp.EllipticalSersic(centre=(0.1, 0.1), axis_ratio=0.8, phi=45.0,
intensity=1.0, effective_radius=1.0, sersic_index=2.5))
tracer = ray_tracing.TracerImageSourcePlanes(lens_galaxies=[lens_galaxy], source_galaxies=[source_galaxy],
image_plane_grid_stack=lens_data.grid_stack)
fit = lens_fit.fit_lens_data_with_tracer(lens_data=lens_data, tracer=tracer)
lens_fit_plotters.plot_fit_subplot(fit=fit, should_plot_mask=True, extract_array_from_mask=True, zoom_around_mask=True)
```
We now observe residuals at the locations where the source galaxy was observed, which corresponds to an increase in chi-squared values (which determine our goodness-of-fit).
Let's compare the likelihood to the value we computed above (which was 11697.24):
```
print('Previous Likelihood:')
print(11697.24)
print('New Likelihood:')
print(fit.likelihood)
```
It decreases! This model was a worse fit to the data.
Let's change the tracer one more time, to a solution that is nowhere near the correct one.
```
lens_galaxy = g.Galaxy(mass=mp.EllipticalIsothermal(centre=(0.005, 0.005), einstein_radius=1.3, axis_ratio=0.8, phi=45.0))
source_galaxy = g.Galaxy(light=lp.EllipticalSersic(centre=(0.1, 0.1), axis_ratio=0.7, phi=65.0,
intensity=1.0, effective_radius=0.4, sersic_index=3.5))
tracer = ray_tracing.TracerImageSourcePlanes(lens_galaxies=[lens_galaxy], source_galaxies=[source_galaxy],
image_plane_grid_stack=lens_data.grid_stack)
fit = lens_fit.fit_lens_data_with_tracer(lens_data=lens_data, tracer=tracer)
lens_fit_plotters.plot_fit_subplot(fit=fit, should_plot_mask=True, extract_array_from_mask=True, zoom_around_mask=True)
```
Clearly, the model provides a terrible fit, and this tracer is not a plausible representation of the image-data (of course, we already knew that, given that we simulated it!)
The likelihood drops dramatically, as expected.
```
print('Previous Likelihoods:')
print(11697.24)
print(10319.44)
print('New Likelihood:')
print(fit.likelihood)
```
Congratulations, you've fitted your first strong lens with PyAutoLens! Perform the following exercises:
1) In this example, we 'knew' the correct solution, because we simulated the lens ourselves. In the real Universe, we have no idea what the correct solution is. How would you go about finding the correct solution? Could you find a solution that fits the data reasonably well through trial and error?
|
github_jupyter
|
%matplotlib inline
from autolens.data import ccd
from autolens.data.array import mask as ma
from autolens.lens import ray_tracing, lens_fit
from autolens.model.galaxy import galaxy as g
from autolens.lens import lens_data as ld
from autolens.model.profiles import light_profiles as lp
from autolens.model.profiles import mass_profiles as mp
from autolens.data.plotters import ccd_plotters
from autolens.lens.plotters import ray_tracing_plotters
from autolens.lens.plotters import lens_fit_plotters
# If you are using Docker, the path you should use to output these images is (e.g. comment out this line)
# path = '/home/user/workspace/howtolens/chapter_1_introduction'
# If you aren't using Docker, you need to change the path below to the chapter 2 directory and uncomment it
# path = '/path/to/user/workspace/howtolens/chapter_1_introduction'
ccd_data = ccd.load_ccd_data_from_fits(image_path=path + '/data/image.fits',
noise_map_path=path+'/data/noise_map.fits',
psf_path=path + '/data/psf.fits', pixel_scale=0.1)
print('Image:')
print(ccd_data.image)
print()
print('Noise-Map:')
print(ccd_data.noise_map)
print()
print('PSF:')
print(ccd_data.psf)
mask = ma.Mask.circular(shape=ccd_data.shape, pixel_scale=ccd_data.pixel_scale, radius_arcsec=3.0)
print(mask) # 1 = True, which means the pixel is masked. Edge pixels are indeed masked.
print(mask[48:53, 48:53]) # Whereas central pixels are False and therefore unmasked.
ccd_plotters.plot_image(ccd_data=ccd_data, mask=mask)
ccd_plotters.plot_image(ccd_data=ccd_data, mask=mask, zoom_around_mask=True)
ccd_plotters.plot_image(ccd_data=ccd_data, mask=mask, extract_array_from_mask=True, zoom_around_mask=True)
lens_data = ld.LensData(ccd_data=ccd_data, mask=mask)
ccd_plotters.plot_image(ccd_data=ccd_data)
print('Image:')
print(lens_data.image)
print()
print('Noise-Map:')
print(lens_data.noise_map)
print()
print('PSF:')
print(lens_data.psf)
print()
print('Mask')
print(lens_data.mask)
print()
print('Grid')
print(lens_data.grid_stack.regular)
print(lens_data.image.shape) # This is the original 2D image
print(lens_data.image_1d.shape)
print(lens_data.noise_map_1d.shape)
print(lens_data.grid_stack.regular.shape)
lens_galaxy = g.Galaxy(mass=mp.EllipticalIsothermal(centre=(0.0, 0.0), einstein_radius=1.6, axis_ratio=0.7, phi=45.0))
source_galaxy = g.Galaxy(light=lp.EllipticalSersic(centre=(0.1, 0.1), axis_ratio=0.8, phi=45.0,
intensity=1.0, effective_radius=1.0, sersic_index=2.5))
tracer = ray_tracing.TracerImageSourcePlanes(lens_galaxies=[lens_galaxy], source_galaxies=[source_galaxy],
image_plane_grid_stack=lens_data.grid_stack)
ray_tracing_plotters.plot_image_plane_image(tracer=tracer)
fit = lens_fit.fit_lens_data_with_tracer(lens_data=lens_data, tracer=tracer)
lens_fit_plotters.plot_fit_subplot(fit=fit, should_plot_mask=True, extract_array_from_mask=True, zoom_around_mask=True)
print('Model-Image Edge Pixels:')
print(fit.model_image)
print()
print('Residuals Edge Pixels:')
print(fit.residual_map)
print()
print('Chi-Squareds Edge Pixels:')
print(fit.chi_squared_map)
print('Model-Image Central Pixels:')
print(fit.model_image[48:53, 48:53])
print()
print('Residuals Central Pixels:')
print(fit.residual_map[48:53, 48:53])
print()
print('Chi-Squareds Central Pixels:')
print(fit.chi_squared_map[48:53, 48:53])
print('Likelihood:')
print(fit.likelihood)
lens_galaxy = g.Galaxy(mass=mp.EllipticalIsothermal(centre=(0.005, 0.005), einstein_radius=1.6, axis_ratio=0.7, phi=45.0))
source_galaxy = g.Galaxy(light=lp.EllipticalSersic(centre=(0.1, 0.1), axis_ratio=0.8, phi=45.0,
intensity=1.0, effective_radius=1.0, sersic_index=2.5))
tracer = ray_tracing.TracerImageSourcePlanes(lens_galaxies=[lens_galaxy], source_galaxies=[source_galaxy],
image_plane_grid_stack=lens_data.grid_stack)
fit = lens_fit.fit_lens_data_with_tracer(lens_data=lens_data, tracer=tracer)
lens_fit_plotters.plot_fit_subplot(fit=fit, should_plot_mask=True, extract_array_from_mask=True, zoom_around_mask=True)
print('Previous Likelihood:')
print(11697.24)
print('New Likelihood:')
print(fit.likelihood)
lens_galaxy = g.Galaxy(mass=mp.EllipticalIsothermal(centre=(0.005, 0.005), einstein_radius=1.3, axis_ratio=0.8, phi=45.0))
source_galaxy = g.Galaxy(light=lp.EllipticalSersic(centre=(0.1, 0.1), axis_ratio=0.7, phi=65.0,
intensity=1.0, effective_radius=0.4, sersic_index=3.5))
tracer = ray_tracing.TracerImageSourcePlanes(lens_galaxies=[lens_galaxy], source_galaxies=[source_galaxy],
image_plane_grid_stack=lens_data.grid_stack)
fit = lens_fit.fit_lens_data_with_tracer(lens_data=lens_data, tracer=tracer)
lens_fit_plotters.plot_fit_subplot(fit=fit, should_plot_mask=True, extract_array_from_mask=True, zoom_around_mask=True)
print('Previous Likelihoods:')
print(11697.24)
print(10319.44)
print('New Likelihood:')
print(fit.likelihood)
| 0.62681 | 0.952309 |
```
from IPython.core.display import HTML
css_file = '../style.css'
HTML(open(css_file, 'r').read())
```
# Introduction to matrices
## Preamble
Before we start our journey into linear algebra, we take a quick look at creating matrices using the `sympy` package. As always, we start off by initializing LaTex printing using the `init_printing()` function.
```
from sympy import init_printing
init_printing()
```
## Representing matrices
Matrices are represented as $m$ rows of values, spread over $n$ columns, to make up an $m \times n$ array or grid. The `sympy` package contains the `Matrix()` function to create these objects.
```
from sympy import Matrix
```
Expression (1) depicts a $4 \times 3$ matrix of integer values, which we can recreate using the `Matrix()` function. A matrix has a dimension, which lists, in order, the number of rows and the number of columns. The matrix in (1) has dimension $4 \times 3$.
$$\begin{bmatrix} 1 && 2 && 3 \\ 4 && 5 && 6 \\ 7 && 8 && 9 \\ 10 && 11 && 12 \end{bmatrix} \tag{1}$$
The values are entered as a list of lists, with each sublist containing a row of values.
```
matrix_1 = Matrix([[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[10, 11, 12]])
matrix_1
```
By using the `type()` function we can inspect the object type of which `matrix_1` is an instance.
```
type(matrix_1)
```
We note that it is a `MutableDenseMatrix`. Mutable refers to the fact that we can change the values in the matrix and dense refers to the fact that there are not an abundance of zeros in the data.
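As a quick illustration of this mutability, individual entries can be reassigned in place using the square-bracket indexing covered below:
```
mutable_example = Matrix([[1, 2], [3, 4]])
mutable_example[0, 0] = 100 # overwrite the entry in the first row and first column
mutable_example
```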
## Shape
The `.shape` attribute gives the number of rows and columns of a matrix.
```
matrix_1.shape
```
## Accessing values in rows and columns
The `.row()` and `.col()` methods give us access to the values in a matrix. Remember that Python indexing starts at $0$, such that the first row (in the mathematical representation) is the zeroth row in `python`.
```
matrix_1.row(0) # The first row
matrix_1.col(0) # The first column
```
The `-1` value gives us access to the last row or column.
```
matrix_1.row(-1)
```
Every element in a matrix is indexed, with a row and column number. In (2), we see a $3 \times 4$ matrix with the index of every element. Note we place both values together, without a comma separating them.
$$\begin{pmatrix} a_{11} && a_{12} && a_{13} && a_{14} \\ a_{21} && a_{22} && a_{23} && a_{24} \\ a_{31} && a_{32} && a_{33} && a_{34} \end{pmatrix} \tag{2}$$
So, if we wish to find the element in the first row and the first column in our `matrix_1` variable (which holds a `sympy` matrix object), we will use `0,0` and not `1,1`. The _indexing_ (using the _address_ of each element) is done by using square brackets.
```
# Reprinting matrix_1
matrix_1
matrix_1[0,0]
```
Let's look at the element in the second row and third column, which is $6$.
```
matrix_1[1,2]
```
We can also span a few rows and columns. Below, we index the first two rows. This is done by using the colon, `:`, symbol. The last number (after the colon) is excluded, such that `0:2` refers to the zeroth and first row indices.
```
matrix_1[0:2,0:4]
```
We can also specify the actual rows or columns by placing them in square brackets (creating a list). Below, we also use the colon symbol on its own. This denotes the selection of all values. So, we have the first and third rows (mathematically), or the zeroth and second `python` row indices, and all the columns.
```
matrix_1[[0,2],:]
```
## Deleting and inserting rows
Rows and columns can be inserted into or deleted from a matrix using the `.row_insert()`, `.col_insert()`, `.row_del()`, and `.col_del()` methods.
Let's have a look at where these insertions and deletions take place.
```
matrix_1.row_insert(1, Matrix([[10, 20, 30]])) # Using row 1
```
We note that the row was inserted as row 1.
If we call the matrix again, we note that the changes were not permanent.
```
matrix_1
```
We have to overwrite the Python variable to make the changes permanent, or alternatively create a new variable. (This is contrary to the current documentation.)
```
matrix_2 = matrix_1.row_insert(1, Matrix([[10, 20, 30]]))
matrix_2
matrix_3 = matrix_1.row_del(1) # Permanently deleting the second row
matrix_3 # A bug in the code currently returns a NoneType object
```
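The column counterparts work in the same way. Here is a small sketch using a fresh matrix, so that we don't disturb `matrix_1` any further:
```
matrix_4 = Matrix([[1, 2], [3, 4]])
matrix_5 = matrix_4.col_insert(1, Matrix([10, 20])) # insert a new column as column 1
matrix_5.col_del(1) # permanently delete that column again (operates in place)
matrix_5
```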
## Useful matrix constructors
There are a few special matrices that can be constructed using `sympy` functions. The zero matrix of size $n \times n$ can be created with the `zeros()` function and the $n \times n$ identity matrix (more on this later) can be created with the `eye()` function.
```
from sympy import zeros, eye
zeros(5) # A 5x5 matrix of all zeros
zeros(5)
eye(4) # A 4x4 identity matrix
```
The `diag()` function creates a diagonal matrix (which is square) with specified values along the main axis (top-left to bottom-right) and zeros everywhere else.
```
from sympy import diag
diag(1, 2, 3, 4, 5)
```
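A closely related helper is the `ones()` function, which fills a matrix of the requested shape with ones:
```
from sympy import ones
ones(2, 3) # a 2x3 matrix of all ones
```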
|
github_jupyter
|
from IPython.core.display import HTML
css_file = '../style.css'
HTML(open(css_file, 'r').read())
from sympy import init_printing
init_printing()
from sympy import Matrix
matrix_1 = Matrix([[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[10, 11, 12]])
matrix_1
type(matrix_1)
matrix_1.shape
matrix_1.row(0) # The first row
matrix_1.col(0) # The first column
matrix_1.row(-1)
# Reprinting matrix_1
matrix_1
matrix_1[0,0]
matrix_1[1,2]
matrix_1[0:2,0:4]
matrix_1[[0,2],:]
matrix_1.row_insert(1, Matrix([[10, 20, 30]])) # Using row 1
matrix_1
matrix_2 = matrix_1.row_insert(1, Matrix([[10, 20, 30]]))
matrix_2
matrix_3 = matrix_1.row_del(1) # Permanently deleting the second row
matrix_3 # A bug in the code currently returns a NoneType object
from sympy import zeros, eye
zeros(5) # A 5x5 matrix of all zeros
zeros(5)
eye(4) # A 4x4 identity matrix
from sympy import diag
diag(1, 2, 3, 4, 5)
| 0.49292 | 0.991946 |
# Python Crash Course
Hello! This is a quick intro to programming in Python to help you hit the ground running with the _12 Steps to Navier-Stokes_.
## Libraries
Python is a high-level open-source language. But the _Python world_ is inhabited by many packages or libraries that provide useful things like array operations, plotting functions, and much more. We can import libraries of functions to expand the capabilities of Python in our programs.
OK! We'll start by importing a few libraries to help us out.
```
#comments in python are denoted by the pound sign
import numpy as np #numpy is a library we're importing that provides a bunch of useful matrix operations akin to MATLAB
import matplotlib.pyplot as plt #matplotlib is 2D plotting library which we will use to plot our results
```
So what's all of this import-as business? We are importing one library named `numpy` and we are importing a sub-library of a big package called `matplotlib`. Because the functions we want to use belong to these libraries, we have to tell Python to look at those libraries when we call a particular function. The two lines above have created shortcuts to those libraries named `np` and `plt`, respectively. So if we want to use the numpy function `linspace`, for instance, we can call it by writing:
```
myarray = np.linspace(0, 5, 10)
myarray
```
If we don't preface the `linspace` function with `np`, Python will throw an error.
```
myarray = linspace(0, 5, 10)
```
Sometimes, you'll see people importing a whole library without assigning a shortcut for it (like `np` here for `numpy`). This saves typing but is sloppy and can get you in trouble. Best to get into good habits from the beginning!
To learn new functions available to you, visit the [NumPy Reference](http://docs.scipy.org/doc/numpy/reference/) page. If you are a proficient `Matlab` user, there is a wiki page that should prove helpful to you: [NumPy for Matlab Users](http://wiki.scipy.org/NumPy_for_Matlab_Users)
## Variables
Python doesn't require explicitly declared variable types like C and other languages.
```
a = 5 #a is an integer 5
b = 'five' #b is a string of the word 'five'
c = 5.0 #c is a floating point 5
type(a)
type(b)
type(c)
```
Pay special attention to assigning floating point values to variables or you may get values you do not expect in your programs.
```
14/a
14/c
```
If you divide an integer by an integer in Python 2, it will return an answer rounded down to the nearest integer (in Python 3, `/` between integers returns a float, and `//` performs the floor division). If you want a floating point answer, one of the numbers must be a float. Simply appending a decimal point will do the trick:
```
14./a
```
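If you are running Python 3, the floor-division operator `//` reproduces the integer result explicitly:
```
14//a
```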
## Whitespace in Python
Python uses indents and whitespace to group statements together. To write a short loop in C, you might use:
    for (i = 0; i < 5; i++){
        printf("Hi! \n");
    }
Python does not use curly braces like C, so the same program as above is written in Python as follows:
```
for i in range(5):
print("Hi \n")
```
If you have nested for-loops, there is a further indent for the inner loop.
```
for i in range(3):
for j in range(3):
print(i, j)
print("This statement is within the i-loop, but not the j-loop")
```
## Slicing Arrays
In NumPy, you can look at portions of arrays in the same way as in `Matlab`, with a few extra tricks thrown in. Let's take an array of values from 1 to 5.
```
myvals = np.array([1, 2, 3, 4, 5])
myvals
```
Python uses a **zero-based index**, so let's look at the first and last element in the array `myvals`
```
myvals[0], myvals[4]
```
There are 5 elements in the array `myvals`, but if we try to look at `myvals[5]`, Python will be unhappy, as `myvals[5]` is actually calling the non-existent 6th element of that array.
```
myvals[5]
```
Arrays can also be 'sliced', grabbing a range of values. Let's look at the first three elements
```
myvals[0:3]
```
Note here, the slice is inclusive on the front end and exclusive on the back, so the above command gives us the values of `myvals[0]`, `myvals[1]` and `myvals[2]`, but not `myvals[3]`.
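Negative indices count backwards from the end of the array, which is handy when you don't know (or don't care) how long the array is:
```
myvals[-1], myvals[-2:]
```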
## Assigning Array Variables
One of the strange little quirks/features in Python that often confuses people comes up when assigning and comparing arrays of values. Here is a quick example. Let's start by defining a 1-D array called $a$:
```
a = np.linspace(1,5,5)
a
```
OK, so we have an array $a$, with the values 1 through 5. I want to make a copy of that array, called $b$, so I'll try the following:
```
b = a
b
```
Great. So $a$ has the values 1 through 5 and now so does $b$. Now that I have a backup of $a$, I can change its values without worrying about losing data (or so I may think!).
```
a[2] = 17
a
```
Here, the 3rd element of $a$ has been changed to 17. Now let's check on $b$.
```
b
```
And that's how things go wrong! When you use a statement like $b = a$, rather than copying all the values of $a$ into a new array called $b$, Python just creates an alias (or a pointer) called $b$ and tells it to route us to $a$. So if we change a value in $a$ then $b$ will reflect that change (technically, this is called *assignment by reference*). If you want to make a true copy of the array, you have to tell Python to copy every element of $a$ into a new array. Let's call it $c$.
```
c[:] = a[:]
```
Unfortunately, if we want to make the true copy, the new array has to be defined first and has to have the correct number of elements. So it will be a two-step process. We can define an "empty" array that is the same size as $a$ by using the numpy function `empty_like`:
```
c = np.empty_like(a)
len(c) #shows us how long c is
c[:]=a[:]
c
```
Now, we can try again to change a value in $a$ and see if the changes are also seen in $c$.
```
a[2] = 3
a
c
```
OK, it worked! If the difference between `b = a` and `c[:] = a[:]` is unclear, you should read through this again. This issue will come back to haunt you otherwise.
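As a side note, NumPy arrays also provide a one-step way to make a true copy, which avoids having to pre-allocate the new array yourself:
```
d = a.copy() #creates a brand new array holding the same values as a
d
```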
## Learn More
There are a lot of resources online to learn more about using NumPy and other libraries. Just for kicks, here we use IPython's feature for embedding videos to point you to a short video on YouTube on using NumPy arrays.
```
from IPython.display import YouTubeVideo
# a short video about using NumPy arrays, from Enthought
YouTubeVideo('vWkb7VahaXQ')
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
|
github_jupyter
|
#comments in python are denoted by the pound sign
import numpy as np #numpy is a library we're importing that provides a bunch of useful matrix operations akin to MATLAB
import matplotlib.pyplot as plt #matplotlib is 2D plotting library which we will use to plot our results
myarray = np.linspace(0, 5, 10)
myarray
myarray = linspace(0, 5, 10)
a = 5 #a is an integer 5
b = 'five' #b is a string of the word 'five'
c = 5.0 #c is a floating point 5
type(a)
type(b)
type(c)
14/a
14/c
14./a
for i in range(5):
print("Hi \n")
for i in range(3):
for j in range(3):
print(i, j)
print("This statement is within the i-loop, but not the j-loop")
myvals = np.array([1, 2, 3, 4, 5])
myvals
myvals[0], myvals[4]
myvals[5]
myvals[0:3]
a = np.linspace(1,5,5)
a
b = a
b
a[2] = 17
a
b
c[:] = a[:]
c = np.empty_like(a)
len(c) #shows us how long c is
c[:]=a[:]
c
a[2] = 3
a
c
from IPython.display import YouTubeVideo
# a short video about using NumPy arrays, from Enthought
YouTubeVideo('vWkb7VahaXQ')
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
| 0.362856 | 0.987592 |
```
import sqlite3
import pickle
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, plot_confusion_matrix
import matplotlib.pyplot as plt
import spacy
# Create connection to SQLite Database:
conn = sqlite3.connect('../twitoff/twitoff.db')
def get_data(query, conn):
'''Function to get data from SQLite DB'''
cursor = conn.cursor()
result = cursor.execute(query).fetchall()
# Get columns from cursor object
columns = list(map(lambda x: x[0], cursor.description))
# Assign to a DataFrame
df = pd.DataFrame(data=result, columns=columns)
return df
sql = '''
SELECT
tweet.id,
tweet.tweet,
tweet.embedding,
user.username
FROM tweet
JOIN user ON tweet.user_id = user.id;
'''
df = get_data(sql, conn)
df['embedding_decoded'] = df.embedding.apply(lambda x: pickle.loads(x))
print(df.shape)
display(df.head())
df.username.value_counts()
user1_emb = df.embedding_decoded[df.username == 'badbanana']
user2_emb = df.embedding_decoded[df.username == 'mental_floss']
embeddings = pd.concat([user1_emb, user2_emb])
embeddings_df = pd.DataFrame(embeddings.to_list(),
columns=[f'dom{i}' for i in range(300)])
labels = np.concatenate([np.ones(len(user1_emb)),
np.zeros(len(user2_emb))])
print(embeddings_df.shape, labels.shape)
X_train, X_test, y_train, y_test = train_test_split(
embeddings_df, labels, test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape)
log_reg = LogisticRegression(max_iter=1000)
%timeit log_reg.fit(X_train, y_train)
y_pred = log_reg.predict(X_test)
print(classification_report(y_test, y_pred))
%matplotlib inline
fig, ax = plt.subplots(figsize=(10,10))
plot_confusion_matrix(log_reg, X_test, y_test,
normalize='true', cmap='Blues',
display_labels=['BadBanana', 'Mental_Floss'], ax=ax)
plt.title(f'LogReg Confusion Matrix (N={X_test.shape[0]})');
nlp = spacy.load('en_core_web_md', disable=['tagger', 'parser'])
def vectorize_tweet(nlp, tweet_text):
'''Returns spacy embeddings for passed in text'''
return list(nlp(tweet_text).vector)
new_emb = vectorize_tweet(nlp, '''For the final segment tonight,
you'll each have two minutes to throw
feces at each other.''')
new_emb[0:5]
log_reg.predict([new_emb])
pickle.dump(log_reg, open("../models/log_reg.pkl", "wb"))
unpickled_lr = pickle.load(open('../models/log_reg.pkl', 'rb'))
unpickled_lr.predict([new_emb])
```
|
github_jupyter
|
import sqlite3
import pickle
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, plot_confusion_matrix
import matplotlib.pyplot as plt
import spacy
# Create connection to SQLite Database:
conn = sqlite3.connect('../twitoff/twitoff.db')
def get_data(query, conn):
'''Function to get data from SQLite DB'''
cursor = conn.cursor()
result = cursor.execute(query).fetchall()
# Get columns from cursor object
columns = list(map(lambda x: x[0], cursor.description))
# Assign to a DataFrame
df = pd.DataFrame(data=result, columns=columns)
return df
sql = '''
SELECT
tweet.id,
tweet.tweet,
tweet.embedding,
user.username
FROM tweet
JOIN user ON tweet.user_id = user.id;
'''
df = get_data(sql, conn)
df['embedding_decoded'] = df.embedding.apply(lambda x: pickle.loads(x))
print(df.shape)
display(df.head())
df.username.value_counts()
user1_emb = df.embedding_decoded[df.username == 'badbanana']
user2_emb = df.embedding_decoded[df.username == 'mental_floss']
embeddings = pd.concat([user1_emb, user2_emb])
embeddings_df = pd.DataFrame(embeddings.to_list(),
columns=[f'dom{i}' for i in range(300)])
labels = np.concatenate([np.ones(len(user1_emb)),
np.zeros(len(user2_emb))])
print(embeddings_df.shape, labels.shape)
X_train, X_test, y_train, y_test = train_test_split(
embeddings_df, labels, test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape)
log_reg = LogisticRegression(max_iter=1000)
%timeit log_reg.fit(X_train, y_train)
y_pred = log_reg.predict(X_test)
print(classification_report(y_test, y_pred))
%matplotlib inline
fig, ax = plt.subplots(figsize=(10,10))
plot_confusion_matrix(log_reg, X_test, y_test,
normalize='true', cmap='Blues',
display_labels=['BadBanana', 'Mental_Floss'], ax=ax)
plt.title(f'LogReg Confusion Matrix (N={X_test.shape[0]})');
nlp = spacy.load('en_core_web_md', disable=['tagger', 'parser'])
def vectorize_tweet(nlp, tweet_text):
'''Returns spacy embeddings for passed in text'''
return list(nlp(tweet_text).vector)
new_emb = vectorize_tweet(nlp, '''For the final segment tonight,
you'll each have two minutes to throw
feces at each other.''')
new_emb[0:5]
log_reg.predict([new_emb])
pickle.dump(log_reg, open("../models/log_reg.pkl", "wb"))
unpickled_lr = pickle.load(open('../models/log_reg.pkl', 'rb'))
unpickled_lr.predict([new_emb])
| 0.41182 | 0.371963 |
# Quantile Estimation of Portuguese Hydro Power Seasonality
[](https://notebooks.gesis.org/binder/v2/gh/AyrtonB/Merit-Order-Effect/main?filepath=nbs%2Fug-01-hydro-seasonality.ipynb)
In this example we'll use power output data from Portuguese hydro-plants to demonstrate how the quantile LOWESS model can be used.
<br>
### Imports
```
import pandas as pd
import matplotlib.pyplot as plt
from moepy import lowess, eda
```
<br>
### Loading Data
We'll start by reading in the Portuguese hydro output data
```
df_portugal_hydro = pd.read_csv('../data/lowess_examples/portugese_hydro.csv')
df_portugal_hydro.index = pd.to_datetime(df_portugal_hydro['datetime'])
df_portugal_hydro = df_portugal_hydro.drop(columns='datetime')
df_portugal_hydro['day_of_the_year'] = df_portugal_hydro.index.dayofyear
df_portugal_hydro = df_portugal_hydro.resample('D').mean()
df_portugal_hydro = df_portugal_hydro.rename(columns={'power_MW': 'average_power_MW'})
df_portugal_hydro.head()
```
<br>
### Quantile LOWESS
We now just need to feed this data into our `quantile_model` wrapper
```
# Estimating the quantiles
df_quantiles = lowess.quantile_model(df_portugal_hydro['day_of_the_year'].values,
df_portugal_hydro['average_power_MW'].values,
frac=0.4, num_fits=40)
# Cleaning names and sorting for plotting
df_quantiles.columns = [f'p{int(col*100)}' for col in df_quantiles.columns]
df_quantiles = df_quantiles[df_quantiles.columns[::-1]]
df_quantiles.head()
```
<br>
We can then visualise the estimated quantile fits of the data
```
fig, ax = plt.subplots(dpi=150)
ax.scatter(df_portugal_hydro['day_of_the_year'], df_portugal_hydro['average_power_MW'], s=1, color='k', alpha=0.5)
df_quantiles.plot(cmap='viridis', legend=False, ax=ax)
eda.hide_spines(ax)
ax.legend(frameon=False, bbox_to_anchor=(1, 0.8))
ax.set_xlabel('Day of the Year')
ax.set_ylabel('Hydro Power Average (MW)')
ax.set_xlim(0, 365)
ax.set_ylim(0)
```
<br>
We can also ask questions like: "on what day of a typical year would the lowest power output be recorded?"
```
scenario = 'p50'
print(f'In a {scenario} year the lowest hydro power output will most likely fall on day {df_quantiles[scenario].idxmin()}')
```
<br>
We can also identify the periods when our predictions will have the greatest uncertainty
```
s_80pct_pred_intvl = df_quantiles['p90'] - df_quantiles['p10']
print(f'Day {s_80pct_pred_intvl.idxmax()} is most likely to have the greatest variation in hydro power output')
# Plotting
fig, ax = plt.subplots(dpi=150)
s_80pct_pred_intvl.plot(ax=ax)
eda.hide_spines(ax)
ax.set_xlabel('Day of the Year')
ax.set_ylabel('Hydro Power Output 80%\nPrediction Interval Size (MW)')
ax.set_xlim(0, 365)
ax.set_ylim(0)
```
|
github_jupyter
|
import pandas as pd
import matplotlib.pyplot as plt
from moepy import lowess, eda
df_portugal_hydro = pd.read_csv('../data/lowess_examples/portugese_hydro.csv')
df_portugal_hydro.index = pd.to_datetime(df_portugal_hydro['datetime'])
df_portugal_hydro = df_portugal_hydro.drop(columns='datetime')
df_portugal_hydro['day_of_the_year'] = df_portugal_hydro.index.dayofyear
df_portugal_hydro = df_portugal_hydro.resample('D').mean()
df_portugal_hydro = df_portugal_hydro.rename(columns={'power_MW': 'average_power_MW'})
df_portugal_hydro.head()
# Estimating the quantiles
df_quantiles = lowess.quantile_model(df_portugal_hydro['day_of_the_year'].values,
df_portugal_hydro['average_power_MW'].values,
frac=0.4, num_fits=40)
# Cleaning names and sorting for plotting
df_quantiles.columns = [f'p{int(col*100)}' for col in df_quantiles.columns]
df_quantiles = df_quantiles[df_quantiles.columns[::-1]]
df_quantiles.head()
fig, ax = plt.subplots(dpi=150)
ax.scatter(df_portugal_hydro['day_of_the_year'], df_portugal_hydro['average_power_MW'], s=1, color='k', alpha=0.5)
df_quantiles.plot(cmap='viridis', legend=False, ax=ax)
eda.hide_spines(ax)
ax.legend(frameon=False, bbox_to_anchor=(1, 0.8))
ax.set_xlabel('Day of the Year')
ax.set_ylabel('Hydro Power Average (MW)')
ax.set_xlim(0, 365)
ax.set_ylim(0)
scenario = 'p50'
print(f'In a {scenario} year the lowest hydro power output will most likely fall on day {df_quantiles[scenario].idxmin()}')
s_80pct_pred_intvl = df_quantiles['p90'] - df_quantiles['p10']
print(f'Day {s_80pct_pred_intvl.idxmax()} is most likely to have the greatest variation in hydro power output')
# Plotting
fig, ax = plt.subplots(dpi=150)
s_80pct_pred_intvl.plot(ax=ax)
eda.hide_spines(ax)
ax.set_xlabel('Day of the Year')
ax.set_ylabel('Hydro Power Output 80%\nPrediction Interval Size (MW)')
ax.set_xlim(0, 365)
ax.set_ylim(0)
| 0.529263 | 0.982507 |
# TensorTrade - Renderers and Plotly Visualization Chart
## Data Loading Function
```
# ipywidgets is required to run Plotly in Jupyter Notebook.
# Uncomment and run the following line to install it if required.
#!pip install ipywidgets
import pandas as pd
def load_csv(filename):
df = pd.read_csv('data/' + filename, skiprows=1)
df.drop(columns=['symbol', 'volume_btc'], inplace=True)
    # Fix timestamps from the form "2019-10-17 09-AM" to "2019-10-17 09-00-00 AM"
df['date'] = df['date'].str[:14] + '00-00 ' + df['date'].str[-2:]
# Convert the date column type from string to datetime for proper sorting.
df['date'] = pd.to_datetime(df['date'])
# Make sure historical prices are sorted chronologically, oldest first.
df.sort_values(by='date', ascending=True, inplace=True)
df.reset_index(drop=True, inplace=True)
# Format timestamps as you want them to appear on the chart buy/sell marks.
df['date'] = df['date'].dt.strftime('%Y-%m-%d %I:%M %p')
# PlotlyTradingChart expects 'datetime' as the timestamps column name.
df.rename(columns={'date': 'datetime'}, inplace=True)
return df
df = load_csv('Coinbase_BTCUSD_1h.csv')
df.head()
```
## Data Preparation
### Create the dataset features
```
import ta
from tensortrade.data import DataFeed, Module
dataset = ta.add_all_ta_features(df, 'open', 'high', 'low', 'close', 'volume', fillna=True)
dataset.head(3)
```
### Create Chart Price History Data
Note: It is recommended to create the chart data *after* creating and cleaning the dataset to ensure one-to-one mapping between the historical prices data and the dataset.
```
price_history = dataset[['datetime', 'open', 'high', 'low', 'close', 'volume']] # chart data
display(price_history.head(3))
dataset.drop(columns=['datetime', 'open', 'high', 'low', 'close', 'volume'], inplace=True)
```
## Setup Trading Environment
### Create Data Feeds
```
from tensortrade.exchanges import Exchange
from tensortrade.exchanges.services.execution.simulated import execute_order
from tensortrade.data import Stream, DataFeed, Module
from tensortrade.instruments import USD, BTC
from tensortrade.wallets import Wallet, Portfolio
coinbase = Exchange("coinbase", service=execute_order)(
Stream("USD-BTC", price_history['close'].tolist())
)
portfolio = Portfolio(USD, [
Wallet(coinbase, 10000 * USD),
Wallet(coinbase, 10 * BTC),
])
with Module("coinbase") as coinbase_ns:
nodes = [Stream(name, dataset[name].tolist()) for name in dataset.columns]
feed = DataFeed([coinbase_ns])
feed.next()
```
### Trading Environment Renderers
A renderer is a channel for the trading environment to output its current state. One or more renderers can be attached to the environment at the same time. For example, you can let the environment draw a chart and log to a file at the same time.
Notice that while all renderers can technically be used together, you need to select the best combination to avoid undesired results. For example, PlotlyTradingChart can work well with FileLogger but may not display well with ScreenLogger.
Renderer can be set by name (string) or class, single or list. Available renderers are:
* `'screenlog'` or `ScreenLogger`: Shows results on the screen.
* `'filelog'` or `FileLogger`: Logs results to a file.
* `'plotly'` or `PlotlyTradingChart`: A trading chart based on Plotly.
#### Examples:
* renderers = 'screenlog' (default)
* renderers = ['screenlog', 'filelog']
* renderers = ScreenLogger()
* renderers = ['screenlog', FileLogger()]
* renderers = [FileLogger(filename='example.log')]
Renderers can also be created and configured first then attached to the environment as seen in a following example.
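Before the full examples below, here is a minimal sketch (reusing the `feed` and `portfolio` created above) of mixing a renderer specified by name with one passed as a configured instance; treat it as an illustration of the list above rather than a recommended setup.
```
from tensortrade.environments import TradingEnvironment
from tensortrade.environments.render import FileLogger

# One renderer given by name, one given as a configured instance.
env_example = TradingEnvironment(
    feed=feed,
    portfolio=portfolio,
    action_scheme='managed-risk',
    reward_scheme='risk-adjusted',
    window_size=20,
    renderers=['screenlog', FileLogger(filename='example.log')]
)
```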
### Trading Environment with a Single Renderer
```
from tensortrade.environments.render import ScreenLogger
from tensortrade.environments import TradingEnvironment
env = TradingEnvironment(
feed=feed,
portfolio=portfolio,
action_scheme='managed-risk',
reward_scheme='risk-adjusted',
window_size=20,
renderers = 'screenlog' # ScreenLogger used with default settings
)
from tensortrade.agents import DQNAgent
agent = DQNAgent(env)
agent.train(n_episodes=2, n_steps=200, render_interval=10)
```
### Environment with Multiple Renderers
Create PlotlyTradingChart and FileLogger renderers. Configuring renderers is optional as they can be used with their default settings.
```
from tensortrade.environments.render import PlotlyTradingChart
from tensortrade.environments.render import FileLogger
chart_renderer = PlotlyTradingChart(
display=True, # show the chart on screen (default)
height=800, # affects both displayed and saved file height. None for 100% height.
save_format='html', # save the chart to an HTML file
auto_open_html=True, # open the saved HTML chart in a new browser tab
)
file_logger = FileLogger(
filename='example.log', # omit or None for automatic file name
path='training_logs' # create a new directory if doesn't exist, None for no directory
)
```
### Create the Environment with Multiple Renderers
```
env = TradingEnvironment(
feed=feed,
portfolio=portfolio,
action_scheme='managed-risk',
reward_scheme='risk-adjusted',
window_size=20,
price_history=price_history,
renderers = [chart_renderer, file_logger]
)
```
## Setup and Train DQN Agent
The green and red arrows shown on the chart represent buy and sell trades respectively. The head of each arrow falls at the trade execution price.
```
from tensortrade.agents import DQNAgent
agent = DQNAgent(env)
# Set render_interval to None to render at episode ends only
agent.train(n_episodes=2, n_steps=200, render_interval=10)
```
## Direct Performance and Net Worth Plotting
Alternatively, the final performance and net worth can be displayed using pandas via Matplotlib.
```
%matplotlib inline
portfolio.performance.plot()
portfolio.performance.net_worth.plot()
```
# HTML table to Pandas Data Frame to Portal Item
Often we read informative articles that present data in a tabular form. If such data contained location information, it would be much more insightful if presented as a cartographic map. Thus this sample shows how Pandas can be used to extract data from a table within a web page (in this case, a Wikipedia article) and how it can be then brought into the GIS for further analysis and visualization.
**Note**: to run this sample, you need a few extra libraries in your conda environment. If you don't have the libraries, install them by running the following commands from cmd.exe or your shell
```
conda install lxml
conda install html5lib
conda install beautifulsoup4
conda install matplotlib
```
```
import pandas as pd
```
Let us read the Wikipedia article on [Estimated number of guns per capita by country](https://en.wikipedia.org/wiki/Number_of_guns_per_capita_by_country) as a pandas data frame object
```
df = pd.read_html("https://en.wikipedia.org/wiki/Number_of_guns_per_capita_by_country")[0]
```
Let us process the table by dropping some unnecessary columns
```
df.columns = df.iloc[0]
df = df.reindex(df.index.drop(0))
df = df.reindex(df.index.drop(1))
df = df.drop(df.columns[0], axis=1)
df.head()
```
If you notice, the `Estimate of civilian firearms per 100 persons` value for the United States is not a proper number. We can correct it as below
```
df.iloc[0,1] = 120.5
df.dtypes
```
However, we cannot hand-correct every bad entry if the table is large. Further, we need the `Estimate of civilian firearms per 100 persons` column in numeric format. Hence, let us convert it and, while doing so, turn incorrect values into `NaN`, which stands for Not a Number
```
converted_column = pd.to_numeric(df["Estimate of civilian firearms per 100 persons"], errors = 'coerce')
df['Estimate of civilian firearms per 100 persons'] = converted_column
df.head()
```
## Plot as a map
Let us connect to our GIS to geocode this data and present it as a map
```
from arcgis.gis import GIS
import json
gis = GIS("https://www.arcgis.com", "arcgis_python", "P@ssword123")
```
The table uses the `Country (or dependent territory, subnational area, etc.)` column to identify each country; after import, its sanitized column name becomes `Country__or_dependent_territory__subnational_area__etc__`, so the mapping to the `CountryCode` field is specified as below:
```
fc = gis.content.import_data(df, {"CountryCode":"Country__or_dependent_territory__subnational_area__etc__"})
map1 = gis.map('UK')
map1
```
Let us use smart mapping to render the points with varying sizes representing the number of firearms per 100 residents
```
map1.add_layer(fc, {"renderer":"ClassedSizeRenderer",
"field_name": "Estimate_of_civilian_firearms_per_100_persons"})
```
Let us publish this layer as a **feature collection** item in our GIS
```
item_properties = {
"title": "Worldwide Firearms ownership",
"tags" : "guns,violence",
"snippet": " GSR Worldwide firearms ownership",
"description": "test description",
"text": json.dumps({"featureCollection": {"layers": [dict(fc.layer)]}}),
"type": "Feature Collection",
"typeKeywords": "Data, Feature Collection, Singlelayer",
"extent" : "-102.5272,-41.7886,172.5967,64.984"
}
item = gis.content.add(item_properties)
```
Let us search for this item
```
search_result = gis.content.search("Worldwide Firearms ownership")
search_result[0]
```
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
### Homework part I: Prohibited Comment Classification (3 points)

__In this notebook__ you will build an algorithm that classifies social media comments into normal or toxic.
Like in many real-world cases, you only have a small (10^3) dataset of hand-labeled examples to work with. We'll tackle this problem using both classical nlp methods and embedding-based approach.
```
import pandas as pd
data = pd.read_csv("comments.tsv", sep='\t')
texts = data['comment_text'].values
target = data['should_ban'].values
data[50::200]
from sklearn.model_selection import train_test_split
texts_train, texts_test, y_train, y_test = train_test_split(texts, target, test_size=0.5, random_state=42)
```
__Note:__ it is generally a good idea to split data into train/test before anything is done to them.
It guards you against possible data leakage in the preprocessing stage. For example, should you decide to select words present in obscene tweets as features, you should only count those words over the training set. Otherwise your algorithm can cheat during evaluation.
### Preprocessing and tokenization
Comments contain raw text with punctuation, upper/lowercase letters and even newline symbols.
To simplify all further steps, we'll split text into space-separated tokens using one of nltk tokenizers.
```
from nltk.tokenize import TweetTokenizer
tokenizer = TweetTokenizer()
preprocess = lambda text: ' '.join(tokenizer.tokenize(text.lower()))
text = 'How to be a grown-up at work: replace "fuck you" with "Ok, great!".'
print("before:", text,)
print("after:", preprocess(text),)
# task: preprocess each comment in train and test
texts_train = [preprocess(text) for text in texts_train]
texts_test = [preprocess(text) for text in texts_test]
assert texts_train[5] == 'who cares anymore . they attack with impunity .'
assert texts_test[89] == 'hey todds ! quick q ? why are you so gay'
assert len(texts_test) == len(y_test)
```
### Solving it: bag of words

One traditional approach to such problem is to use bag of words features:
1. build a vocabulary of frequent words (use train data only)
2. for each training sample, count the number of times a word occurs in it (for each word in vocabulary).
3. consider this count a feature for some classifier
__Note:__ in practice, you can compute such features using sklearn. Please don't do that in the current assignment, though.
* `from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer`
```
# task: find up to k most frequent tokens in texts_train,
# sort them by number of occurences (highest first)
from collections import Counter
import itertools
k = 10000
all_words = [word for tweet in map(lambda x: x.split(), texts_train) for word in tweet]
word_counts = Counter(all_words)
bow_vocabulary = list(dict(word_counts.most_common(k)).keys())
print('example features:', sorted(bow_vocabulary)[::100])
def text_to_bow(text):
""" convert text string to an array of token counts. Use bow_vocabulary. """
text_words = Counter(text.split())
text_vector = np.zeros(len(bow_vocabulary))
for idx, word in enumerate(bow_vocabulary):
text_vector[idx] = text_words[word]
return text_vector
X_train_bow = np.stack(list(map(text_to_bow, texts_train)))
X_test_bow = np.stack(list(map(text_to_bow, texts_test)))
k_max = len(set(' '.join(texts_train).split()))
assert X_train_bow.shape == (len(texts_train), min(k, k_max))
assert X_test_bow.shape == (len(texts_test), min(k, k_max))
assert np.all(X_train_bow[5:10].sum(-1) == np.array([len(s.split()) for s in texts_train[5:10]]))
assert len(bow_vocabulary) <= min(k, k_max)
assert X_train_bow[6, bow_vocabulary.index('.')] == texts_train[6].split().count('.')
```
Machine learning stuff: fit, predict, evaluate. You know the drill.
```
from sklearn.linear_model import LogisticRegression
bow_model = LogisticRegression().fit(X_train_bow, y_train)
from sklearn.metrics import roc_auc_score, roc_curve
for name, X, y, model in [
('train', X_train_bow, y_train, bow_model),
('test ', X_test_bow, y_test, bow_model)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
```
### Task: implement TF-IDF features
Not all words are equally useful. One can prioritize rare words and downscale words like "and"/"or" by using __tf-idf features__. This abbreviation stands for __term frequency / inverse document frequency__ and means exactly that:
$$ feature_i = Count(word_i \in x) \times \log { \frac{N}{Count(word_i \in D) + \alpha} } $$
where $x$ is a single text, $D$ is your dataset (a collection of texts), $N$ is the total number of documents, and $\alpha$ is a smoothing hyperparameter (typically 1).
And $Count(word_i \in D)$ is the number of documents where $word_i$ appears.
It may also be a good idea to normalize each data sample after computing tf-idf features.
__Your task:__ implement tf-idf features, train a model and evaluate ROC curve. Compare it with basic BagOfWords model from above.
Please don't use sklearn/nltk builtin tf-idf vectorizers in your solution :) You can still use 'em for debugging though.
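For debugging only, a rough cross-check against sklearn is sketched below. Note that sklearn's `TfidfVectorizer` uses a slightly different idf definition and smoothing than the formula above, so expect similar, not identical, feature values; the sketch assumes the preprocessed `texts_train` list from the earlier cells.
```
# Debugging-only comparison; not part of the assignment solution.
from sklearn.feature_extraction.text import TfidfVectorizer

# token_pattern=r"\S+" mimics the whitespace tokenization used above.
tfidf = TfidfVectorizer(max_features=10000, token_pattern=r"\S+")
X_train_tf_idf_sklearn = tfidf.fit_transform(texts_train).toarray()
print(X_train_tf_idf_sklearn.shape)
```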
```
from collections import defaultdict
from functools import partial
def dataset_counter(dataset, top_k=k):
word_to_tweet = defaultdict(list)
for tweet in map(lambda x: x.split(), dataset):
for word in tweet:
word_to_tweet[word].append(tweet)
dataset_counter = defaultdict(int)
for word in word_to_tweet:
dataset_counter[word] = len(word_to_tweet[word])
return dataset_counter
def tf_idf(text, dataset_counter, num_tweets, alpha=1):
text_counter = Counter(text.split())
text_vector = np.zeros(len(dataset_counter))
for idx, (word, dataset_count) in enumerate(dataset_counter.items()):
tf = text_counter[word]
log_idf = np.log(num_tweets / (dataset_counter[word] + alpha))
text_vector[idx] = tf * log_idf
return text_vector / (np.linalg.norm(text_vector) + 1e-8)
# precalculate number of occurences in tweets for each word in train dataset
train_counter = dataset_counter(texts_train)
len_train = len(texts_train)
# using precalculated Count(w \in D) calculate TF-IDF vectors for train and test
X_train_tf_idf = np.stack(list(map(partial(tf_idf, dataset_counter=train_counter, num_tweets=len_train), texts_train)))
X_test_tf_idf = np.stack(list(map(partial(tf_idf, dataset_counter=train_counter, num_tweets=len_train), texts_test)))
tf_idf_model = LogisticRegression().fit(X_train_tf_idf, y_train)
for name, X, y, model in [
('train', X_train_tf_idf, y_train, tf_idf_model),
('test ', X_test_tf_idf, y_test, tf_idf_model)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
```
TF-IDF model is better than Bag of Words.
### Solving it better: word vectors
Let's try another approach: instead of counting per-word frequencies, we shall map all words to pre-trained word vectors and average over them to get text features.
This should give us two key advantages: (1) we now have 10^2 features instead of 10^4 and (2) our model can generalize to words that are not in the training dataset.
We begin with a standard approach with pre-trained word vectors. However, you may also try
* training embeddings from scratch on relevant (unlabeled) data
* multiplying word vectors by inverse word frequency in the dataset (like tf-idf); a small sketch of this idea is given right after the word-vector code below
* concatenating several embeddings
* call `gensim.downloader.info()['models'].keys()` to get a list of available models
* clusterizing words by their word-vectors and try bag of cluster_ids
__Note:__ loading pre-trained model may take a while. It's a perfect opportunity to refill your cup of tea/coffee and grab some extra cookies. Or binge-watch some tv series if you're slow on internet connection
```
import gensim.downloader
embeddings = gensim.downloader.load("fasttext-wiki-news-subwords-300")
# If you're low on RAM or download speed, use "glove-wiki-gigaword-100" instead. Ignore all further asserts.
def vectorize_sum(comment):
"""
implement a function that converts preprocessed comment to a sum of token vectors
"""
embedding_dim = embeddings.wv.vectors.shape[1]
features = np.zeros([embedding_dim], dtype='float32')
for word in comment.split():
try:
features += embeddings.get_vector(word)
except:
pass
return features
assert np.allclose(
vectorize_sum("who cares anymore . they attack with impunity .")[::70],
np.array([ 0.0108616 , 0.0261663 , 0.13855131, -0.18510573, -0.46380025])
)
X_train_wv = np.stack([vectorize_sum(text) for text in texts_train])
X_test_wv = np.stack([vectorize_sum(text) for text in texts_test])
wv_model = LogisticRegression().fit(X_train_wv, y_train)
for name, X, y, model in [
('bow train', X_train_bow, y_train, bow_model),
('bow test ', X_test_bow, y_test, bow_model),
('vec train', X_train_wv, y_train, wv_model),
('vec test ', X_test_wv, y_test, wv_model),
('tf-idf train', X_train_tf_idf, y_train, tf_idf_model),
('tf-idf test', X_test_tf_idf, y_test, tf_idf_model)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
assert roc_auc_score(y_test, wv_model.predict_proba(X_test_wv)[:, 1]) > 0.92, "something's wrong with your features"
```
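As a small extension (my own sketch, not part of the assignment), the "multiply word vectors by inverse word frequency" idea from the list above can reuse the `train_counter` and `len_train` variables from the tf-idf section together with the `embeddings` model loaded above:
```
# Sketch: idf-weighted sum of word vectors (reuses embeddings, train_counter, len_train).
def vectorize_idf_weighted(comment, alpha=1):
    embedding_dim = embeddings.wv.vectors.shape[1]
    features = np.zeros([embedding_dim], dtype='float32')
    for word in comment.split():
        try:
            idf = np.log(len_train / (train_counter[word] + alpha))
            features += idf * embeddings.get_vector(word)
        except KeyError:
            pass
    return features

X_train_wv_idf = np.stack([vectorize_idf_weighted(text) for text in texts_train])
X_test_wv_idf = np.stack([vectorize_idf_weighted(text) for text in texts_test])
```
One could then fit a `LogisticRegression` on these features exactly as above to check whether the weighting helps.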
# Results
As you can see, the TF-IDF model performs better than bag of words but worse than the fastText vector model.
This is probably because the fastText embeddings were pre-trained on a large Wikipedia corpus, while the bag-of-words and TF-IDF models relied only on the available training data.
If everything went right, you've just managed to reduce misclassification rate by a factor of two.
This trick is very useful when you're dealing with small datasets. However, if you have hundreds of thousands of samples, there's a whole different range of methods for that. We'll get there in the second part.
# Statistical Rethinking Ch 2 Bayesian Modeling
> Implementing and adding extra materials and contents from statistical rethinking course/book
- toc: true
- badges: true
- comments: true
- categories: [jupyter, bayesian, stats_rethinking]
- metadata_key1: bayesian
- metadata_key2: jupyter
# About
## Summary
- [Statistical rethinking](https://xcelab.net/rm/statistical-rethinking/) is an excellent book/course about Bayesian learning for beginners.
- The contents here are based on the book and the accompanying lectures, summarizing and adding bits and pieces to facilitate understanding.
- I would recommend:
1. Read the chapter
2. Watch the youtube video for the corresponding chapter
3. Implement and run the code
> Important: I would highly recommend spending some time on Chapter 2 and 3 as they are the foundations
## Corresponding Youtube Video
> youtube: https://www.youtube.com/watch?v=XoVtOAN0htU&list=PLDcUM9US4XdNM4Edgs7weiyIguLSToZRI&index=2
# Chapter 2
## Overview
- Chapter 2 covers the basics of Bayesian inference. It's pretty much summarized in one sentence as indicated below. I will not cover this part so if you are not sure what the following means, check the textbook and the youtube video.
> Important: Bayesian inference is really just counting and comparing of possibilities.
- This chapter also covers Bayesian modeling. This is the fundamental part that I'm going to cover and spend most time on.
## Bayesian Modeling
- Problem statement:
1. Randomly sample a point from a world map
2. Is the point land(=L) or water(=W)? There's nothing else. It has to be either L or W. So it's binomial.
3. Can we figure out a good estimate of the world from these samples?
    - For example, given the collected samples, what proportion of the world is water? 30%? 60%?
```
#hide
import jax.numpy as np
import numpyro
import numpyro.distributions as dist
import seaborn as sns
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [10, 8]
sns.set_style("darkgrid")
sns.set_context("talk")
#hide
def calculate_posterior(total_count, water_count):
# Don't worry about these equations for now as
# I'll explain these in detail later
probability_grid = np.linspace(start=0, stop=1, num=100)
prior = np.ones(100)
likelihood = np.exp(
dist.Binomial(
total_count=total_count,
probs=probability_grid
).log_prob(water_count)
)
unstandardized_posterior = likelihood * prior
posterior = unstandardized_posterior / np.sum(unstandardized_posterior)
return probability_grid, posterior
```
### Sample N = 1
Let's start from the beginning. You randomly sampled one point from the world map. It was water (=W). To understand the world probabilistically, we can plot the probability of water against its plausibility, as below.
Don't worry about understanding the code now, as I'll go through it later.
```
#collapse-hide
# calculate posterior given 1 randomly sampled data
p_grid, posterior = calculate_posterior(total_count=1, water_count=1)
# calculate the previous posterior
prev_posterior = np.repeat(np.mean(posterior), 100)
# plot the previous posterior
plt.plot(p_grid, prev_posterior, "--", color='k')
# plot the current posterior
plt.plot(p_grid, posterior, "-", color="k")
# some other stuff
plt.legend(['Previous plausibility', 'Current plausibility'])
plt.xlabel("Proportion of water (probability)")
plt.yticks([])
plt.ylabel("Plausibility")
plt.title("Given one random sample (N = 1), how much portion of the world is water?");
```
Let me explain this figure a bit. On the x-axis you have the proportion of water, i.e. the probability of water. On the y-axis you have the plausibility, which tells you how likely each proportion is. Check out the previous plausibility, shown as a dashed line. "Previous" means before randomly sampling anything: you have no idea what the plausibility of water is with respect to (w.r.t.) the proportion of water. In other words, every proportion on the x-axis is equally plausible before sampling anything, and that's why it's a straight line.
Next, look at the current plausibility. Things have changed here. Now you have one sample, which was water (=W), out of N = 1 total samples. Notice that the plausibility at the proportion of water = 0 is now zero. Why? Because you observed water. Therefore, there's no way the proportion of water is zero. Don't worry if you don't get it yet; let's look at the next example.
### Sample N = 2
Previously, we sampled water(=W). With one sample, it's impossible to understand the underlying probability distribution of the problem. Let's keep sampling to understand more about the world. This time, you sampled land(=L). So now you have W, L, two samples.
```
#collapse-hide
# get previous posterior
prev_posterior = posterior
# calculate posterior given 1 randomly sampled data
p_grid, posterior = calculate_posterior(total_count=2, water_count=1)
# plot the previous posterior
plt.plot(p_grid, prev_posterior, "--", color='k')
# plot the current posterior
plt.plot(p_grid, posterior, "-", color="k")
# some other stuff
plt.legend(['Previous plausibility', 'Current plausibility'])
plt.xlabel("Proportion of water (probability)")
plt.yticks([])
plt.ylabel("Plausibility")
plt.title("Given one random sample (N = 2), how much portion of the world is water?");
```
It's becoming more interesting. On the second sample you observed land (=L), and you had already observed water (=W) in the first sample. Now you know that the plausibility at the proportion of water = 1 is zero too. Why? Think of it this way: if the world were entirely covered with water and had no land, then sure, a proportion of water = 1 would be plausible. But in that case, you would never observe land. Since you observed land in the second sample, this dystopian scenario of a world covered entirely with water is no longer valid. Thank god!
Instead, what's most plausible is a proportion of water = 0.5. This is because out of 2 samples, you have 1 water and 1 land. It's like flipping a coin: without knowing anything about the coin, if you were told you got 1 head and 1 tail, how would you estimate the probability of the next flip being heads? Probably 50:50, right?
### Sample N = 3 to 9
From here on, you should be able to pretty much understand what the plots indicate. Take some time to see if the plots below make sense to you given the sequence of samples: W, L, W, W, W, L, W, L, W
So now you have 9 samples in total, of which 6 are water and 3 are land.
```
#collapse-hide
rest_of_samples = ['W', 'L', 'W', 'W', 'W', 'L', 'W', 'L', 'W']
prev_total = 0
prev_water = 0
# calculate the previous posterior
posterior = np.repeat(1/100, 100)
f, axes = plt.subplots(3, 3)
for i in range(9):
s = rest_of_samples[i]
if i < 3:
x_idx = 0
y_idx = i
elif i >= 3 and i < 6:
x_idx = 1
y_idx = i - 3
else:
x_idx = 2
y_idx = i - 6
prev_total += 1
if s == 'W':
prev_water += 1
# get previous posterior
prev_posterior = posterior
# calculate posterior given 1 randomly sampled data
p_grid, posterior = calculate_posterior(total_count=prev_total, water_count=prev_water)
# plot the previous posterior
ax = axes[x_idx, y_idx]
ax.plot(p_grid, prev_posterior, "--", color='k')
# plot the current posterior
ax.plot(p_grid, posterior, "-", color="k")
ax.set_yticklabels([])
# some other stuff
if i == len(rest_of_samples) - 1:
ax.legend(['Previous plausibility', 'Current plausibility'], bbox_to_anchor=(1.05, 1), loc=2,)
plt.xlabel("Proportion of water (probability)")
plt.ylabel("Plausibility")
else:
ax.set_xticklabels([])
title_txt = "N = " + str(prev_total) + ", " + s
ax.set_title(title_txt)
```
You can now see that every time we get a sample (either L or W), the probability distribution gets updated to represent the underlying distribution of the world as accurately as it can (I'll talk about exactly how to calculate these in a moment). If you look at the last plot, N = 9, you see the peak somewhere around 0.65. This should be intuitive because you got 6 W out of 9 total samples, which is 6/9 = 0.666..., the most likely representation of the world given only 9 samples.
## Breaking down Bayesian modeling
By now, you might be wondering. "Yeah, I think I got the concept. It's a toy problem. Easy! But how exactly can we compute and plot the graph?"
This is the section with equations.
You might already know the Bayes rule:
$$\underbrace{P(A|B)}_\text{Posterior}
=\frac{\overbrace{P(B|A)}^\text{Likelihood} \times \overbrace{P(A)}^\text{Prior}}{\underbrace{P(B)}_\text{Marginal}}$$
In the figures above, all I was doing is the following:
1. Calculate the likelihood given the new sample
2. Calculate the prior (or get the previous posterior)
3. Multiply the two
4. Divide it by the marginal to standardize so it becomes posterior probability
### Likelihood
First, let's get started with likelihood. Likelihood is described as
> Important: a mathematical formula that specifies the plausibility of the data.
In the toy example, given the assumption that each random sample is independent of the others and that the probability of sampling water does not change (the world doesn't change), we can treat the problem as a binomial distribution.
$$Pr(x|n,p) = \frac{n!}{(n-x)!x!} p^x (1-p)^{n-x}$$
where
- $x$: the count of an event (e.g. water)
- $n$: total count of events
- $p$: probability of getting $x$
```
# Define the distribution with parameters
d = dist.Binomial(total_count=9, probs=0.5)
# Say we drew 6 samples out of 9 total samples
x = 6
# Evaluate log probability of x
p = d.log_prob(x)
# Convert back to a normal probability by taking the exponential
likelihood = np.exp(p)
# Now all together
likelihood = np.exp(dist.Binomial(total_count=9, probs=0.5).log_prob(6))
print(likelihood)
```
Note that we are just setting the probability p to 0.5 here. This corresponds to the x-axis (proportion of water) in the previous plots. Imagine calculating the likelihood at the point on the x-axis where p = 0.5. The value in `likelihood` above is the likelihood of observing 6 water samples out of 9 total samples if the probability of water is 0.5.
### Prior
Now the prior. This is where you can encode information about the prior distribution, usually determined by the assumptions you have about the data. I mentioned this earlier for the N = 1 plot, where the dashed line was flat. This was because, before sampling anything, there is no information about the world, so every probability of water is equally plausible, hence the flat line.
We can model this flat line as a uniform distribution:
$$Pr(p) = \frac{1}{b - a}$$
where
- $a$: minimum probability
- $b$: maximum probability
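As a quick cross-check (my own sketch, not from the book), the same flat prior can be written with numpyro's `Uniform` distribution; evaluated on the probability grid it is constant, exactly like the `np.ones(...)` prior used in the grid-approximation code below.
```
import jax.numpy as np
import numpyro.distributions as dist

probability_grid = np.linspace(start=0, stop=1, num=100)
# Pr(p) = 1 / (b - a) with a = 0 and b = 1, i.e. a constant 1 over the grid
prior = np.exp(dist.Uniform(low=0.0, high=1.0).log_prob(probability_grid))
print(prior[:5])  # all ones, same as np.ones(100)
```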
### Marginal
This part is easy: it is just the normalization factor, as you can see in the Bayes rule I already showed you.
In other words, the following Bayes rule can be written in a different format
$$\underbrace{P(A|B)}_\text{Posterior}
=\frac{\overbrace{P(B|A)}^\text{Likelihood} \times \overbrace{P(A)}^\text{Prior}}{\underbrace{P(B)}_\text{Marginal}}$$
When thinking of the marginal as a normalization factor, it's written like this:
$$\underbrace{P(A|B)}_\text{Posterior}
=\frac{\overbrace{P(B|A)}^\text{Likelihood} \times \overbrace{P(A)}^\text{Prior}}{\underbrace{\sum P(B|A)P(A)}_\text{Marginal}}$$
Now you have all the components to calculate the posterior. Let's move on to 3 ways of calculating the posteriors using the above equations.
## Three approximation techniques
It is often difficult to apply an analytical solution directly to a real-world problem. For example, it's usually not feasible to derive the continuous posterior distribution in closed form. Rather, we prefer a discrete approximation that we can handle computationally. One way to do that is to use approximation techniques that calculate the posterior using the Bayes rule.
1. Grid approximation
2. Quadratic approximation
3. Markov Chain Monte Carlo (MCMC)
### 1) Grid approximation
This is what I was doing above, and it's the simplest solution to the problem. We just define an evenly spaced grid and calculate the posterior at each point.
```
# Define the variables
num_points = 20 # number of points in the grid
total_count = 9 # total number of samples
water_count = 6 # number of water sample collected
# Define the grid (from 0 to 1, 20 points)
probability_grid = np.linspace(start=0, stop=1, num=num_points)
# Define the prior (Using uniform distribution, 1 all the way)
prior = np.ones(num_points)
# Calculate the likelihood
likelihood = np.exp(
dist.Binomial(
total_count=total_count,
probs=probability_grid
).log_prob(water_count)
)
# Now we can calculate the posterior but unstandardized
unstandardized_posterior = likelihood * prior
# Divide it by the Marginal which is just the sum
posterior = unstandardized_posterior / np.sum(unstandardized_posterior)
#collapse-hide
plt.plot(probability_grid, posterior, "-o")
# some other stuff
plt.xlabel("Probability of water")
plt.ylabel("Posterior probability")
title_txt = "Grid approximation: " + str(num_points) + " points"
plt.title(title_txt);
# Define the variables
num_points = 5 # number of points in the grid
total_count = 9 # total number of samples
water_count = 6 # number of water sample collected
# Define the grid (from 0 to 1, 20 points)
probability_grid = np.linspace(start=0, stop=1, num=num_points)
# Define the prior (Using uniform distribution, 1 all the way)
prior = np.ones(num_points)
# Calculate the likelihood
likelihood = np.exp(
dist.Binomial(
total_count=total_count,
probs=probability_grid
).log_prob(water_count)
)
# Now we can calculate the posterior but unstandardized
unstandardized_posterior = likelihood * prior
# Divide it by the Marginal which is just the sum
posterior = unstandardized_posterior / np.sum(unstandardized_posterior)
#collapse-hide
plt.plot(probability_grid, posterior, "-o")
# some other stuff
plt.xlabel("Probability of water")
plt.ylabel("Posterior probability")
title_txt = "Grid approximation: " + str(num_points) + " points"
plt.title(title_txt);
```
You could also play with the number of points, but for this simple toy example we don't need many.
The problem with grid approximation is that it does not scale well. As the problem becomes more complicated and there are more parameters to deal with, the number of grid points you need explodes.
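To get a rough sense of that scaling problem (a back-of-envelope sketch of my own), consider how the number of grid cells grows if we keep 100 points per parameter:
```
# number of grid evaluations needed with 100 points per parameter
for k in [1, 2, 5, 10]:
    print(f"{k} parameter(s) -> {100 ** k:,} grid points")
```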
### 2) Quadratic approximation
You might have heard of the alternative term "Gaussian approximation". Quadratic approximation approximates the posterior near its peak by fitting a Gaussian. You may think that is a big assumption to make, but it turns out to be pretty robust and a good approximation for many real-world problems.
Steps are pretty simple:
1. Find the posterior mode: this is equivalent to finding the peak of the posterior.
2. Estimate the curvature near the peak
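Below is a minimal sketch of my own (not the book's `quap` code) of these two steps for the 6-water-out-of-9 example, using JAX autodiff to estimate the curvature; the flat prior only contributes a constant, so it is dropped from the log posterior.
```
import jax
import jax.numpy as np
import numpyro.distributions as dist

w, n = 6.0, 9.0  # 6 water samples out of 9 tosses

def log_posterior(p):
    # log likelihood of the data; the flat prior only adds a constant
    return dist.Binomial(total_count=n, probs=p).log_prob(w)

# 1. find the posterior mode (the peak); for this model it is simply w / n
p_mode = w / n

# 2. estimate the curvature near the peak via the second derivative
curvature = jax.grad(jax.grad(log_posterior))(p_mode)
sigma = 1.0 / np.sqrt(-curvature)

print(f"Quadratic approximation: Normal(mean={p_mode:.3f}, std={float(sigma):.3f})")
# roughly Normal(0.667, 0.157), matching the peak of the grid posterior above
```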
### 3) Markov Chain Monte Carlo
We'll cover this in more detail later on.
# CIFAR-10 Dataset
https://www.cs.toronto.edu/~kriz/cifar.html
```
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import cifar10
import os, glob
import PIL
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from tqdm import tqdm
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
import numpy as np
model = VGG16(weights='imagenet', include_top=False)
model.summary()
img_path = 'data/cat.jpg'
img = image.load_img(img_path, target_size=(32, 32, 3))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
features = model.predict(x).flatten()
# print(features)
print(features.shape)
(X_train, _), (X_test, _) = cifar10.load_data()
cifar_images = np.concatenate((X_train, X_test), axis=0)
print(len(cifar_images))
fig = plt.figure(figsize=(10,10))
for i in range(0,25):
ax = fig.add_subplot(5, 5, i+1, xticks=[], yticks=[])
ax.imshow(cifar_images[i,:].reshape(32, 32, 3))
plt.show()
# get features for all of CIFAR-10
features_list = []
for image_index in tqdm(range(len(cifar_images))):
cifar_image = cifar_images[image_index,...]
x = image.img_to_array(cifar_image)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
features = model.predict(x).flatten()
features_list.append(features)
# print(features_list[1])
features_array = np.array(features_list)
print(features_array.shape)
nearest_neighbor_model = NearestNeighbors(n_neighbors=10)
nearest_neighbor_model.fit(features_array)
def get_similiar_images(img_path, cnn_model, knn_model, image_dataset):
img = image.load_img(img_path, target_size=(32, 32))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
features = cnn_model.predict(x).flatten()
distances, indices = knn_model.kneighbors([features])
similiar_images = []
for s_i in indices[0]:
print(s_i)
similiar_images.append(image_dataset[s_i])
return similiar_images
def plot_similiar_images(query_path, images):
fig = plt.figure(figsize=(1, 1))
    query_image = image.load_img(query_path, target_size=(32, 32))
plt.title("QUERY IMAGE")
plt.imshow(query_image)
fig = plt.figure(figsize=(3,3))
plt.title("Similiar Images")
for i in range(0, 9):
sim_image = images[i]
ax = fig.add_subplot(3, 3, i+1, xticks=[], yticks=[])
ax.imshow(sim_image)
plt.show()
img_path = 'data/cat.jpg'
similiar_images = get_similiar_images(img_path, model, nearest_neighbor_model, cifar_images)
plot_similiar_images(img_path, similiar_images)
img_path = 'data/bus.jpg'
similiar_images = get_similiar_images(img_path, model, nearest_neighbor_model, cifar_images)
plot_similiar_images(img_path, similiar_images)
```
```
%load_ext autoreload
%autoreload 2
import re
from os.path import join, basename, dirname
from glob import glob
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from natsort import natsorted
from tqdm import tqdm
import torch
from torchvision.io import read_image
from torchvision.utils import make_grid
from torchvision import transforms
import warnings
warnings.filterwarnings("ignore")
from experiment_utils import set_env, REPO_PATH, seed_everything
set_env()
from image_utils import denormalize, show_single_image, show_multiple_images
from cgn_framework.imagenet.dataloader import get_imagenet_dls
from cgn_framework.imagenet.models.classifier_ensemble import InvariantEnsemble
from cgn_framework.imagenet.models import CGN
from experiments.imagenet_utils import (
EnsembleGradCAM,
get_imagenet_mini_foldername_to_classname,
)
from experiments.gradio_demo import in_mini_folder_to_class
plt.rcParams.update({
"text.usetex": True,
"font.family": "serif",
"font.serif": ["Computer Modern Roman"],
})
```
### Load CGN model
```
seed_everything(0)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# load data
print("Loading data ...")
train_loader, val_loader, train_sampler = get_imagenet_dls(
"imagenet/data/in-mini", False, 64, 10,
)
ds_val = val_loader.dataset
ds_val = val_loader.dataset
df = pd.DataFrame(None, columns=["sample_index", "class_index", "class_folder", "class_name"])
df["sample_index"] = list(range(len(ds_val.labels)))
df["class_index"] = ds_val.labels.astype(int)
df["class_folder"] = df["class_index"].apply(lambda x: ds_val.classes[x])
df["class_name"] = df["class_folder"].replace(in_mini_folder_to_class)
# load CGN model
print("Loading CGN model ...")
cgn = CGN(batch_sz=1, pretrained=False)
weights = torch.load(join(REPO_PATH, 'cgn_framework/imagenet/weights/cgn.pth'), map_location='cpu')
cgn.load_state_dict(weights)
cgn.eval().to(device);
def generate(ys):
clf_transforms=ds_val.T_ims
with torch.no_grad():
x_gt, mask, premask, foreground, background, bg_mask = cgn(ys=ys)
x_gen = mask * foreground + (1 - mask) * background
image = x_gen[0]
pil_image = transforms.ToPILImage()((image + 1) * 0.5)
transformed_image = clf_transforms(pil_image)
foreground = foreground[0]
foreground = transforms.ToPILImage()((foreground + 1) * 0.5)
foreground = clf_transforms(foreground)
mask = mask[0]
background = background[0]
background = transforms.ToPILImage()((background + 1) * 0.5)
background = clf_transforms(background)
result = {
"x_gen": transformed_image,
"mask": mask,
"premask": premask,
"fg": foreground,
"bg": background,
}
return result
def display_generated_sample(result, save=False, save_name="sample"):
fig, axes = plt.subplots(1, 4, figsize=(11, 3), constrained_layout=True)
keys_to_show = ["mask", "fg", "bg", "x_gen"]
titles = ["Shape", "Texture", "Background", "Counterfactual"]
show_single_image(result["mask"], normalized=False, ax=axes[0], title=titles[0], show=False, cmap="gray")
show_single_image(result["fg"], normalized=True, ax=axes[1], title=titles[1], show=False)
show_single_image(result["bg"], normalized=True, ax=axes[2], title=titles[2], show=False)
show_single_image(result["x_gen"], normalized=True, ax=axes[3], title=titles[3], show=False)
if save:
plt.savefig(join(REPO_PATH, f"experiments/results/plots/{save_name}.pdf"))
plt.show()
def convert_list_of_classes_into_indices(class_names):
mapping = dict(df[["class_name", "class_index"]].values)
return [mapping[c] for c in class_names]
result = generate([0, 1, 2])
display_generated_sample(result)
```
### Textures
**Failure modes of CGN: Insufficient disentanglement**
1. For small objects in uniform backgrounds (e.g. `kite in the sky`, `ski in snow`), the model struggles to disentangle texture and background.
2. Cases where the central object has a complex texture (e.g. `crossword puzzle`) also make it hard to disentangle texture and background.
3. As one would expect, complex scenes (e.g. `confectionery`) are not faithfully disentangled into the three aspects.
```
classes = ["hummingbird", "hummingbird", "hummingbird"]
classes = ["beagle", "beagle", "beagle"]
classes = ["kite", "kite", "kite"]
classes = ["hand-held computer" for _ in range(3)]
classes = ["ski" for _ in range(3)]
classes = ["shopping basket" for _ in range(3)]
classes = ["street sign" for _ in range(3)]
classes = ["crossword puzzle" for _ in range(3)]
classes = ["confectionery" for _ in range(3)]
ys = convert_list_of_classes_into_indices(classes)
display_generated_sample(generate(ys), save=True, save_name=f"cf_sample_{classes[0]}")
"hummingbird" in df.class_name.unique()
df.class_name.unique()
```
|
github_jupyter
|
%load_ext autoreload
%autoreload 2
import re
from os.path import join, basename, dirname
from glob import glob
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from natsort import natsorted
from tqdm import tqdm
import torch
from torchvision.io import read_image
from torchvision.utils import make_grid
from torchvision import transforms
import warnings
warnings.filterwarnings("ignore")
from experiment_utils import set_env, REPO_PATH, seed_everything
set_env()
from image_utils import denormalize, show_single_image, show_multiple_images
from cgn_framework.imagenet.dataloader import get_imagenet_dls
from cgn_framework.imagenet.models.classifier_ensemble import InvariantEnsemble
from cgn_framework.imagenet.models import CGN
from experiments.imagenet_utils import (
EnsembleGradCAM,
get_imagenet_mini_foldername_to_classname,
)
from experiments.gradio_demo import in_mini_folder_to_class
plt.rcParams.update({
"text.usetex": True,
"font.family": "serif",
"font.serif": ["Computer Modern Roman"],
})
seed_everything(0)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# load data
print("Loading data ...")
train_loader, val_loader, train_sampler = get_imagenet_dls(
"imagenet/data/in-mini", False, 64, 10,
)
ds_val = val_loader.dataset
ds_val = val_loader.dataset
df = pd.DataFrame(None, columns=["sample_index", "class_index", "class_folder", "class_name"])
df["sample_index"] = list(range(len(ds_val.labels)))
df["class_index"] = ds_val.labels.astype(int)
df["class_folder"] = df["class_index"].apply(lambda x: ds_val.classes[x])
df["class_name"] = df["class_folder"].replace(in_mini_folder_to_class)
# load CGN model
print("Loading CGN model ...")
cgn = CGN(batch_sz=1, pretrained=False)
weights = torch.load(join(REPO_PATH, 'cgn_framework/imagenet/weights/cgn.pth'), map_location='cpu')
cgn.load_state_dict(weights)
cgn.eval().to(device);
def generate(ys):
clf_transforms=ds_val.T_ims
with torch.no_grad():
x_gt, mask, premask, foreground, background, bg_mask = cgn(ys=ys)
x_gen = mask * foreground + (1 - mask) * background
image = x_gen[0]
pil_image = transforms.ToPILImage()((image + 1) * 0.5)
transformed_image = clf_transforms(pil_image)
foreground = foreground[0]
foreground = transforms.ToPILImage()((foreground + 1) * 0.5)
foreground = clf_transforms(foreground)
mask = mask[0]
background = background[0]
background = transforms.ToPILImage()((background + 1) * 0.5)
background = clf_transforms(background)
result = {
"x_gen": transformed_image,
"mask": mask,
"premask": premask,
"fg": foreground,
"bg": background,
}
return result
def display_generated_sample(result, save=False, save_name="sample"):
fig, axes = plt.subplots(1, 4, figsize=(11, 3), constrained_layout=True)
keys_to_show = ["mask", "fg", "bg", "x_gen"]
titles = ["Shape", "Texture", "Background", "Counterfactual"]
show_single_image(result["mask"], normalized=False, ax=axes[0], title=titles[0], show=False, cmap="gray")
show_single_image(result["fg"], normalized=True, ax=axes[1], title=titles[1], show=False)
show_single_image(result["bg"], normalized=True, ax=axes[2], title=titles[2], show=False)
show_single_image(result["x_gen"], normalized=True, ax=axes[3], title=titles[3], show=False)
if save:
plt.savefig(join(REPO_PATH, f"experiments/results/plots/{save_name}.pdf"))
plt.show()
def convert_list_of_classes_into_indices(class_names):
mapping = dict(df[["class_name", "class_index"]].values)
return [mapping[c] for c in class_names]
result = generate([0, 1, 2])
display_generated_sample(result)
classes = ["hummingbird", "hummingbird", "hummingbird"]
classes = ["beagle", "beagle", "beagle"]
classes = ["kite", "kite", "kite"]
classes = ["hand-held computer" for _ in range(3)]
classes = ["ski" for _ in range(3)]
classes = ["shopping basket" for _ in range(3)]
classes = ["street sign" for _ in range(3)]
classes = ["crossword puzzle" for _ in range(3)]
classes = ["confectionery" for _ in range(3)]
ys = convert_list_of_classes_into_indices(classes)
display_generated_sample(generate(ys), save=True, save_name=f"cf_sample_{classes[0]}")
"hummingbird" in df.class_name.unique()
df.class_name.unique()
| 0.625781 | 0.54468 |
<a href="https://colab.research.google.com/github/maxim371/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments/blob/master/Copy_of_LS_DS_141_Statistics_Probability_and_Inference.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<img align="left" src="https://lever-client-logos.s3.amazonaws.com/864372b1-534c-480e-acd5-9711f850815c-1524247202159.png" width=200>
<br></br>
<br></br>
## *Data Science Unit 1 Sprint 3 Lesson 1*
# Statistics, Probability and Inference
Ever thought about how long it takes to make a pancake? Have you ever compared the cooking time of a pancake on each eye of your stove? Is the cooking time different between the different eyes? Now, we can run an experiment and collect a sample of 1,000 pancakes on one eye and another 800 pancakes on the other eye. Assume we used the same pan, batter, and technique on both eyes. Our average cooking times were 180 (5 std) and 178.5 (4.25 std) seconds respectively. Now, we can tell those numbers are not identical, but how confident are we that those numbers are practically the same? How do we know the slight difference isn't caused by some external randomness?
Yes, today's lesson will help you figure out how long to cook your pancakes (*theoretically*). Experimentation is up to you; otherwise, you have to accept my data as true. How are we going to accomplish this? With probability, statistics, inference and maple syrup (optional).
<img src="https://images.unsplash.com/photo-1541288097308-7b8e3f58c4c6?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=3300&q=80" width=400>
## Learning Objectives
* [Part 1](#p1): Normal Distribution Revisted
* [Part 2](#p2): Student's T Test
* [Part 3](#p3): Hypothesis Test & Doing it Live
## Normal Distribution Revisited
What is the Normal distribution: A probability distribution of a continuous, real-valued random variable. The Normal distribution's properties make it useful for the *Central Limit Theorem*, because if we assume a variable follows the normal distribution, we can make certain conclusions based on probabilities.
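A quick illustration of the Central Limit Theorem in action (a minimal sketch added here, not part of the original lesson): even when the underlying data are clearly non-normal, the *means* of repeated samples are approximately normally distributed.
```
import numpy as np
import seaborn as sns

# Draw 1,000 samples of size 50 from a skewed (exponential) distribution
# and keep the mean of each sample.
sample_means = [np.random.exponential(scale=2.0, size=50).mean() for _ in range(1000)]

# Individual draws are skewed, but the sample means look approximately normal.
sns.distplot(sample_means)
```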
```
import numpy as np
mu = 180 # mean
sigma = 5 # standard deviation
sample = np.random.normal(mu, sigma, 1000)
np.mean(sample)
# Verify the mean of our sample
abs(mu - np.mean(sample)) < 1
# Verify the variance of our sample
abs(sigma - np.std(sample, ddof=1)) < 0.5
import seaborn as sns
from matplotlib import style
style.use('fivethirtyeight')
ax = sns.distplot(sample, color='r')
ax.axvline(np.percentile(sample,97.5),0)
ax.axvline(np.percentile(sample,2.5),0)
np.percentile(sample, 97.5)
```
## Student's T Test
>Assuming data come from a Normal distribution, the t test provides a way to test whether the sample mean (that is the mean calculated from the data) is a good estimate of the population mean.
The derivation of the t-distribution was first published in 1908 by William Gosset while working for the Guinness Brewery in Dublin. Due to proprietary issues, he had to publish under a pseudonym, and so he used the name Student.
The t-distribution is essentially a distribution of means of normally distributed data. When we use a t-statistic, we are checking that a mean falls within a certain $\alpha$ probability of the mean of means.
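As a small worked example (added here for illustration; the numbers are made up in the spirit of the pancake experiment), a one-sample t-test checks whether a sample mean is consistent with a hypothesized population mean:
```
import numpy as np
from scipy.stats import ttest_1samp

np.random.seed(42)
cook_times = np.random.normal(180, 5, 30)  # 30 pancakes from one burner

# H0: the true mean cooking time is 178 seconds.
t_manual = (cook_times.mean() - 178) / (cook_times.std(ddof=1) / np.sqrt(len(cook_times)))
t_stat, p_value = ttest_1samp(cook_times, 178)

print(t_manual, t_stat, p_value)  # the hand-computed t-statistic matches scipy's
```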
```
t_df10 = np.random.standard_t(df=10, size=10)
t_df100 = np.random.standard_t(df=100, size=100)
t_df1000 = np.random.standard_t(df=1000, size=1000)
sns.kdeplot(t_df10, color='r');
sns.kdeplot(t_df100, color='y');
sns.kdeplot(t_df1000, color='b');
i = 10
for sample in [t_df10, t_df100, t_df1000]:
print(f"t - distribution with {i} degrees of freedom")
print("---" * 10)
print(f"Mean: {sample.mean()}")
print(f"Standard Deviation: {sample.std()}")
print(f"Variance: {sample.var()}")
i = i*10
```
Why is it different from the normal distribution? To better reflect the behavior of small samples and situations where the population standard deviation is unknown. In other words, the normal distribution is still the nice pure ideal (thanks to the central limit theorem), but the t-distribution is much more useful in many real-world situations.
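One way to see this (a short added comparison): for small degrees of freedom the t-distribution has fatter tails, so its critical values are larger than the normal ones, and they converge as the sample size grows.
```
from scipy.stats import t, norm

# 97.5th percentile (two-sided 95% critical value) for increasing degrees of freedom.
for df in [5, 10, 30, 1000]:
    print(df, t.ppf(0.975, df))

print('normal', norm.ppf(0.975))  # t approaches the normal value as df grows
```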
```
import pandas as pd
# Missing LAR (no team roster page on NFL.com)
teams = ['ARI','ATL','BAL','BUF','CAR','CHI','CIN','CLE','DAL','DEN','DET','GB','HOU',
'IND','JAX','KC','LAC','MIA','MIN','NE','NO','NYG','NYJ','OAK','PHI',
'PIT','SEA','SF','TB','TEN','WAS']
df_list = []
for team in teams:
df = pd.read_html(f'http://www.nfl.com/teams/roster?team={team}')[1]
df['Team'] = team
df.columns = ['No','Name','Pos','Status','Height','Weight','Birthdate','Exp','College','Team']
df_list.append(df)
final_df = pd.concat(df_list, ignore_index=True)
print(final_df.shape)
final_df.head()
```
## Live Lecture - let's perform and interpret a t-test
We'll generate our own data, so we can know and alter the "ground truth" that the t-test should find. We will learn about p-values and how to interpret "statistical significance" based on the output of a hypothesis test. We will also dig a bit deeper into how the test statistic is calculated based on the standard error, and visually what it looks like to have 1 or 2 "tailed" t-tests.
```
from scipy.stats import ttest_ind, ttest_ind_from_stats, ttest_rel
burnerA = np.random.normal(180, 5, 1000)
burnerB = np.random.normal(178.5, 4.25, 800)
burnerA[:10]
burnerB[:10]
for sample in [burnerA, burnerB]:
print(f'Mean: {sample.mean()}')
print(f'StDev: {sample.std()}')
print('------')
tstat, pvalue = ttest_ind(burnerA, burnerB)
print(tstat)
print(pvalue)
sns.distplot(burnerA, color='r')
sns.distplot(burnerB, color='b')
from sklearn.datasets import load_wine
X = load_wine()
X
import pandas as pd
wine = pd.DataFrame(X['data'], columns = X['feature_names'])
wine['origin'] = X['target']
print(wine.shape)
wine.head()
wine.origin.value_counts()
sns.distplot(wine[wine['origin'] == 0]['ash'], color = 'b')
sns.distplot(wine[wine['origin'] == 2]['ash'], color = 'r')
tstat, pvalue = ttest_ind(wine[wine['origin'] == 0]['ash'], wine[wine['origin'] == 2]['ash'])
print(tstat)
print(pvalue)
import matplotlib.pyplot as plt
for feat in wine.columns:
# Split groups
group1 = wine[wine['origin'] == 1][feat]
group2 = wine[wine['origin'] == 2][feat]
# Plot distribution
sns.distplot(group1, color = 'b')
sns.distplot(group2, color = 'r')
# Run t-test
_, pvalue = ttest_ind(group1, group2)
# Plot
plt.title(f'Feature: {feat}, P-value: {pvalue:.5f}')
plt.figure()
```
# Resources
- https://homepage.divms.uiowa.edu/~mbognar/applets/t.html
- https://rpsychologist.com/d3/tdist/
- https://gallery.shinyapps.io/tdist/
- https://en.wikipedia.org/wiki/Standard_deviation#Sample_standard_deviation_of_metabolic_rate_of_northern_fulmars
- https://www.khanacademy.org/math/ap-statistics/two-sample-inference/two-sample-t-test-means/v/two-sample-t-test-for-difference-of-means
|
github_jupyter
|
import numpy as np
mu = 180 # mean
sigma = 5 # standard deviation
sample = np.random.normal(mu, sigma, 1000)
np.mean(sample)
# Verify the mean of our sample
abs(mu - np.mean(sample)) < 1
# Verify the variance of our sample
abs(sigma - np.std(sample, ddof=1)) < 0.5
import seaborn as sns
from matplotlib import style
style.use('fivethirtyeight')
ax = sns.distplot(sample, color='r')
ax.axvline(np.percentile(sample,97.5),0)
ax.axvline(np.percentile(sample,2.5),0)
np.percentile(sample, 97.5)
t_df10 = np.random.standard_t(df=10, size=10)
t_df100 = np.random.standard_t(df=100, size=100)
t_df1000 = np.random.standard_t(df=1000, size=1000)
sns.kdeplot(t_df10, color='r');
sns.kdeplot(t_df100, color='y');
sns.kdeplot(t_df1000, color='b');
i = 10
for sample in [t_df10, t_df100, t_df1000]:
print(f"t - distribution with {i} degrees of freedom")
print("---" * 10)
print(f"Mean: {sample.mean()}")
print(f"Standard Deviation: {sample.std()}")
print(f"Variance: {sample.var()}")
i = i*10
import pandas as pd
# Missing LAR (no team roster page on NFL.com)
teams = ['ARI','ATL','BAL','BUF','CAR','CHI','CIN','CLE','DAL','DEN','DET','GB','HOU',
'IND','JAX','KC','LAC','MIA','MIN','NE','NO','NYG','NYJ','OAK','PHI',
'PIT','SEA','SF','TB','TEN','WAS']
df_list = []
for team in teams:
df = pd.read_html(f'http://www.nfl.com/teams/roster?team={team}')[1]
df['Team'] = team
df.columns = ['No','Name','Pos','Status','Height','Weight','Birthdate','Exp','College','Team']
df_list.append(df)
final_df = pd.concat(df_list, ignore_index=True)
print(final_df.shape)
final_df.head()
from scipy.stats import ttest_ind, ttest_ind_from_stats, ttest_rel
burnerA = np.random.normal(180, 5, 1000)
burnerB = np.random.normal(178.5, 4.25, 800)
burnerA[:10]
burnerB[:10]
for sample in [burnerA, burnerB]:
print(f'Mean: {sample.mean()}')
print(f'StDev: {sample.std()}')
print('------')
tstat, pvalue = ttest_ind(burnerA, burnerB)
print(tstat)
print(pvalue)
sns.distplot(burnerA, color='r')
sns.distplot(burnerB, color='b')
from sklearn.datasets import load_wine
X = load_wine()
X
import pandas as pd
wine = pd.DataFrame(X['data'], columns = X['feature_names'])
wine['origin'] = X['target']
print(wine.shape)
wine.head()
wine.origin.value_counts()
sns.distplot(wine[wine['origin'] == 0]['ash'], color = 'b')
sns.distplot(wine[wine['origin'] == 2]['ash'], color = 'r')
tstat, pvalue = ttest_ind(wine[wine['origin'] == 0]['ash'], wine[wine['origin'] == 2]['ash'])
print(tstat)
print(pvalue)
import matplotlib.pyplot as plt
for feat in wine.columns:
# Split groups
group1 = wine[wine['origin'] == 1][feat]
group2 = wine[wine['origin'] == 2][feat]
# Plot distribution
sns.distplot(group1, color = 'b')
sns.distplot(group2, color = 'r')
# Run t-test
_, pvalue = ttest_ind(group1, group2)
# Plot
plt.title(f'Feature: {feat}, P-value: {pvalue:.5f}')
plt.figure()
| 0.6508 | 0.986494 |
```
from pyspark.sql import SparkSession
spark = SparkSession.builder \
.master("local") \
.appName("Neural Network Model") \
.config("spark.executor.memory", "6gb") \
.getOrCreate()
sc = spark.sparkContext
df = spark.createDataFrame([('Male', 67, 150), # insert column values
('Female', 65, 135),
('Female', 68, 130),
('Male', 70, 160),
('Female', 70, 130),
('Male', 69, 174),
('Female', 65, 126),
('Male', 74, 188),
('Female', 60, 110),
('Female', 63, 125),
('Male', 70, 173),
('Male', 70, 145),
('Male', 68, 175),
('Female', 65, 123),
('Male', 71, 145),
('Male', 74, 160),
('Female', 64, 135),
('Male', 71, 175),
('Male', 67, 145),
('Female', 67, 130),
('Male', 70, 162),
('Female', 64, 107),
('Male', 70, 175),
('Female', 64, 130),
('Male', 66, 163),
('Female', 63, 137),
('Male', 65, 165),
('Female', 65, 130),
('Female', 64, 109)],
['gender', 'height','weight']) # insert header values
df.show(5)
from pyspark.sql import functions
df = df.withColumn('gender',functions.when(df['gender']=='Female',0).otherwise(1))
df = df.select('height', 'weight', 'gender')
df.show()
import numpy as np
df.select("height", "weight", "gender").collect()
data_array = np.array(df.select("height", "weight", "gender").collect())
data_array #view the array
data_array.shape
data_array[0]
data_array[28]
print(data_array.max(axis=0))
print(data_array.min(axis=0))
import matplotlib.pyplot as plt
%matplotlib inline
min_x = data_array.min(axis=0)[0]-10
max_x = data_array.max(axis=0)[0]+10
min_y = data_array.min(axis=0)[1]-10
max_y = data_array.max(axis=0)[1]+10
print(min_x, max_x, min_y, max_y)
# formatting the plot grid, scales, and figure size
plt.figure(figsize=(9, 4), dpi= 75)
plt.axis([min_x,max_x,min_y,max_y])
plt.grid()
for i in range(len(data_array)):
value = data_array[i]
# assign labels values to specific matrix elements
gender = value[2]
height = value[0]
weight = value[1]
# filter data points by gender
a = plt.scatter(height[gender==0],weight[gender==0], marker = 'x', c= 'b', label = 'Female')
b = plt.scatter(height[gender==1],weight[gender==1], marker = 'o', c= 'b', label = 'Male')
# plot values, title, legend, x and y axis
plt.title('Weight vs Height by Gender')
plt.xlabel('Height (in)')
plt.ylabel('Weight (lbs)')
plt.legend(handles=[a,b])
np.random.seed(12345)
w1 = np.random.randn()
w2 = np.random.randn()
b= np.random.randn()
print(w1, w2, b)
X = data_array[:,:2]
y = data_array[:,2]
print(X,y)
x_mean = X.mean(axis=0)
x_std = X.std(axis=0)
print(x_mean, x_std)
def normalize(X):
x_mean = X.mean(axis=0)
x_std = X.std(axis=0)
X = (X - X.mean(axis=0))/X.std(axis=0)
return X
X = normalize(X)
print(X)
print('standard deviation')
print(round(X[:,0].std(axis=0),0))
print('mean')
print(round(X[:,0].mean(axis=0),0))
data_array = np.column_stack((X[:,0], X[:,1],y))
print(data_array)
# formatting the plot grid, scales, and figure size
plt.figure(figsize=(9, 4), dpi= 75)
# plt.axis([min_x,max_x,min_y,max_y])
plt.grid()
for i in range(len(data_array)):
value_n = data_array[i]
# assign labels values to specific matrix elements
gender_n = value_n[2]
height_n = value_n[0]
weight_n = value_n[1]
an = plt.scatter(height_n[gender_n==0.0],weight_n[gender_n==0.0], marker = 'x', c= 'b', label = 'Female')
bn = plt.scatter(height_n[gender_n==1.0],weight_n[gender_n==1.0], marker = 'o', c= 'b', label = 'Male')
# plot values, title, legend, x and y axis
plt.title('Weight vs Height by Gender (normalized)')
plt.xlabel('Height (in)')
plt.ylabel('Weight (lbs)')
plt.legend(handles=[an,bn])
def sigmoid(input):
return 1/(1+np.exp(-input))
X = np.arange(-10,10,1)
Y = sigmoid(X)
plt.figure(figsize=(6, 4), dpi= 75)
plt.axis([-10,10,-0.25,1.2])
plt.grid()
plt.plot(X,Y)
plt.title('Sigmoid Function')
plt.show()
def sigmoid_derivative(x):
return sigmoid(x) * (1-sigmoid(x))
plt.figure(figsize=(6, 4), dpi= 75)
plt.axis([-10,10,-0.25,1.2])
plt.grid()
X = np.arange(-10,10,1)
Y = sigmoid(X)
Y_Prime = sigmoid_derivative(X)
plt.plot(X, Y, label="Sigmoid",c='b')
plt.plot(X, Y_Prime, marker=".", label="Sigmoid Derivative", c='b')
plt.title('Sigmoid vs Sigmoid Derivative')
plt.xlabel('X')
plt.ylabel('Y')
plt.legend()
plt.show()
data_array.shape
for i in range(100):
random_index = np.random.randint(len(data_array))
point = data_array[random_index]
print(i, point)
learning_rate = 0.1
all_costs = []
for i in range(100000):
# set the random data points that will be used to calculate the summation
random_number = np.random.randint(len(data_array))
random_person = data_array[random_number]
# the height and weight from the random individual are selected
height = random_person[0]
weight = random_person[1]
z = w1*height+w2*weight+b
predictedGender = sigmoid(z)
actualGender = random_person[2]
cost = (predictedGender-actualGender)**2
# the cost value is appended to the list
all_costs.append(cost)
# partial derivatives of the cost function and summation are calculated
dcost_predictedGender = 2 * (predictedGender-actualGender)
dpredictedGenger_dz = sigmoid_derivative(z)
dz_dw1 = height
dz_dw2 = weight
dz_db = 1
dcost_dw1 = dcost_predictedGender * dpredictedGenger_dz * dz_dw1
dcost_dw2 = dcost_predictedGender * dpredictedGenger_dz * dz_dw2
dcost_db = dcost_predictedGender * dpredictedGenger_dz * dz_db
# gradient descent calculation
w1 = w1 - learning_rate * dcost_dw1
w2 = w2 - learning_rate * dcost_dw2
b = b - learning_rate * dcost_db
plt.plot(all_costs)
plt.title('Cost Value over 100,000 iterations')
plt.xlabel('Iteration')
plt.ylabel('Cost Value')
plt.show()
print('The final values of w1, w2, and b')
print('---------------------------------')
print('w1 = {}'.format(w1))
print('w2 = {}'.format(w2))
print('b = {}'.format(b))
for i in range(len(data_array)):
random_individual = data_array[i]
height = random_individual[0]
weight = random_individual[1]
z = height*w1 + weight*w2 + b
predictedGender=sigmoid(z)
print("Individual #{} actual score: {} predicted score: {}"
.format(i+1,random_individual[2],predictedGender))
def input_normalize(height, weight):
inputHeight = (height - x_mean[0])/x_std[0]
inputWeight = (weight - x_mean[1])/x_std[1]
return inputHeight, inputWeight
score = input_normalize(70, 180)
def predict_gender(raw_score):
gender_summation = raw_score[0]*w1 + raw_score[1]*w2 + b
gender_score = sigmoid(gender_summation)
if gender_score <= 0.5:
gender = 'Female'
else:
gender = 'Male'
return gender, gender_score
predict_gender(score)
score = input_normalize(50,120)
predict_gender(score)
x_min = min(data_array[:,0])-0.1
x_max = max(data_array[:,0])+0.1
y_min = min(data_array[:,1])-0.1
y_max = max(data_array[:,1])+0.1
increment= 0.05
print(x_min, x_max, y_min, y_max)
x_data= np.arange(x_min, x_max, increment)
y_data= np.arange(y_min, y_max, increment)
xy_data = [[x_all, y_all] for x_all in x_data for y_all in y_data]
for i in range(len(xy_data)):
data = (xy_data[i])
height = data[0]
weight = data[1]
z_new = height*w1 + weight*w2 + b
predictedGender_new=sigmoid(z_new)
# print(height, weight, predictedGender_new)
ax = plt.scatter(height[predictedGender_new<=0.5],
weight[predictedGender_new<=0.5],
marker = 'o', c= 'r', label = 'Female')
bx = plt.scatter(height[predictedGender_new > 0.5],
weight[predictedGender_new>0.5],
marker = 'o', c= 'b', label = 'Male')
# plot values, title, legend, x and y axis
plt.title('Weight vs Height by Gender')
plt.xlabel('Height (in)')
plt.ylabel('Weight (lbs)')
plt.legend(handles=[ax,bx])
```
|
github_jupyter
|
from pyspark.sql import SparkSession
spark = SparkSession.builder \
.master("local") \
.appName("Neural Network Model") \
.config("spark.executor.memory", "6gb") \
.getOrCreate()
sc = spark.sparkContext
df = spark.createDataFrame([('Male', 67, 150), # insert column values
('Female', 65, 135),
('Female', 68, 130),
('Male', 70, 160),
('Female', 70, 130),
('Male', 69, 174),
('Female', 65, 126),
('Male', 74, 188),
('Female', 60, 110),
('Female', 63, 125),
('Male', 70, 173),
('Male', 70, 145),
('Male', 68, 175),
('Female', 65, 123),
('Male', 71, 145),
('Male', 74, 160),
('Female', 64, 135),
('Male', 71, 175),
('Male', 67, 145),
('Female', 67, 130),
('Male', 70, 162),
('Female', 64, 107),
('Male', 70, 175),
('Female', 64, 130),
('Male', 66, 163),
('Female', 63, 137),
('Male', 65, 165),
('Female', 65, 130),
('Female', 64, 109)],
['gender', 'height','weight']) # insert header values
df.show(5)
from pyspark.sql import functions
df = df.withColumn('gender',functions.when(df['gender']=='Female',0).otherwise(1))
df = df.select('height', 'weight', 'gender')
df.show()
import numpy as np
df.select("height", "weight", "gender").collect()
data_array = np.array(df.select("height", "weight", "gender").collect())
data_array #view the array
data_array.shape
data_array[0]
data_array[28]
print(data_array.max(axis=0))
print(data_array.min(axis=0))
import matplotlib.pyplot as plt
%matplotlib inline
min_x = data_array.min(axis=0)[0]-10
max_x = data_array.max(axis=0)[0]+10
min_y = data_array.min(axis=0)[1]-10
max_y = data_array.max(axis=0)[1]+10
print(min_x, max_x, min_y, max_y)
# formatting the plot grid, scales, and figure size
plt.figure(figsize=(9, 4), dpi= 75)
plt.axis([min_x,max_x,min_y,max_y])
plt.grid()
for i in range(len(data_array)):
value = data_array[i]
# assign labels values to specific matrix elements
gender = value[2]
height = value[0]
weight = value[1]
# filter data points by gender
a = plt.scatter(height[gender==0],weight[gender==0], marker = 'x', c= 'b', label = 'Female')
b = plt.scatter(height[gender==1],weight[gender==1], marker = 'o', c= 'b', label = 'Male')
# plot values, title, legend, x and y axis
plt.title('Weight vs Height by Gender')
plt.xlabel('Height (in)')
plt.ylabel('Weight (lbs)')
plt.legend(handles=[a,b])
np.random.seed(12345)
w1 = np.random.randn()
w2 = np.random.randn()
b= np.random.randn()
print(w1, w2, b)
X = data_array[:,:2]
y = data_array[:,2]
print(X,y)
x_mean = X.mean(axis=0)
x_std = X.std(axis=0)
print(x_mean, x_std)
def normalize(X):
x_mean = X.mean(axis=0)
x_std = X.std(axis=0)
X = (X - X.mean(axis=0))/X.std(axis=0)
return X
X = normalize(X)
print(X)
print('standard deviation')
print(round(X[:,0].std(axis=0),0))
print('mean')
print(round(X[:,0].mean(axis=0),0))
data_array = np.column_stack((X[:,0], X[:,1],y))
print(data_array)
# formatting the plot grid, scales, and figure size
plt.figure(figsize=(9, 4), dpi= 75)
# plt.axis([min_x,max_x,min_y,max_y])
plt.grid()
for i in range(len(data_array)):
value_n = data_array[i]
# assign labels values to specific matrix elements
gender_n = value_n[2]
height_n = value_n[0]
weight_n = value_n[1]
an = plt.scatter(height_n[gender_n==0.0],weight_n[gender_n==0.0], marker = 'x', c= 'b', label = 'Female')
bn = plt.scatter(height_n[gender_n==1.0],weight_n[gender_n==1.0], marker = 'o', c= 'b', label = 'Male')
# plot values, title, legend, x and y axis
plt.title('Weight vs Height by Gender (normalized)')
plt.xlabel('Height (in)')
plt.ylabel('Weight (lbs)')
plt.legend(handles=[an,bn])
def sigmoid(input):
return 1/(1+np.exp(-input))
X = np.arange(-10,10,1)
Y = sigmoid(X)
plt.figure(figsize=(6, 4), dpi= 75)
plt.axis([-10,10,-0.25,1.2])
plt.grid()
plt.plot(X,Y)
plt.title('Sigmoid Function')
plt.show()
def sigmoid_derivative(x):
return sigmoid(x) * (1-sigmoid(x))
plt.figure(figsize=(6, 4), dpi= 75)
plt.axis([-10,10,-0.25,1.2])
plt.grid()
X = np.arange(-10,10,1)
Y = sigmoid(X)
Y_Prime = sigmoid_derivative(X)
plt.plot(X, Y, label="Sigmoid",c='b')
plt.plot(X, Y_Prime, marker=".", label="Sigmoid Derivative", c='b')
plt.title('Sigmoid vs Sigmoid Derivative')
plt.xlabel('X')
plt.ylabel('Y')
plt.legend()
plt.show()
data_array.shape
for i in range(100):
random_index = np.random.randint(len(data_array))
point = data_array[random_index]
print(i, point)
learning_rate = 0.1
all_costs = []
for i in range(100000):
# set the random data points that will be used to calculate the summation
random_number = np.random.randint(len(data_array))
random_person = data_array[random_number]
# the height and weight from the random individual are selected
height = random_person[0]
weight = random_person[1]
z = w1*height+w2*weight+b
predictedGender = sigmoid(z)
actualGender = random_person[2]
cost = (predictedGender-actualGender)**2
# the cost value is appended to the list
all_costs.append(cost)
# partial derivatives of the cost function and summation are calculated
dcost_predictedGender = 2 * (predictedGender-actualGender)
dpredictedGenger_dz = sigmoid_derivative(z)
dz_dw1 = height
dz_dw2 = weight
dz_db = 1
dcost_dw1 = dcost_predictedGender * dpredictedGenger_dz * dz_dw1
dcost_dw2 = dcost_predictedGender * dpredictedGenger_dz * dz_dw2
dcost_db = dcost_predictedGender * dpredictedGenger_dz * dz_db
# gradient descent calculation
w1 = w1 - learning_rate * dcost_dw1
w2 = w2 - learning_rate * dcost_dw2
b = b - learning_rate * dcost_db
plt.plot(all_costs)
plt.title('Cost Value over 100,000 iterations')
plt.xlabel('Iteration')
plt.ylabel('Cost Value')
plt.show()
print('The final values of w1, w2, and b')
print('---------------------------------')
print('w1 = {}'.format(w1))
print('w2 = {}'.format(w2))
print('b = {}'.format(b))
for i in range(len(data_array)):
random_individual = data_array[i]
height = random_individual[0]
weight = random_individual[1]
z = height*w1 + weight*w2 + b
predictedGender=sigmoid(z)
print("Individual #{} actual score: {} predicted score: {}"
.format(i+1,random_individual[2],predictedGender))
def input_normalize(height, weight):
inputHeight = (height - x_mean[0])/x_std[0]
inputWeight = (weight - x_mean[1])/x_std[1]
return inputHeight, inputWeight
score = input_normalize(70, 180)
def predict_gender(raw_score):
gender_summation = raw_score[0]*w1 + raw_score[1]*w2 + b
gender_score = sigmoid(gender_summation)
if gender_score <= 0.5:
gender = 'Female'
else:
gender = 'Male'
return gender, gender_score
predict_gender(score)
score = input_normalize(50,120)
predict_gender(score)
x_min = min(data_array[:,0])-0.1
x_max = max(data_array[:,0])+0.1
y_min = min(data_array[:,1])-0.1
y_max = max(data_array[:,1])+0.1
increment= 0.05
print(x_min, x_max, y_min, y_max)
x_data= np.arange(x_min, x_max, increment)
y_data= np.arange(y_min, y_max, increment)
xy_data = [[x_all, y_all] for x_all in x_data for y_all in y_data]
for i in range(len(xy_data)):
data = (xy_data[i])
height = data[0]
weight = data[1]
z_new = height*w1 + weight*w2 + b
predictedGender_new=sigmoid(z_new)
# print(height, weight, predictedGender_new)
ax = plt.scatter(height[predictedGender_new<=0.5],
weight[predictedGender_new<=0.5],
marker = 'o', c= 'r', label = 'Female')
bx = plt.scatter(height[predictedGender_new > 0.5],
weight[predictedGender_new>0.5],
marker = 'o', c= 'b', label = 'Male')
# plot values, title, legend, x and y axis
plt.title('Weight vs Height by Gender')
plt.xlabel('Height (in)')
plt.ylabel('Weight (lbs)')
plt.legend(handles=[ax,bx])
| 0.597373 | 0.6694 |
# Section 1.3 $\quad$ Matrix Multiplication
## Definitions
- The **dot product**, or **inner product**, of the $n$-vectors in $\mathbb{R}^n$
$$
\mathbf{a} =
\left[
\begin{array}{c}
a_1 \\
a_2 \\
\vdots \\
a_n \\
\end{array}
\right] ~~~~~\text{and}~~~~~
\mathbf{b} =
\left[
\begin{array}{c}
b_1 \\
b_2 \\
\vdots \\
b_n \\
\end{array}
\right]
$$
is defined as <br /><br /><br /><br />
### Example 1
$$
\mathbf{a} =
\left[
\begin{array}{c}
-3 \\
2 \\
3 \\
\end{array}
\right],~~
\mathbf{b} =
\left[
\begin{array}{c}
4 \\
1 \\
2 \\
\end{array}
\right],~~~~
\mathbf{a}\cdot\mathbf{b} = \qquad\qquad\qquad\qquad\qquad
$$
```
from numpy import *
a = array([-3, 2, 3]);
b = array([4, 1, 2]);
dot(a, b)
```
## Matrix Multiplication
If $A = [a_{ij}]$ is an $\underline{\hspace{1in}}$ matrix and $B = [b_{ij}]$ is a $\underline{\hspace{1in}}$ matrix, then the product of $A$ and $B$ is the $\underline{\hspace{1in}}$ matrix $C = [c_{ij}]$, defined by
$$
c_{ij} = \hspace{4in}
$$
<br /><br /><br /><br />
**Remark:** The product of $A$ and $B$ is defined only when
### Example 2
Compute the product matrix $AB$, where
$$
A = \left[
\begin{array}{ccc}
1 & 2 & -1 \\
3 & 1 & 4 \\
\end{array}
\right]~~~~\text{and}~~~~
B = \left[
\begin{array}{cc}
-2 & 5 \\
4 & -3 \\
2 & 1
\end{array}
\right]
$$
```
from numpy import *
A = array([[1, 2, -1], [3, 1, 4]]);
B = array([[-2, 5], [4, -3], [2, 1]]);
dot(A, B)
```
**Question:** Let $A = [a_{ij}]$ be an $m\times p$ matrix, and $B = [b_{ij}]$ be a $p\times n$ matrix. Is the statement $AB = BA$ true?
$\bullet$<br /><br /><br /><br />$\bullet$<br /><br /><br /><br />$\bullet$<br /><br /><br /><br />
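As a quick numerical check (an added illustration reusing the matrices of Example 2): here both products exist, yet $AB$ and $BA$ do not even have the same size, so they cannot be equal.
```
from numpy import *
A = array([[1, 2, -1], [3, 1, 4]]);    # 2 x 3
B = array([[-2, 5], [4, -3], [2, 1]]); # 3 x 2
print(dot(A, B).shape)  # (2, 2)
print(dot(B, A).shape)  # (3, 3)
```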
## Matrix-Vector Product Written in Terms of Columns
Let $A = [a_{ij}]$ be an $m\times n$ matrix and $\mathbf{c}$ be an $n$-vector
$$
\mathbf{c} =
\left[
\begin{array}{c}
c_1 \\
c_2 \\
\vdots \\
c_n \\
\end{array}
\right]
$$
Then
$$
A\mathbf{c} =
\left[
\begin{array}{cccc}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn} \\
\end{array}
\right]
\left[
\begin{array}{c}
c_1 \\
c_2 \\
\vdots \\
c_n \\
\end{array}
\right]
$$
<br /><br /><br /><br />
$$
=\hspace{2.1in}
$$
<br /><br /><br /><br />
### Example 3
Let
$$
A = \left[
\begin{array}{ccc}
2 & -1 & -3 \\
4 & 2 & -2 \\
\end{array}
\right]~~~~\text{and}~~~~
\mathbf{c} = \left[
\begin{array}{c}
2 \\
-3 \\
4
\end{array}
\right].
$$
Then
$$
A\mathbf{c} = \hspace{4in}
$$
```
from numpy import *
A = array([[2, -1, -3], [4, 2, -2]]);
c = array([2, -3, 4])
dot(A, c)
```
## Linear Systems
Consider the linear system of $m$ equations in $n$ unknowns:
\begin{eqnarray}
% \nonumber to remove numbering (before each equation)
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &=& b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &=& b_2 \qquad \qquad (1)\\
\vdots\hspace{1in} && \vdots \nonumber \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &=& b_m
\end{eqnarray}
It can be written in matrix form $A\mathbf{x} = \mathbf{b}$, where <br /><br /><br /><br /><br /><br /><br />
The matrix $A$ is called $\underline{\hspace{1.5in}}$ of the linear system (1).
The matrix $[A\mid\mathbf{b}]$ is called the $\underline{\hspace{1.5in}}$ of the linear system (1) and has the form
<br /><br /><br /><br /><br /><br /><br />
If $b_1 = b_2 = \cdots = b_m = 0$ in (1), the linear system is called a $\underline{\hspace{2in}}$.
Note that the matrix-vector product $A\mathbf{x}$ can be expressed as
$$
A\mathbf{x} = x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + \cdots + x_n\mathbf{a}_n
$$
so that the linear system (1) can be written as <br /><br /><br /><br />
Therefore, $A\mathbf{x} = \mathbf{b}$ is **consistent** if and only if <br /><br /><br /><br />
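A small numerical sketch (added for illustration, reusing the matrix of Example 3) of $A\mathbf{x}$ written as a combination of the columns of $A$:
```
from numpy import *
A = array([[2, -1, -3], [4, 2, -2]]);
x = array([2, -3, 4])
# A x computed directly ...
print(dot(A, x))
# ... equals x_1 a_1 + x_2 a_2 + x_3 a_3, a combination of the columns of A.
print(x[0]*A[:,0] + x[1]*A[:,1] + x[2]*A[:,2])
```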
|
github_jupyter
|
from numpy import *
a = array([-3, 2, 3]);
b = array([4, 1, 2]);
dot(a, b)
from numpy import *
A = array([[1, 2, -1], [3, 1, 4]]);
B = array([[-2, 5], [4, -3], [2, 1]]);
dot(A, B)
from numpy import *
A = array([[2, -1, -3], [4, 2, -2]]);
c = array([2, -3, 4])
dot(A, c)
| 0.327238 | 0.98315 |
# Dependency Parsing
```
%load_ext autoreload
%autoreload 2
```
## Eisner's algorithm step-by-step
Inputs
* arcs scores $s_\theta(h,m)$, for $h\in \{0,...,N\}$ and $m\in \{1,...,N\}$, $h\neq m$
* sentence _She enjoys the Summer School._
Notice that
* the length of the sequence is $N = 5$
* the terminal symbols that comprise the sentence are $w_1=$ _She_, $w_2=$ _enjoys_, $w_3=$ _the_, $w_4=$ _Summer_, $w_5=$ _School_
* the root symbol $w_0=\ast$ is defined for convenience; the whole sentence can be thought as being _$\ast$ She enjoys the Summer School._
Variables to fill in:
* $\mathrm{incomplete}$, shape $(N+1)\times (N+1) \times 2$: incomplete span scores
* $\mathrm{complete}$, shape $(N+1)\times (N+1) \times 2$: complete span scores
### Initialization
Initialization corresponds to setting all 1-word 'spans' scores to zero. As we will see in the induction stage, these will be the initial building blocks for computing longer span scores.
The figure below illustrates all the initialized span scores.

### Induction
We now proceed to do some iterations on the Induction stage's double `for` loop:
#### Spans with $k=1$
$k=1$ corresponds to spans over pairs of words. The inner loop variable $s$ loops over the words, and determines the leftmost word in the span. The other variable $t:=s+k$ corresponds to the end of the span (the rightmost word).
#### Incomplete spans
Since $s\leq r<t$, and $t=s+k=s+1$, one concludes that $r=s$ for all spans with $k=1$. For that reason, the highest score corresponds to the only value of $r$:
$$
\mathrm{incomplete}[s,t,\leftarrow]\overset{(r=s)}{=}\mathrm{complete[s,s,\rightarrow]}+\mathrm{complete[t,t,\leftarrow]}+s_\theta(t,s)
$$

Notice the complete spans on the right hand side do not meet on top of a word. That is the reason why these spans are called _incomplete._
The incomplete spans that go right are computed in the exact same way, except the arc score we use is the one of the arc going _right:_ $s_\theta(s,t)$ instead of $s_\theta(t,s)$.
$$
\mathrm{incomplete}[s,t,\rightarrow]\overset{(r=s)}{=}\mathrm{complete[s,s,\rightarrow]}+\mathrm{complete[t,t,\leftarrow]}+s_\theta(s,t)
$$

#### Spans with length $k=2$
The next step is to compute scores for spans over three words. An immediate consequence is that now $r$ can take two different values: $s$ and $s+1$. Now there is an actual need to maximize the score over possible values of $r$.
#### Incomplete spans
The different values of $r$ correspond to using different sets of complete span scores.
$$
\mathrm{incomplete}[s,t,\leftarrow]=\underset{r}{\max}\left\{\begin{matrix}(r=s)& \mathrm{complete}[s,s,\rightarrow]+\mathrm{complete}[s+1,t,\leftarrow]+s_\theta(t,s)\\ (r=s+1)& \mathrm{complete}[s,s+1,\rightarrow]+\mathrm{complete}[t,t,\leftarrow]+s_\theta(t,s)\end{matrix}\right.
$$

The procedure to compute right-facing incomplete spans is similar to the one above. All that changes is the arc score that is used.
$$
\mathrm{incomplete}[s,t,\rightarrow]=\underset{r}{\max}\left\{\begin{matrix}(r=s)& \mathrm{complete}[s,s,\rightarrow]+\mathrm{complete}[s+1,t,\leftarrow]+s_\theta(s,t)\\ (r=s+1)& \mathrm{complete}[s,s+1,\rightarrow]+\mathrm{complete}[t,t,\leftarrow]+s_\theta(s,t)\end{matrix}\right.
$$

#### Complete spans
We now proceed to compute complete span scores. Once again, the incomplete span scores required for this step were conveniently computed before.
$$
\mathrm{complete}[s,t,\leftarrow]=\underset{r}{\max}\left\{\begin{matrix}(r=s)& \mathrm{complete}[s,s,\leftarrow]+\mathrm{incomplete}[s,t,\leftarrow]\\ (r=s+1)& \mathrm{complete}[s,s+1,\leftarrow]+\mathrm{incomplete}[s+1,t,\leftarrow]\end{matrix}\right.
$$

The last step in this demo is to compute right-facing complete span scores over three words:
$$
\mathrm{complete}[s,t,\rightarrow]=\underset{r}{\max}\left\{\begin{matrix}(r=s)& \mathrm{incomplete}[s,s+1,\rightarrow]+\mathrm{complete}[s+1,t,\rightarrow]\\ (r=s+1)& \mathrm{incomplete}[s,t,\rightarrow]+\mathrm{complete}[t,t,\rightarrow]\end{matrix}\right.
$$

These steps continue until a complete span of size $N+1$ is computed, which corresponds to spanning the whole sentence. After that, we backtrack the highest scores to build the parse tree.
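To make the recursions above concrete, here is a minimal NumPy sketch of the chart-filling (induction) loops. It is an added illustration, not the toolkit's `parse_proj` implementation: it only computes the best projective tree score and omits the backtrack pointers needed to recover the actual heads.
```python
import numpy as np

def eisner_chart_scores(scores):
    """Fill Eisner's incomplete/complete charts for arc scores scores[h, m].

    scores is an (N+1) x (N+1) matrix whose row 0 holds the arcs leaving
    the root. Returns the score of the best projective tree over the sentence.
    """
    N = scores.shape[0] - 1
    # Third index is the direction: 0 = left-facing, 1 = right-facing.
    incomplete = np.zeros((N + 1, N + 1, 2))
    complete = np.zeros((N + 1, N + 1, 2))  # initialization: 1-word spans score 0

    for k in range(1, N + 1):          # span length
        for s in range(N - k + 1):     # span start
            t = s + k                  # span end
            # Incomplete spans: two complete spans meeting between r and r+1, plus one arc.
            r = np.arange(s, t)
            both = complete[s, r, 1] + complete[r + 1, t, 0]
            incomplete[s, t, 0] = np.max(both + scores[t, s])   # arc t -> s
            incomplete[s, t, 1] = np.max(both + scores[s, t])   # arc s -> t
            # Complete spans: one complete span plus one incomplete span sharing word r.
            r_left = np.arange(s, t)
            complete[s, t, 0] = np.max(complete[s, r_left, 0] + incomplete[r_left, t, 0])
            r_right = np.arange(s + 1, t + 1)
            complete[s, t, 1] = np.max(incomplete[s, r_right, 1] + complete[r_right, t, 1])

    return complete[0, N, 1]
```
Storing the argmax of each maximization (the split point $r$) alongside these scores is exactly what `backtrack_eisner` consumes to recover the `heads` vector.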
# Implement Eisner's algorithm
Implement Eisner's algorithm for projective dependency parsing. The pseudo-code is shown as Algorithm 13. Implement this algorithm as the function:
```python
def parse_proj(self, scores):
```
in the file dependency_decoder.py. The input is a matrix of arc scores, whose dimension is (N + 1)-by-(N + 1), and whose (h, m) entry contains the score $s_\theta(h, m)$.
In particular, the first row contains the scores for the arcs that depart from the root, and the first column's values, along with the main diagonal, are to be ignored (since no arcs point to the root, and there are no self-pointing arcs). To make your job easier, we provide an implementation of the backtracking part:
```python
def backtrack_eisner(self, incomplete_backtrack, complete_backtrack, s, t, direction, complete, heads):
```
so you just need to build complete/incomplete spans and their backtrack pointers and then call
```python
heads = -np.ones(N+1, dtype=int)
self.backtrack_eisner(incomplete_backtrack, complete_backtrack, 0, N, 1, 1,heads)
return heads
```
to obtain the final parse.
To test the algorithm, retrain the parser on the English data (where the trees are actually all projective) by setting
the flag dp.projective to True:
```
dp = depp.DependencyParser()
dp.features.use_lexical = True
dp.features.use_distance = True
dp.features.use_contextual = True
dp.read_data("english")
dp.projective = True
dp.train_perceptron(10)
dp.test()
```
You should get the following results:
```
4.2.5
Number of sentences: 8044
Number of tokens: 80504
Number of words: 12202
Number of pos: 48
Number of features: 338014
Epoch 1
Training accuracy: 0.835637168541
Epoch 2
Training accuracy: 0.922426254687
Epoch 3
Training accuracy: 0.947621628947
Epoch 4
Training accuracy: 0.960326602521
Epoch 5
Training accuracy: 0.967689840538
Epoch 6
Training accuracy: 0.97263631025
Epoch 7
Training accuracy: 0.97619370285
Epoch 8
Training accuracy: 0.979209016579
Epoch 9
Training accuracy: 0.98127569228
Epoch 10
Training accuracy: 0.981320865519
Test accuracy (509 test instances): 0.886732599366
```
|
github_jupyter
|
%load_ext autoreload
%autoreload 2
def parse_proj(self, scores):
def backtrack_eisner(self, incomplete_backtrack, complete_backtrack, s, t, direction, complete, heads):
heads = -np.ones(N+1, dtype=int)
self.backtrack_eisner(incomplete_backtrack, complete_backtrack, 0, N, 1, 1,heads)
return heads
dp = depp.DependencyParser()
dp.features.use_lexical = True
dp.features.use_distance = True
dp.features.use_contextual = True
dp.read_data("english")
dp.projective = True
dp.train_perceptron(10)
dp.test()
4.2.5
Number of sentences: 8044
Number of tokens: 80504
Number of words: 12202
Number of pos: 48
Number of features: 338014
Epoch 1
Training accuracy: 0.835637168541
Epoch 2
Training accuracy: 0.922426254687
Epoch 3
Training accuracy: 0.947621628947
Epoch 4
Training accuracy: 0.960326602521
Epoch 5
Training accuracy: 0.967689840538
Epoch 6
Training accuracy: 0.97263631025
Epoch 7
Training accuracy: 0.97619370285
Epoch 8
Training accuracy: 0.979209016579
Epoch 9
Training accuracy: 0.98127569228
Epoch 10
Training accuracy: 0.981320865519
Test accuracy (509 test instances): 0.886732599366
| 0.566139 | 0.918444 |
# GenoSurf API and pyGMQL Example Use Case:
```
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
IN_COLAB
if IN_COLAB:
!pip install git+https://github.com/DEIB-GECO/PyGMQL.git
!pip install --force https://github.com/chengs/tqdm/archive/colab.zip
import datetime
datetime.datetime.now()
```
### Import necessary libraries
```
from IPython.display import clear_output
import requests
from ast import literal_eval
import pandas as pd
from functools import reduce
import os
from io import StringIO
import gmql as gl
from tqdm import tqdm_notebook as tqdm
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
clear_output()
# set JAVA_HOME
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
```
## Function to download data from GenoSurf
```
def get_genosurf_results(input_query, payload = {'agg': True}):
url = 'http://geco.deib.polimi.it/genosurf/api/query/table'
if type(input_query) == str:
input_query = literal_eval(input_query)
response = requests.post(url, json=input_query, params=payload)
response_json = response.json()
df = pd.DataFrame(response_json)
return df
```
### Functions to download data from the GMQL repository and convert it to a GMQL dataset
```
def download_region_file(url,sample_name, other_column_positions =[] ,other_columns = []): #TODO columns and their names
download_token = '?authToken=DOWNLOAD-TOKEN'
r = requests.get(url + download_token)
test_string_io = StringIO(r.text)
column_positions = [0,1,2,3] + other_column_positions
df2 = pd.read_csv(test_string_io, sep="\t",header=None,usecols=column_positions)
df2.columns = ['chr', 'start','stop','strand'] + other_columns
df2['sample_name'] = sample_name
return df2
def download_dataset_files(df_genosurf,other_column_positions = [], other_columns = []):
dfs = [] # can be done by append each iteration
    for index,row in tqdm(list(df_genosurf.iterrows()), desc='Downloading'):
dfs.append(download_region_file(row.local_url, row.item_source_id,other_column_positions, other_columns))
df = pd.concat(dfs)
return df
def create_dataset(df,meta_df=None, meta_columns=None):
df = df.copy()
if meta_df is not None and meta_columns is not None:
meta_df = meta_df.copy()
meta_df['sample_name'] = meta_df['item_source_id']
meta_df = meta_df.set_index('sample_name')
meta_df = meta_df[meta_columns]
meta_df = meta_df.applymap(lambda x: [x] if x else [])
return gl.from_pandas(df, meta_df ,sample_name='sample_name').to_GMQLDataset()
```
#### Myc dataset download
#### BRCA dataset download
#### GENE dataset download
#### Loading all DSs from local backup DSs
```
myc_df = pd.read_csv("backup/myc_df.csv")
myc_dataset_files = pd.read_csv("backup/myc_dataset_files.csv")
myc_dataset = create_dataset(myc_dataset_files)
myc_dataset.head()
brca_df = pd.read_csv("backup/brca_df.csv")
brca_dataset_files = pd.read_csv("backup/brca_dataset_files.csv")
brca_dataset = create_dataset(brca_dataset_files,brca_df,['item_source_id','age','gender','ethnicity','species'])
brca_dataset.head()
gene_df = pd.read_csv("backup/gene_df.csv")
gene_dataset_files = pd.read_csv("backup/gene_dataset_files.csv")
gene_dataset = create_dataset(gene_dataset_files)
gene_dataset.head()
myc = myc_dataset.cover(2, 'ANY')
```
```
protein_coding_gene = gene_dataset.select(region_predicate=gene_dataset.gene_biotype=="protein_coding")
prom = protein_coding_gene.reg_project(new_field_dict={'start': protein_coding_gene.start - 5000,
'stop': protein_coding_gene.start + 1000})
# prom = gene_dataset.reg_project(new_field_dict={'start': gene_dataset.start - 5000,
# 'stop': gene_dataset.start + 1000})
```
```
prom_myc = prom.join(myc,genometric_predicate=[gl.DL(0)],output="left")
```
```
prom_myc_brca_map = prom_myc.map(brca_dataset,refName='REF_NEW')
```
```
prom_myc_brca = prom_myc_brca_map.select(region_predicate=prom_myc_brca_map.count_REF_NEW_EXP > 0)
prom_myc_brca_m = prom_myc_brca.materialize()
print(len(set(prom_myc_brca_m.regs)), len(set(prom_myc_brca_m.regs.index)))
result_regions = prom_myc_brca_m.regs
result_meta = prom_myc_brca_m.meta
joined = result_regions.join(result_meta)
joined = joined.applymap(lambda x: ''.join(x) if type(x) == list else x )
joined.head()
def convert(s):
try:
r = float(s)
i = int(r)
if i == r:
return i
else:
return r
except ValueError:
return s
def age_group(x):
if x <= 55:
return '25 <= x <= 55'
elif x <= 65:
return '55 < x <= 65'
else:
return '65 < x <= 100'
joined = joined.applymap(convert)
joined
joined['age_number'] = (joined['EXP.age'] / 365)
joined['age'] = (joined['EXP.age'] / 365).map(age_group)
joined.head()
```
```
joined.groupby('age').nunique()['EXP.item_source_id']
pivoted = joined.pivot_table(index ='age',
columns ='REF.gene_symbol',
values ='count_REF_NEW_EXP',
fill_value=0,
dropna=True, )
# values =['count_REF_EXP','count_REF_NEW_EXP'],
pivoted.head()
pivoted_new = pivoted.copy()
columns1 = pivoted_new.columns[pivoted_new.sum()> 1]
pivoted_new = pivoted_new[columns1]
# row1 = pivoted_new.T.columns[pivoted_new.sum(axis=1)> 1]
# pivoted_new = pivoted_new.T[row1].T
pivoted_new
pivoted_new = pivoted_new.T
pivoted_new = pivoted_new[sorted(pivoted_new.columns)]
pivoted_new = pivoted_new.T
%matplotlib inline
# selected genes cluster
sns.clustermap(pivoted_new,row_cluster=False)
# sns.heatmap(pivoted_new)
# every gene heatmap
plt.figure(figsize=(10,8))
sns.heatmap(pivoted)
# every gene clustermap
sns.clustermap(pivoted,row_cluster=False)
# selected genes heatmap
plt.figure(figsize=(10,8))
sns.heatmap(pivoted_new)
```
|
github_jupyter
|
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
IN_COLAB
if IN_COLAB:
!pip install git+https://github.com/DEIB-GECO/PyGMQL.git
!pip install --force https://github.com/chengs/tqdm/archive/colab.zip
import datetime
datetime.datetime.now()
from IPython.display import clear_output
import requests
from ast import literal_eval
import pandas as pd
from functools import reduce
import os
from io import StringIO
import gmql as gl
from tqdm import tqdm_notebook as tqdm
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
clear_output()
# set JAVA_HOME
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
def get_genosurf_results(input_query, payload = {'agg': True}):
url = 'http://geco.deib.polimi.it/genosurf/api/query/table'
if type(input_query) == str:
input_query = literal_eval(input_query)
response = requests.post(url, json=input_query, params=payload)
response_json = response.json()
df = pd.DataFrame(response_json)
return df
def download_region_file(url,sample_name, other_column_positions =[] ,other_columns = []): #TODO columns and their names
download_token = '?authToken=DOWNLOAD-TOKEN'
r = requests.get(url + download_token)
test_string_io = StringIO(r.text)
column_positions = [0,1,2,3] + other_column_positions
df2 = pd.read_csv(test_string_io, sep="\t",header=None,usecols=column_positions)
df2.columns = ['chr', 'start','stop','strand'] + other_columns
df2['sample_name'] = sample_name
return df2
def download_dataset_files(df_genosurf,other_column_positions = [], other_columns = []):
dfs = [] # can be done by append each iteration
    for index,row in tqdm(list(df_genosurf.iterrows()), desc='Downloading'):
dfs.append(download_region_file(row.local_url, row.item_source_id,other_column_positions, other_columns))
df = pd.concat(dfs)
return df
def create_dataset(df,meta_df=None, meta_columns=None):
df = df.copy()
if meta_df is not None and meta_columns is not None:
meta_df = meta_df.copy()
meta_df['sample_name'] = meta_df['item_source_id']
meta_df = meta_df.set_index('sample_name')
meta_df = meta_df[meta_columns]
meta_df = meta_df.applymap(lambda x: [x] if x else [])
return gl.from_pandas(df, meta_df ,sample_name='sample_name').to_GMQLDataset()
myc_df = pd.read_csv("backup/myc_df.csv")
myc_dataset_files = pd.read_csv("backup/myc_dataset_files.csv")
myc_dataset = create_dataset(myc_dataset_files)
myc_dataset.head()
brca_df = pd.read_csv("backup/brca_df.csv")
brca_dataset_files = pd.read_csv("backup/brca_dataset_files.csv")
brca_dataset = create_dataset(brca_dataset_files,brca_df,['item_source_id','age','gender','ethnicity','species'])
brca_dataset.head()
gene_df = pd.read_csv("backup/gene_df.csv")
gene_dataset_files = pd.read_csv("backup/gene_dataset_files.csv")
gene_dataset = create_dataset(gene_dataset_files)
gene_dataset.head()
myc = myc_dataset.cover(2, 'ANY')
protein_coding_gene = gene_dataset.select(region_predicate=gene_dataset.gene_biotype=="protein_coding")
prom = protein_coding_gene.reg_project(new_field_dict={'start': protein_coding_gene.start - 5000,
'stop': protein_coding_gene.start + 1000})
# prom = gene_dataset.reg_project(new_field_dict={'start': gene_dataset.start - 5000,
# 'stop': gene_dataset.start + 1000})
prom_myc = prom.join(myc,genometric_predicate=[gl.DL(0)],output="left")
prom_myc_brca_map = prom_myc.map(brca_dataset,refName='REF_NEW')
prom_myc_brca = prom_myc_brca_map.select(region_predicate=prom_myc_brca_map.count_REF_NEW_EXP > 0)
prom_myc_brca_m = prom_myc_brca.materialize()
print(len(set(prom_myc_brca_m.regs)), len(set(prom_myc_brca_m.regs.index)))
result_regions = prom_myc_brca_m.regs
result_meta = prom_myc_brca_m.meta
joined = result_regions.join(result_meta)
joined = joined.applymap(lambda x: ''.join(x) if type(x) == list else x )
joined.head()
def convert(s):
try:
r = float(s)
i = int(r)
if i == r:
return i
else:
return r
except ValueError:
return s
def age_group(x):
if x <= 55:
return '25 <= x <= 55'
elif x <= 65:
return '55 < x <= 65'
else:
return '65 < x <= 100'
joined = joined.applymap(convert)
joined
joined['age_number'] = (joined['EXP.age'] / 365)
joined['age'] = (joined['EXP.age'] / 365).map(age_group)
joined.head()
joined.groupby('age').nunique()['EXP.item_source_id']
pivoted = joined.pivot_table(index ='age',
columns ='REF.gene_symbol',
values ='count_REF_NEW_EXP',
fill_value=0,
dropna=True, )
# values =['count_REF_EXP','count_REF_NEW_EXP'],
pivoted.head()
pivoted_new = pivoted.copy()
columns1 = pivoted_new.columns[pivoted_new.sum()> 1]
pivoted_new = pivoted_new[columns1]
# row1 = pivoted_new.T.columns[pivoted_new.sum(axis=1)> 1]
# pivoted_new = pivoted_new.T[row1].T
pivoted_new
pivoted_new = pivoted_new.T
pivoted_new = pivoted_new[sorted(pivoted_new.columns)]
pivoted_new = pivoted_new.T
%matplotlib inline
# selected genes cluster
sns.clustermap(pivoted_new,row_cluster=False)
# sns.heatmap(pivoted_new)
# every gene heatmap
plt.figure(figsize=(10,8))
sns.heatmap(pivoted)
# every gene clustermap
sns.clustermap(pivoted,row_cluster=False)
# selected genes heatmap
plt.figure(figsize=(10,8))
sns.heatmap(pivoted_new)
| 0.33764 | 0.531392 |
```
import os
import cv2
import time
import random
import math
import numpy as np
import torch
import torch.nn as nn
from torch.nn import functional as F
import torch.optim as optim
from torch import optim
from torch.optim.lr_scheduler import ReduceLROnPlateau
import torch.backends.cudnn as cudnn
from torch.utils.data import DataLoader, Dataset
from matplotlib import pyplot as plt
from albumentations import (Resize, RandomCrop,VerticalFlip, HorizontalFlip, Normalize, Compose, CLAHE, Rotate)
from albumentations.pytorch import ToTensor
from torch.autograd import Variable
from PIL import Image
import segmentation_models_pytorch as smp
import imageio
seed = 42
random.seed(seed)
torch.manual_seed(seed)
os.environ["CUDA_VISIBLE_DEVICES"] = '0'
print(torch.cuda.get_device_name(0))
def get_transforms(phase, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
list_transforms = []
if phase == "train":
list_transforms.extend(
[
HorizontalFlip(),
VerticalFlip(),
Rotate(),
] )
list_transforms.extend( [Resize(480, 480, interpolation=Image.BILINEAR),CLAHE(), Normalize(mean=mean, std=std, p=1), ToTensor(),] )
list_trfms = Compose(list_transforms)
return list_trfms
def readImg(im_fn):
im = cv2.imread(im_fn)
if im is None :
tmp = imageio.mimread(im_fn)
if tmp is not None:
im = np.array(tmp)
im = im.transpose(1,2,0)
else:
image = Image.open(im_fn)
im = np.asarray(image)
else:
im = cv2.cvtColor(np.asarray(im), cv2.COLOR_BGR2RGB)
return im
class RetinalDataset(Dataset):
def __init__(self, name, img_root, gt_root, phase):
super().__init__()
self.inputs = []
self.gts = []
self.transform = get_transforms(phase)
for root in img_root:
file_list = os.getcwd() + root
list_image = os.listdir(file_list)
list_image.sort()
for i, image_path in enumerate(list_image):
img = os.path.join(file_list,list_image[i])
self.inputs.append(img)
for root in gt_root:
file_list = os.getcwd() + root
list_image = os.listdir(file_list)
list_image.sort()
for i, image_path in enumerate(list_image):
img = os.path.join(file_list,list_image[i])
self.gts.append(img)
print('Load %s: %d samples for %s'%(name, len(self.inputs),phase))
def __len__(self):
return len(self.inputs)
def __getitem__(self, index):
image = readImg(self.inputs[index])
mask = readImg(self.gts[index])
if mask.shape[2] == 3:
mask = mask[:,:,0]
augmented = self.transform(image=image, mask=mask.squeeze())
return augmented["image"], augmented["mask"]
# DRIVE dataset
dr_train_loader = RetinalDataset('DRIVE',['\\data\\DRIVE\\train\\images'],
['\\data\\DRIVE\\train\\1st_manual'], 'train')
dr_test_loader = RetinalDataset('DRIVE',['\\data\\DRIVE\\test\\images'],
['\\data\\DRIVE\\test\\1st_manual'], 'test')
# STARE dataset
st_train_loader = RetinalDataset('STARE',['\\data\\STARE\\train\\image'],
['\\data\\STARE\\train\\labels-ah'], 'train')
st_test_loader = RetinalDataset('STARE',['\\data\\STARE\\test\\image'],
['\\data\\STARE\\test\\labels-ah'], 'test')
# CHASEDB1 dataset
st_train_loader = RetinalDataset('CHASEDB1',['\\data\\CHASEDB1\\train\\image'],
['\\data\\CHASEDB1\\train\\1st'], 'train')
st_test_loader = RetinalDataset('CHASEDB1',['\\data\\CHASEDB1\\test\\image'],
['\\data\\CHASEDB1\\test\\1st'], 'test')
# HRF dataset
hr_train_loader = RetinalDataset('HRF',['\\data\\HRF\\train\\images'],
['\\data\\HRF\\train\\manual1'], 'train')
hr_test_loader = RetinalDataset('HRF',['\\data\\HRF\\test\\images'],
['\\data\\HRF\\test\\manual1'], 'test')
# Mixed training set
all_train_loader = RetinalDataset('all',['\\data\\DRIVE\\train\\images','\\data\\STARE\\train\\image',
'\\data\\CHASEDB1\\train\\image','\\data\\HRF\\train\\images'],
['\\data\\DRIVE\\train\\1st_manual','\\data\\STARE\\train\\labels-ah',
'\\data\\CHASEDB1\\train\\1st','\\data\\HRF\\train\\manual1'],'train')
all_test_loader = RetinalDataset('all',['\\data\\DRIVE\\test\\images','\\data\\STARE\\test\\image',
'\\data\\CHASEDB1\\test\\image','\\data\\HRF\\test\\images'],
['\\data\\DRIVE\\test\\1st_manual','\\data\\STARE\\test\\labels-ah',
'\\data\\CHASEDB1\\test\\1st','\\data\\HRF\\test\\manual1'],'test')
batch_size = 8
epochs = 500
lr = 0.001
batch_iter = math.ceil(len(all_train_loader) / batch_size)
net = smp.Unet('resnet18', classes=1, activation=None, encoder_weights='imagenet')
net.cuda()
net_name = 'Unet-Resnet18'
loss_fuc = 'BCEL'
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(net.parameters(), lr=lr)
scheduler = ReduceLROnPlateau(optimizer, mode="min", patience=4, verbose=True)
dataset = "all"
trainloader = DataLoader(all_train_loader, batch_size=batch_size, shuffle=True, pin_memory=True)
testloader = DataLoader(all_test_loader, batch_size=1, shuffle=False, pin_memory=True)
result_path = 'results'
if not os.path.exists(result_path):
os.makedirs(result_path)
weights_path = "weights"
if not os.path.exists(weights_path):
os.makedirs(weights_path)
image_path = os.path.join(result_path,dataset)
if not os.path.exists(image_path):
os.makedirs(image_path)
f_loss = open(os.path.join(result_path, "log_%s_%s_%s.txt"%(dataset,loss_fuc,net_name)),'w')
f_loss.write('Dataset : %s\n'%dataset)
f_loss.write('Loss : %s\n'%loss_fuc)
f_loss.write('Net : %s\n'%net_name)
f_loss.write('Learning rate: %05f\n'%lr)
f_loss.write('batch-size: %s\n'%batch_size)
f_loss.close()
def train(e):
print('start train epoch: %d'%e)
net.train()
loss_plot = []
for i, (x,y) in enumerate(trainloader):
optimizer.zero_grad()
        x = x.cuda(non_blocking=True)  # `async` is a reserved word in Python 3.7+; PyTorch uses `non_blocking`
        y = y.cuda(non_blocking=True)
x = net(x)
loss = criterion(x.squeeze(), y.squeeze())
print('Epoch:%d Batch:%d/%d loss:%08f'%(e, i+1, batch_iter, loss.data))
loss_plot.append(loss.item())
optimizer.zero_grad()
loss.backward()
optimizer.step()
return loss_plot
def test():
net.eval()
acc = torch.tensor(0)
tpr = torch.tensor(0)
fpr = torch.tensor(0)
sn = torch.tensor(0)
sp = torch.tensor(0)
for i, (x,y) in enumerate(testloader):
optimizer.zero_grad()
        x = x.cuda(non_blocking=True)
        y = y.cuda(non_blocking=True)
x = net(x)
x = torch.sigmoid(x).squeeze()
y = y.squeeze().int().long()
x = torch.where(x > 0.5, torch.tensor(1).cuda(), torch.tensor(0).cuda())
temp = x + torch.tensor(2).cuda().long() * y
tp = torch.sum(torch.where(temp == 3, torch.tensor(1).cuda(),torch.tensor(0).cuda())).float()
fp = torch.sum(torch.where(temp == 1, torch.tensor(1).cuda(),torch.tensor(0).cuda())).float()
tn = torch.sum(torch.where(temp == 0, torch.tensor(1).cuda(),torch.tensor(0).cuda())).float()
fn = torch.sum(torch.where(temp == 2, torch.tensor(1).cuda(),torch.tensor(0).cuda())).float()
acc = acc + (tp + tn) / (tp + fp + tn + fn)
tpr = tpr + tp / (tp + fn)
fpr = fpr + fp / (tn + fp)
        sn = sn + tp / (tp + fn)  # sensitivity (true positive rate)
        sp = sp + tn / (tn + fp)  # specificity (true negative rate)
acc = (acc / len(testloader)).cpu().numpy()
tpr = (tpr / len(testloader)).cpu().numpy()
fpr = (fpr / len(testloader)).cpu().numpy()
sn = (sn / len(testloader)).cpu().numpy()
sp = (sp / len(testloader)).cpu().numpy()
print('ACC:',acc)
print('TPR:',tpr)
print('FPR:',fpr)
print('SN:',sn)
print('SP:',sp)
f_log = open(os.path.join(result_path, "log_%s_%s_%s.txt"%(dataset,loss_fuc,net_name)),'a')
f_log.write('Epoch:%d acc:%08f\n'%(e, acc))
f_log.write('Epoch:%d TPR:%08f\n'%(e, tpr))
f_log.write('Epoch:%d FPR:%08f\n'%(e, fpr))
f_log.write('Epoch:%d SN:%08f\n'%(e, sn))
f_log.write('Epoch:%d SP:%08f\n'%(e, sp))
f_log.close()
return acc
best_acc = 0
loss_plot = [0]
for e in range(1, epochs + 1):
loss_plot = loss_plot + train(e)
if e % 10 == 0:
acc = test()
if acc > best_acc:
if best_acc != 0:
os.remove(os.path.join(weights_path,
'net_%s_%s_%s_%f.pth'%(dataset,loss_fuc,net_name,best_acc)))
torch.save(net.state_dict(),os.path.join(weights_path,
'net_%s_%s_%s_%f.pth'%(dataset,loss_fuc,net_name,acc)))
best_acc = acc
plt.plot(loss_plot[1:])
def test_plot():
net.eval()
res = []
for i, (x,y) in enumerate(testloader):
optimizer.zero_grad()
        x = x.cuda(non_blocking=True)
        y = y.cuda(non_blocking=True)
x = net(x)
x = torch.sigmoid(x).squeeze()
y = y.squeeze().int().long().cpu().detach().numpy()
x = torch.where(x > 0.5, torch.tensor(1).cuda(), torch.tensor(0).cuda()).cpu().detach().numpy()
acc = np.sum(np.where(x == y,1,0)) / np.sum(np.where(x == x,1,0))
res.append(acc)
im = cv2.merge([x*255,y*255,y*255])
plt.imsave(os.path.join(image_path,(str(i)+'_'+'%4f'%acc+'.png')),im.astype('uint8'), format="png")
return res
resume = os.path.join(weights_path,
'net_%s_%s_%s_%f.pth'%(dataset,loss_fuc,net_name,best_acc))
pre_params = torch.load(resume)
net.load_state_dict(pre_params)
res = test_plot()
res = np.array(res)
print(np.mean(res[0:20])) # DRIVE
print(np.mean(res[20:30])) # STARE
print(np.mean(res[30:44])) # CHASEDB1
print(np.mean(res[44:68])) # HRF
```
## License
Copyright 2020 (c) Anna Olena Zhab'yak, Michele Maione. All rights reserved.
Licensed under the [MIT](LICENSE) License.
# Dataset
The dataset used in this regression has 10 features and 20639 observations. In more detail, the columns are:
1. longitude: A measure of how far west a house is
2. latitude: A measure of how far north a house is
3. housingMedianAge: Median age of a house within a block
4. totalRooms: Total number of rooms within a block
5. totalBedrooms: Total number of bedrooms within a block
6. population: Total number of people residing within a block
7. households: Total number of households, a group of people residing within a home unit, for a block
8. medianIncome: Median income for households within a block of houses (measured in tens of thousands of US Dollars)
9. medianHouseValue: Median house value for households within a block (measured in US Dollars)
10. oceanProximity: Location of the house w.r.t ocean/sea
The dataset is available from this website: https://www.dropbox.com/s/zxv6ujxl8kmijfb/cal-housing.csv?dl=0
---------------------
### Libraries
For this analysis, we will use a few libraries for managing and transforming the dataset, as well as to implement the Ridge and Lasso regression from scratch, the PCA, and the model selection and validation using sklearn. First, let's import the libraries!
```
import numpy
import pandas
import seaborn
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.preprocessing import LabelEncoder, MinMaxScaler, StandardScaler, RobustScaler, Normalizer
from sklearn.model_selection import learning_curve, validation_curve, cross_val_score, cross_validate, train_test_split, KFold, GridSearchCV
from sklearn import decomposition
from plotting import Plotting
from lasso import Lasso
from cholesky import Cholesky
from svd import SVD
from lsqr import LSQR
```
---------------------------------------
### Dataset overview
First of all, we open the dataset to check its structure, show the first rows and take a look at its descriptive statistics.
```
data_frame = pandas.read_csv(filepath_or_buffer='cal-housing.csv')
data_frame.info()
data_frame.describe()
data_frame.head()
```
### Scores - MSE and Rยฒ
The mean_squared_error function computes mean square error, a risk metric corresponding to the expected value of the squared (quadratic) error or loss.
The coefficient Rยฒ is defined as $ 1 - {u \over v} $, where u is the residual sum of squares $ \sum (y - y')^2 $ and v is the total sum of squares $ \sum (y - \bar{y})^2 $. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse).
For this experiment we will use the MSE.
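As a quick sanity check of these two scores on made-up numbers (unrelated to the housing data):
```
from sklearn.metrics import mean_squared_error, r2_score

y_true = [3.0, 2.5, 4.0, 5.0]
y_pred = [2.8, 2.9, 3.6, 5.2]

print(mean_squared_error(y_true, y_pred))   # 0.10 = mean of the squared residuals
print(r2_score(y_true, y_pred))             # about 0.89 = 1 - u/v as defined above
```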
```
#scoring = 'r2'
#scoring_neg = False
#scoring_label = 'Rยฒ'
#scoring_label_loss = 'Rยฒ'
scoring = 'neg_mean_squared_error'
scoring_neg = True
scoring_label = 'MSE'
scoring_label_loss = 'Squared loss'
```
---------------------------------
### Data pre-processing
Before performing the analysis we define the constants and the statistical elements of the analysis.
The target value is `median_house_value`, which is predicted based on the features of the different houses.
We also shuffle the data before splitting it, so that each split is representative of the whole dataset.
```
column_to_predict = 'median_house_value'
categories_columns = ['ocean_proximity']
numerics_columns = ["longitude", "latitude", "housing_median_age", "total_rooms", "total_bedrooms", "population", "households", "median_income"]
```
Here is the distribution of the values of the target variable `median_house_value`.
It looks roughly like a normal distribution, with a group of outliers at the highest house values.
```
plt.figure(figsize=(15, 7))
plt.title('Distribution of the values of "median_house_value"')
seaborn.distplot(data_frame['median_house_value'])
plt.grid()
plt.show()
```
### Missing values
The dataset contains missing values, which are imputed with the mean of the corresponding column. This step is necessary to avoid errors during execution.
```
for c in data_frame.columns:
if data_frame[c].hasnans:
m = data_frame[c].mean()
data_frame[c].fillna(value=m, inplace=True)
```
### Categorical variable
Categorical values cannot be used directly in the statistical analysis, so they must be transformed into numbers. We implemented two encoding options to compare how they would work, but in the end we apply the one-hot encoding of the `useColumnCat` branch: it creates a column for every level of the categorical variable and adds these dummy columns to the rest of the dataset.
```
useLabelEncoder = False
useColumnCat = True
if useLabelEncoder:
labelencoder = LabelEncoder()
for c in categories_columns:
c_name = c + '_cat'
data_frame[c_name] = labelencoder.fit_transform(data_frame[c])
numerics_columns.append(c_name)
data_frame.drop(columns=categories_columns, inplace=True)
elif useColumnCat:
    # generate one column for every level of a categorical column
columns_categories = pandas.DataFrame()
for c in categories_columns:
column = pandas.get_dummies(data=data_frame[c], prefix=c + '_')
columns_categories = pandas.concat((columns_categories, column), axis=1)
for col in columns_categories.columns:
numerics_columns.append(col)
    # drop the original categorical columns
data_frame.drop(columns=categories_columns, inplace=True)
    # append the one-hot columns to the dataframe
data_frame = pandas.concat([data_frame, columns_categories], axis=1)
else:
data_frame['ocean_proximity'].replace(['INLAND', '<1H OCEAN', 'NEAR OCEAN', 'ISLAND', 'NEAR BAY'], [1, 20, 100, 1500, 500], inplace=True)
numerics_columns.append('ocean_proximity')
columns_to_remove = []
columns_to_use = list(data_frame.columns)
for u in columns_to_remove:
columns_to_use.remove(u)
if numerics_columns.count(u) > 0:
numerics_columns.remove(u)
data_frame.drop(columns=columns_to_remove, inplace=True)
#data_frame = data_frame.sample(frac=1)
X = data_frame[numerics_columns]
y = data_frame[column_to_predict]
```
Our dataset now looks like this
```
X
```
and this is our target variable
```
y
column_to_predict_idx = data_frame.columns.get_loc(column_to_predict)
cols = list(range(0, data_frame.shape[1]))
cols.remove(column_to_predict_idx)
X = data_frame[numerics_columns]
y = data_frame[column_to_predict]
```
### Correlation of the dataset
We explore Pearson's correlation coefficient and build a symmetric correlation matrix between the features. This procedure is helpful to potentially reduce the dimension of the dataset.
Pearson's coefficient is defined as:
$$ r_{i,j} = \frac{\sum_{t=1}^m (x_{i,t}-\mu_i)(x_{j,t}-\mu_j)}{\sqrt{\sum_{t=1}^m (x_{i,t}-\mu_i)^2}\sqrt{\sum_{t=1}^m (x_{j,t}-\mu_j)^2}} $$
```
corr = data_frame.corr()
Plotting.heatMap(corr, 'Correlation matrix between features')
```
The coefficient lies between -1 and +1: when it is close to |1| there is a (positive or negative) correlation, while a coefficient close to 0 indicates no linear correlation. If some features are linearly correlated, some of them may be redundant, because we can explain one feature through the correlated one.
Further, we can apply PCA, since some features seem to be correlated.
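As a small sanity check of the formula on two made-up vectors (unrelated to the dataset), the hand-computed coefficient matches `numpy.corrcoef`:
```
import numpy

x = numpy.array([1.0, 2.0, 3.0, 4.0, 5.0])
z = numpy.array([2.0, 1.5, 3.5, 3.0, 5.0])

# Pearson's r computed directly from the formula above
r_manual = ((x - x.mean()) * (z - z.mean())).sum() / (
    numpy.sqrt(((x - x.mean()) ** 2).sum()) * numpy.sqrt(((z - z.mean()) ** 2).sum()))
print(r_manual, numpy.corrcoef(x, z)[0, 1])   # the two values coincide
```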
----------------
# Data analysis and modelling
Once we have cleaned all the data, we can proceed with the application of the Ridge and Lasso regressions and of the PCA.
## Ridge regression
This is a regression technique which allows us to deal with multicollinearity among the features and tries to reduce overfitting.
The mechanism consists in minimizing the residual sum of squares (RSS) plus a penalty term tuned by a parameter $\alpha$, so the objective function is:
$$ \lVert y - Xw \rVert ^2_2 + \alpha \lVert w \rVert^2_2 $$
The tuning parameter alpha controls the model complexity and a trade-off between variance and bias:
1. how well the model fits our data relates to the bias;
2. how well the model does on a completely new dataset relates to the variance.
We use the hyperparameter $ \alpha $ to add some bias and prevent the model from overfitting.
First of all, we split the dataset into a training set and a test set, 80% and 20% respectively.
```
x_train, x_test, y_train, y_test = train_test_split(X, y, train_size=0.8, shuffle=True, random_state=1986)
print('Train size:', x_train.shape[0])
print('Test size:', x_test.shape[0])
```
We have created three different implementations to solve the Ridge regression, but we proceed with the Cholesky decomposition, which gives the so-called closed-form solution.
```
#ridge = LSQR()
#ridge= SVD()
ridge = Cholesky()
```
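Since `Cholesky` is the project's own class imported from `cholesky`, its internals are not shown in this notebook. Purely as an illustrative sketch (not the code of that class), the closed-form ridge solution it relies on can be computed with a Cholesky factorization like this, ignoring the intercept term and using synthetic data:
```
import numpy

def ridge_cholesky_sketch(X, y, alpha):
    """Solve the ridge normal equations (X'X + alpha*I) w = X'y via a Cholesky factorization."""
    A = X.T @ X + alpha * numpy.eye(X.shape[1])   # symmetric positive definite for alpha > 0
    b = X.T @ y
    L = numpy.linalg.cholesky(A)                  # A = L L'
    z = numpy.linalg.solve(L, b)                  # forward solve  L z = b
    return numpy.linalg.solve(L.T, z)             # backward solve L' w = z

# Toy usage on synthetic data (not on the housing data)
rng = numpy.random.default_rng(0)
X_toy = rng.normal(size=(100, 5))
y_toy = X_toy @ numpy.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=100)
print(ridge_cholesky_sketch(X_toy, y_toy, alpha=0.1))
```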
The values for the hyperparameter $ \alpha $ are generated as follows, on a logarithmic scale.
```
alphas = numpy.sort(numpy.logspace(-5, -0.1, 20))
alphas
```
The hyperparameter is then validated with a validation curve over this range.
```
train_score, test_scores = validation_curve(ridge, x_train, y_train, param_name="alpha", param_range=alphas, scoring=scoring, n_jobs=-1)
Plotting.plotAreaMeanStd(
'Ridge - Validation curve',
alphas,
[train_score, test_scores],
scoring_neg,
['Training', 'Test'],
['r', 'g'],
'ษ',
scoring_label_loss)
best_ษ = alphas[numpy.argmax(test_scores.mean(axis=1))]
print('Best ษ:', best_ษ)
```
Moreover, we use a nested cross-validation on the best alpha to see how this value is tuned across the individual trials.
```
ridge.alphas = alphas
ridge.nestedCrossValidationKFold(x_train, y_train)
print('Best ษ for Nested CV:', ridge.best_alpha_NestedCV)
print('Best ษ for Non-Nested CV:', ridge.best_alpha_NonNestedCV)
Plotting.plotNestedCrossVal(
ridge.nested_cross_validation_trials,
ridge.nested_scores,
ridge.non_nested_scores,
ridge.score_difference)
```
### Learning curve
Here we train our algorithm using 5-fold cross-validation with the best hyperparameter estimated by the nested cross-validation.
```
min_ts = int(x_train.shape[0] * 0.01)
max_ts = int(x_train.shape[0] * 0.8)
step_ts = int(x_train.shape[0] * 0.1)
sizes = range(min_ts, max_ts, step_ts)
print(sizes)
train_size, train_score, test_scores = learning_curve(ridge, x_train, y_train, train_sizes=sizes, shuffle=True, random_state=1986, scoring=scoring, n_jobs=-1)
Plotting.plotAreaMeanStd(
'Ridge - Learning curve',
train_size,
[train_score, test_scores],
scoring_neg,
['Training', 'Test'],
['r', 'g'],
'Training size',
scoring_label_loss)
print(f'Cross-validated risk estimate: {-test_scores.mean()}')
ridge.fit(x_train, y_train)
y_predict_c = ridge.predict(x_test)
print('MSE', mean_squared_error(y_test, y_predict_c))
print('Rยฒ', r2_score(y_test, y_predict_c))
Plotting.regPlot('Ridge', y_predict_c, y_test)
Plotting.coeficientPlot('Ridge', x_test, ridge.coef_)
```
## Lasso regression
This technique is similar to the previous one, but the objective function to minimize becomes:
$$ \lVert y - Xw \rVert ^2_2 + \alpha \lVert w \rVert_1 $$
Lasso regression gives sparser solutions in general, since it shrinks some coefficients exactly to zero. Here is the hyperparameter selection and the learning curve using the best hyperparameter estimated with nested and non-nested cross-validation, with a comparison between the two.
We use an 80/20 split of the dataset as before.
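As with Ridge, the `Lasso` class we use is our own implementation imported from `lasso` and is not reproduced here. Only as an illustration of the underlying idea (and not of that class's actual code), a minimal coordinate-descent solver with soft-thresholding, for the objective $\frac{1}{2n}\lVert y - Xw \rVert_2^2 + \alpha \lVert w \rVert_1$ (the formula above up to the scaling of $\alpha$), could look like this:
```
import numpy

def soft_threshold(rho, lam):
    """Soft-thresholding operator: the proximal map of the L1 penalty."""
    return numpy.sign(rho) * max(abs(rho) - lam, 0.0)

def lasso_coordinate_descent(X, y, alpha, n_iters=100):
    """Minimize (1/(2n))*||y - Xw||^2 + alpha*||w||_1 by cyclic coordinate descent."""
    n_samples, n_features = X.shape
    w = numpy.zeros(n_features)
    for _ in range(n_iters):
        for j in range(n_features):
            # partial residual that excludes the contribution of feature j
            r_j = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r_j / n_samples
            z = X[:, j] @ X[:, j] / n_samples
            w[j] = soft_threshold(rho, alpha) / z
    return w
```
The `max(|rho| - alpha, 0)` step is what sets small coefficients exactly to zero, which is where the sparsity of Lasso solutions comes from.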
```
x_train, x_test, y_train, y_test = train_test_split(X, y, train_size=0.8, shuffle=True, random_state=1986)
print('Train size:', x_train.shape[0])
print('Test size:', x_test.shape[0])
lasso = Lasso()
alphas = numpy.sort(numpy.logspace(-5, -0.1, 20)) * 30
alphas
train_score, test_scores = validation_curve(lasso, x_train, y_train, param_name="alpha", param_range=alphas, scoring=scoring, n_jobs=-1)
Plotting.plotAreaMeanStd(
'Lasso - Validation curve',
alphas,
[train_score, test_scores],
scoring_neg,
['Training', 'Test'],
['r', 'g'],
'ษ',
scoring_label_loss)
best_ษ = alphas[numpy.argmax(test_scores.mean(axis=1))]
print('Best ษ:', best_ษ)
```
Use nested cross-validation and compare the results with the non-nested form.
```
lasso.alphas = alphas
lasso.nestedCrossValidationKFold(x_train, y_train)
print('Best ษ for Nested CV:', lasso.best_alpha_NestedCV)
print('Best ษ for Non-Nested CV:', lasso.best_alpha_NonNestedCV)
Plotting.plotNestedCrossVal(
lasso.nested_cross_validation_trials,
lasso.nested_scores,
lasso.non_nested_scores,
lasso.score_difference)
```
Now we train the algorithm and plot its learning curve.
```
min_ts = int(x_train.shape[0] * 0.01)
max_ts = int(x_train.shape[0] * 0.8)
step_ts = int(x_train.shape[0] * 0.1)
sizes = range(min_ts, max_ts, step_ts)
print(sizes)
train_size, train_score, test_scores = learning_curve(lasso, x_train, y_train, train_sizes=sizes, shuffle=True, random_state=1986, scoring=scoring, n_jobs=-1)
Plotting.plotAreaMeanStd(
'Lasso - Learning curve',
train_size,
[train_score, test_scores],
scoring_neg,
['Training', 'Test'],
['r', 'g'],
'Training size',
scoring_label_loss)
print(f'Cross-validated risk estimate: {-test_scores.mean()}')
```
This is a plot of the predictions under the Lasso regression, together with the magnitude of the coefficients.
```
lasso.fit(x_train, y_train)
y_predict_c = lasso.predict(x_test)
print('MSE', mean_squared_error(y_test, y_predict_c))
print('Rยฒ', r2_score(y_test, y_predict_c))
Plotting.regPlot('Lasso', y_predict_c, y_test)
Plotting.coeficientPlot('Lasso', x_test, lasso.coef_)
```
-----------------------------------------------------
## Principal component analysis
Principal component analysis transforms the dataset into a lower-dimensional matrix by finding the eigenvalues and eigenvectors of its covariance matrix. This helps to understand how the variables behave along the direction of largest variance in the data (the first principal component) and the direction of second largest variance (the second principal component).
First we rescale the data (here with a min-max scaler, so that every feature lies in the same range); then we will plot the Ridge regression learning curve again and compare its performance before and after the PCA transformation, to check whether it improves or not.
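Before running scikit-learn's implementation, here is a conceptual sketch of what PCA computes (illustrative only, on synthetic data; the notebook itself relies on `sklearn.decomposition.PCA`):
```
import numpy

def pca_sketch(X, n_components):
    """Toy PCA: eigendecomposition of the covariance matrix of the centred data."""
    Xc = X - X.mean(axis=0)                       # centre every feature
    cov = numpy.cov(Xc, rowvar=False)             # covariance matrix of the features
    eigvals, eigvecs = numpy.linalg.eigh(cov)     # symmetric matrix -> eigh
    order = numpy.argsort(eigvals)[::-1]          # sort by decreasing explained variance
    components = eigvecs[:, order[:n_components]]
    return Xc @ components, eigvals[order]

# Toy usage on synthetic data (not on the housing data)
rng = numpy.random.default_rng(0)
X_toy = rng.normal(size=(200, 5))
scores, variances = pca_sketch(X_toy, n_components=2)
print(scores.shape)   # (200, 2): the data projected on the first two principal components
print(variances)      # variance explained by each component, in decreasing order
```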
```
useMinMaxScaler = True
if useMinMaxScaler:
column_to_predict_idx = data_frame.columns.get_loc(column_to_predict)
cols = list(range(0, data_frame.shape[1]))
cols.remove(column_to_predict_idx)
scaler = MinMaxScaler()
scaler.fit(data_frame)
data_frame = scaler.transform(data_frame)
data_frame = pandas.DataFrame(data_frame, columns=columns_to_use)
X = data_frame[numerics_columns]
y = data_frame[column_to_predict]
column_to_predict_idx = data_frame.columns.get_loc(column_to_predict)
cols = list(range(0, data_frame.shape[1]))
cols.remove(column_to_predict_idx)
X = data_frame[numerics_columns]
y = data_frame[column_to_predict]
X
x_train, x_test, y_train, y_test = train_test_split(X, y, train_size=0.8, shuffle=True, random_state=1986)
print('Train size:', x_train.shape[0])
print('Test size:', x_test.shape[0])
learner = Cholesky(0.040787427589690214)
```
This is the learning curve of the Ridge regression without any decomposition. It is the same model we trained before with the best alpha, but now on the rescaled data, so that the squared loss is on the same scale as the curve obtained after the PCA decomposition later on.
```
train_size, train_score, test_scores = learning_curve(learner, x_train, y_train, train_sizes=sizes, cv=5, scoring=scoring, shuffle = True, random_state=1986, n_jobs=-1)
Plotting.plotAreaMeanStd(
'Ridge - Learning curve',
train_size,
[train_score, test_scores],
scoring_neg,
['Training', 'Test'],
['r', 'g'],
'Training size',
scoring_label_loss)
```
We then apply a 2-component PCA to the learned coefficients, to see how they behave in the space of the first two principal components.
```
coef_list = []
min_ts = int(X.shape[0] * 0.01)
max_ts = int(X.shape[0] * 0.8)
step_ts = int(X.shape[0] * 0.1)
sizes = range(min_ts, max_ts, step_ts)
for s in sizes:
x_train, x_test, y_train, y_test = train_test_split(X, y, train_size=s, random_state=1986)
learner.fit(x_train, y_train)
coef_list.append(learner.coef_)
coef_matrix = numpy.array(coef_list)
pca = decomposition.PCA(n_components=2)
pca.fit(coef_matrix)
coef_pca = pca.transform(coef_matrix)
fig, ax = plt.subplots()
plt.scatter(coef_pca[:,0], coef_pca[:,1])
```
Next we will see how much variance is explained by each component; in our case the data has 13 features.
```
pca = decomposition.PCA(n_components=13)
pca.fit(X)
plt.title('PCA')
plt.plot(pca.singular_values_, label='Singular values')
plt.legend()
plt.show()
```
We apply an 8-component PCA decomposition and train the Ridge algorithm again to see if it improves.
```
pca = decomposition.PCA(n_components=8)
pca.fit(X)
X_pca = pca.transform(X)
x_train, x_test, y_train, y_test = train_test_split(X_pca, y, train_size=0.8, shuffle=True, random_state=1986)
print('Train size:', x_train.shape[0])
print('Test size:', x_test.shape[0])
min_ts = int(x_train.shape[0] * 0.01)
max_ts = int(x_train.shape[0] * 0.8)
step_ts = int(x_train.shape[0] * 0.1)
sizes = range(min_ts, max_ts, step_ts)
print(sizes)
train_size, train_score, test_scores = learning_curve(learner, x_train, y_train, train_sizes=sizes, cv=5, scoring=scoring, shuffle = True, random_state=1986, n_jobs=-1)
Plotting.plotAreaMeanStd(
'PCA - Learning curve',
train_size,
[train_score, test_scores],
scoring_neg,
['Training', 'Test'],
['r', 'g'],
'Training size',
scoring_label_loss)
```
The curve is not stable: it shows a large variance and a higher estimated risk, so the PCA decomposition does not improve the algorithm.
```
coef_list = []
for s in sizes:
x_train, x_test, y_train, y_test = train_test_split(X_pca, y, train_size=s, random_state=1986)
learner.fit(x_train, y_train)
coef_list.append(learner.coef_)
coef_matrix = numpy.array(coef_list)
pca = decomposition.PCA(n_components=2)
pca.fit(coef_matrix)
coef_pca = pca.transform(coef_matrix)
```
We plot the coefficients again after the decomposition to see how the situation changes.
```
fig, ax = plt.subplots()
plt.scatter(coef_pca[:,0], coef_pca[:,1])
```
# Week 1 Assignment: Data Validation
[Tensorflow Data Validation (TFDV)](https://cloud.google.com/solutions/machine-learning/analyzing-and-validating-data-at-scale-for-ml-using-tfx) is an open-source library that helps to understand, validate, and monitor production machine learning (ML) data at scale. Common use-cases include comparing training, evaluation and serving datasets, as well as checking for training/serving skew. You have seen the core functionalities of this package in the previous ungraded lab and you will get to practice them in this week's assignment.
In this lab, you will use TFDV in order to:
* Generate and visualize statistics from a dataframe
* Infer a dataset schema
* Calculate, visualize and fix anomalies
Let's begin!
## Table of Contents
- [1 - Setup and Imports](#1)
- [2 - Load the Dataset](#2)
- [2.1 - Read and Split the Dataset](#2-1)
- [2.1.1 - Data Splits](#2-1-1)
- [2.1.2 - Label Column](#2-1-2)
- [3 - Generate and Visualize Training Data Statistics](#3)
- [3.1 - Removing Irrelevant Features](#3-1)
- [Exercise 1 - Generate Training Statistics](#ex-1)
- [Exercise 2 - Visualize Training Statistics](#ex-2)
- [4 - Infer a Data Schema](#4)
- [Exercise 3: Infer the training set schema](#ex-3)
- [5 - Calculate, Visualize and Fix Evaluation Anomalies](#5)
- [Exercise 4: Compare Training and Evaluation Statistics](#ex-4)
- [Exercise 5: Detecting Anomalies](#ex-5)
- [Exercise 6: Fix evaluation anomalies in the schema](#ex-6)
- [6 - Schema Environments](#6)
- [Exercise 7: Check anomalies in the serving set](#ex-7)
- [Exercise 8: Modifying the domain](#ex-8)
- [Exercise 9: Detecting anomalies with environments](#ex-9)
- [7 - Check for Data Drift and Skew](#7)
- [8 - Display Stats for Data Slices](#8)
- [9 - Freeze the Schema](#8)
<a name='1'></a>
## 1 - Setup and Imports
```
# Import packages
import os
import pandas as pd
import tensorflow as tf
import tempfile, urllib, zipfile
import tensorflow_data_validation as tfdv
from tensorflow.python.lib.io import file_io
from tensorflow_data_validation.utils import slicing_util
from tensorflow_metadata.proto.v0.statistics_pb2 import DatasetFeatureStatisticsList, DatasetFeatureStatistics
# Set TF's logger to only display errors to avoid internal warnings being shown
tf.get_logger().setLevel('ERROR')
```
<a name='2'></a>
## 2 - Load the Dataset
You will be using the [Diabetes 130-US hospitals for years 1999-2008 Data Set](https://archive.ics.uci.edu/ml/datasets/diabetes+130-us+hospitals+for+years+1999-2008) donated to the University of California, Irvine (UCI) Machine Learning Repository. The dataset represents 10 years (1999-2008) of clinical care at 130 US hospitals and integrated delivery networks. It includes over 50 features representing patient and hospital outcomes.
This dataset has already been included in your Jupyter workspace so you can easily load it.
<a name='2-1'></a>
### 2.1 Read and Split the Dataset
```
# Read CSV data into a dataframe and recognize the missing data that is encoded with '?' string as NaN
df = pd.read_csv('data/diabetic_data.csv', header=0, na_values = '?')
# Preview the dataset
df.head()
```
<a name='2-1-1'></a>
#### Data splits
In a production ML system, the model performance can be negatively affected by anomalies and divergence between data splits for training, evaluation, and serving. To emulate a production system, you will split the dataset into:
* 70% training set
* 15% evaluation set
* 15% serving set
You will then use TFDV to visualize, analyze, and understand the data. You will create a data schema from the training dataset, then compare the evaluation and serving sets with this schema to detect anomalies and data drift/skew.
<a name='2-1-2'></a>
#### Label Column
This dataset has been prepared to analyze the factors related to readmission outcome. In this notebook, you will treat the `readmitted` column as the *target* or label column.
The target (or label) is important to know while splitting the data into training, evaluation and serving sets. In supervised learning, you need to include the target in the training and evaluation datasets. For the serving set however (i.e. the set that simulates the data coming from your users), the **label column needs to be dropped** since that is the feature that your model will be trying to predict.
The following function returns the training, evaluation and serving partitions of a given dataset:
```
def prepare_data_splits_from_dataframe(df):
'''
Splits a Pandas Dataframe into training, evaluation and serving sets.
Parameters:
df : pandas dataframe to split
Returns:
train_df: Training dataframe(70% of the entire dataset)
eval_df: Evaluation dataframe (15% of the entire dataset)
serving_df: Serving dataframe (15% of the entire dataset, label column dropped)
'''
# 70% of records for generating the training set
train_len = int(len(df) * 0.7)
# Remaining 30% of records for generating the evaluation and serving sets
eval_serv_len = len(df) - train_len
# Half of the 30%, which makes up 15% of total records, for generating the evaluation set
eval_len = eval_serv_len // 2
# Remaining 15% of total records for generating the serving set
serv_len = eval_serv_len - eval_len
# Sample the train, validation and serving sets. We specify a random state for repeatable outcomes.
train_df = df.iloc[:train_len].sample(frac=1, random_state=48).reset_index(drop=True)
eval_df = df.iloc[train_len: train_len + eval_len].sample(frac=1, random_state=48).reset_index(drop=True)
serving_df = df.iloc[train_len + eval_len: train_len + eval_len + serv_len].sample(frac=1, random_state=48).reset_index(drop=True)
# Serving data emulates the data that would be submitted for predictions, so it should not have the label column.
serving_df = serving_df.drop(['readmitted'], axis=1)
return train_df, eval_df, serving_df
# Split the datasets
train_df, eval_df, serving_df = prepare_data_splits_from_dataframe(df)
print('Training dataset has {} records\nValidation dataset has {} records\nServing dataset has {} records'.format(len(train_df),len(eval_df),len(serving_df)))
```
<a name='3'></a>
## 3 - Generate and Visualize Training Data Statistics
In this section, you will be generating descriptive statistics from the dataset. This is usually the first step when dealing with a dataset you are not yet familiar with. It is also known as performing an *exploratory data analysis* and its purpose is to understand the data types, the data itself and any possible issues that need to be addressed.
It is important to mention that **exploratory data analysis should be performed on the training dataset** only. This is because getting information out of the evaluation or serving datasets can be seen as "cheating" since this data is used to emulate data that you have not collected yet and will try to predict using your ML algorithm. **In general, it is a good practice to avoid leaking information from your evaluation and serving data into your model.**
<a name='3-1'></a>
### Removing Irrelevant Features
Before you generate the statistics, you may want to drop irrelevant features from your dataset. You can do that with TFDV with the [tfdv.StatsOptions](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/StatsOptions) class. It is usually **not a good idea** to drop features without knowing what information they contain. However there are times when this can be fairly obvious.
One of the important parameters of the `StatsOptions` class is `feature_allowlist`, which defines the features to include while calculating the data statistics. You can check the [documentation](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/StatsOptions#args) to learn more about the class arguments.
In this case, you will omit the statistics for `encounter_id` and `patient_nbr` since they are part of the internal tracking of patients in the hospital and they don't contain valuable information for the task at hand.
```
# Define features to remove
features_to_remove = {'encounter_id', 'patient_nbr'}
# Collect features to include while computing the statistics
approved_cols = [col for col in df.columns if (col not in features_to_remove)]
# Instantiate a StatsOptions class and define the feature_allowlist property
stats_options = tfdv.StatsOptions(feature_allowlist=approved_cols)
# Review the features to generate the statistics
for feature in stats_options.feature_allowlist:
print(feature)
```
<a name='ex-1'></a>
### Exercise 1: Generate Training Statistics
TFDV allows you to generate statistics from different data formats such as CSV or a Pandas DataFrame.
Since you already have the data stored in a DataFrame you can use the function [`tfdv.generate_statistics_from_dataframe()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/generate_statistics_from_dataframe) which, given a DataFrame and `stats_options`, generates an object of type `DatasetFeatureStatisticsList`. This object includes the computed statistics of the given dataset.
Complete the cell below to generate the statistics of the training set. Remember to pass the training dataframe and the `stats_options` that you defined above as arguments.
```
### START CODE HERE
train_stats = tfdv.generate_statistics_from_dataframe(dataframe=train_df,
stats_options=stats_options)
### END CODE HERE
# TEST CODE
# get the number of features used to compute statistics
print(f"Number of features used: {len(train_stats.datasets[0].features)}")
# check the number of examples used
print(f"Number of examples used: {train_stats.datasets[0].num_examples}")
# check the column names of the first and last feature
print(f"First feature: {train_stats.datasets[0].features[0].path.step[0]}")
print(f"Last feature: {train_stats.datasets[0].features[-1].path.step[0]}")
```
**Expected Output:**
```
Number of features used: 48
Number of examples used: 71236
First feature: race
Last feature: readmitted
```
<a name='ex-2'></a>
### Exercise 2: Visualize Training Statistics
Now that you have the computed statistics in the `DatasetFeatureStatisticsList` instance, you will need a way to **visualize** these to get actual insights. TFDV provides this functionality through the method [`tfdv.visualize_statistics()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/visualize_statistics).
Using this function in an interactive Python environment such as this one will output a very nice and convenient way to interact with the descriptive statistics you generated earlier.
**Try it out yourself!** Remember to pass in the generated training statistics in the previous exercise as an argument.
```
### START CODE HERE
tfdv.visualize_statistics(train_stats)
### END CODE HERE
```
<a name='4'></a>
## 4 - Infer a data schema
A schema defines the **properties of the data** and can thus be used to detect errors. Some of these properties include:
- which features are expected to be present
- feature type
- the number of values for a feature in each example
- the presence of each feature across all examples
- the expected domains of features
The schema is expected to be fairly static, whereas statistics can vary per data split. So, you will **infer the data schema from only the training dataset**. Later, you will generate statistics for evaluation and serving datasets and compare their state with the data schema to detect anomalies, drift and skew.
<a name='ex-3'></a>
### Exercise 3: Infer the training set schema
Schema inference is straightforward using [`tfdv.infer_schema()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/infer_schema). This function needs only the **statistics** (an instance of `DatasetFeatureStatisticsList`) of your data as input. The output will be a Schema [protocol buffer](https://developers.google.com/protocol-buffers) containing the results.
A complementary function is [`tfdv.display_schema()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/display_schema) for displaying the schema in a table. This accepts a **Schema** protocol buffer as input.
Fill the code below to infer the schema from the training statistics using TFDV and display the result.
```
### START CODE HERE
# Infer the data schema by using the training statistics that you generated
schema = tfdv.infer_schema(statistics=train_stats)
# Display the data schema
tfdv.display_schema(schema)
### END CODE HERE
# TEST CODE
# Check number of features
print(f"Number of features in schema: {len(schema.feature)}")
# Check domain name of 2nd feature
print(f"Second feature in schema: {list(schema.feature)[1].domain}")
```
**Expected Output:**
```
Number of features in schema: 48
Second feature in schema: gender
```
**Be sure to check the information displayed before moving forward.**
<a name='5'></a>
## 5 - Calculate, Visualize and Fix Evaluation Anomalies
It is important that the schema of the evaluation data is consistent with the training data since the data that your model is going to receive should be consistent to the one you used to train it with.
Moreover, it is also important that the **features of the evaluation data belong roughly to the same range as the training data**. This ensures that the model will be evaluated on a similar loss surface covered during training.
<a name='ex-4'></a>
### Exercise 4: Compare Training and Evaluation Statistics
Now you are going to generate the evaluation statistics and compare them with the training statistics. You can use the [`tfdv.generate_statistics_from_dataframe()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/generate_statistics_from_dataframe) function for this. But this time, you'll need to pass the **evaluation data**. For the `stats_options` parameter, the same options you defined before work here too.
Remember that to visualize the evaluation statistics you can use [`tfdv.visualize_statistics()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/visualize_statistics).
However, it is impractical to visualize both statistics separately and do your comparison from there. Fortunately, TFDV has got this covered. You can use the `visualize_statistics` function and pass additional parameters to overlay the statistics from both datasets (referenced as left-hand side and right-hand side statistics). Let's see what these parameters are:
- `lhs_statistics`: Required parameter. Expects an instance of `DatasetFeatureStatisticsList `.
- `rhs_statistics`: Expects an instance of `DatasetFeatureStatisticsList ` to compare with `lhs_statistics`.
- `lhs_name`: Name of the `lhs_statistics` dataset.
- `rhs_name`: Name of the `rhs_statistics` dataset.
For this case, remember to define the `lhs_statistics` protocol with the `eval_stats`, and the optional `rhs_statistics` protocol with the `train_stats`.
Additionally, check the function for the protocol name declaration, and define the lhs and rhs names as `'EVAL_DATASET'` and `'TRAIN_DATASET'` respectively.
```
### START CODE HERE
# Generate evaluation dataset statistics
# HINT: Remember to use the evaluation dataframe and to pass the stats_options (that you defined before) as an argument
eval_stats = tfdv.generate_statistics_from_dataframe(eval_df,
stats_options=stats_options)
# Compare evaluation data with training data
# HINT: Remember to use both the evaluation and training statistics with the lhs_statistics and rhs_statistics arguments
# HINT: Assign the names of 'EVAL_DATASET' and 'TRAIN_DATASET' to the lhs and rhs protocols
tfdv.visualize_statistics(
lhs_statistics=eval_stats,
rhs_statistics=train_stats,
lhs_name='EVAL_DATASET',
rhs_name='TRAIN_DATASET'
)
### END CODE HERE
# TEST CODE
# get the number of features used to compute statistics
print(f"Number of features: {len(eval_stats.datasets[0].features)}")
# check the number of examples used
print(f"Number of examples: {eval_stats.datasets[0].num_examples}")
# check the column names of the first and last feature
print(f"First feature: {eval_stats.datasets[0].features[0].path.step[0]}")
print(f"Last feature: {eval_stats.datasets[0].features[-1].path.step[0]}")
```
**Expected Output:**
```
Number of features: 48
Number of examples: 15265
First feature: race
Last feature: readmitted
```
<a name='ex-5'></a>
### Exercise 5: Detecting Anomalies ###
At this point, you should ask if your evaluation dataset matches the schema from your training dataset. For instance, if you scroll through the output cell in the previous exercise, you can see that the categorical feature **glimepiride-pioglitazone** has 1 unique value in the training set while the evaluation dataset has 2. You can verify with the built-in Pandas `describe()` method as well.
```
train_df["glimepiride-pioglitazone"].describe()
eval_df["glimepiride-pioglitazone"].describe()
```
It is possible but highly inefficient to visually inspect and determine all the anomalies. So, let's instead use TFDV functions to detect and display these.
You can use the function [`tfdv.validate_statistics()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/validate_statistics) for detecting anomalies and [`tfdv.display_anomalies()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/display_anomalies) for displaying them.
The `validate_statistics()` method has two required arguments:
- an instance of `DatasetFeatureStatisticsList`
- an instance of `Schema`
Fill in the following graded function which, given the statistics and schema, displays the anomalies found.
```
def calculate_and_display_anomalies(statistics, schema):
'''
Calculate and display anomalies.
Parameters:
statistics : Data statistics in statistics_pb2.DatasetFeatureStatisticsList format
schema : Data schema in schema_pb2.Schema format
Returns:
display of calculated anomalies
'''
### START CODE HERE
# HINTS: Pass the statistics and schema parameters into the validation function
    anomalies = tfdv.validate_statistics(statistics=statistics, schema=schema)
# HINTS: Display input anomalies by using the calculated anomalies
tfdv.display_anomalies(anomalies)
### END CODE HERE
```
You should see detected anomalies in the `medical_specialty` and `glimepiride-pioglitazone` features by running the cell below.
```
# Check evaluation data for errors by validating the evaluation data statistics using the previously inferred schema
calculate_and_display_anomalies(eval_stats, schema=schema)
```
<a name='ex-6'></a>
### Exercise 6: Fix evaluation anomalies in the schema
The evaluation data has records with values for the features **glimepiride-pioglitazone** and **medical_specialty** that were not included in the schema generated from the training data. You can fix this by adding the new values that exist in the evaluation dataset to the domain of these features.
To get the `domain` of a particular feature you can use [`tfdv.get_domain()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/get_domain).
You can use the `append()` method on the `value` property of the returned `domain` to add strings to the list of valid values. To be more explicit, given a domain you can do something like:
```python
domain.value.append("feature_value")
```
```
### START CODE HERE
# Get the domain associated with the input feature, glimepiride-pioglitazone, from the schema
glimepiride_pioglitazone_domain = tfdv.get_domain(schema, 'glimepiride-pioglitazone')
# HINT: Append the missing value 'Steady' to the domain
glimepiride_pioglitazone_domain.value.append('Steady')
# Get the domain associated with the input feature, medical_specialty, from the schema
medical_specialty_domain = tfdv.get_domain(schema, 'medical_specialty')
# HINT: Append the missing value 'Neurophysiology' to the domain
medical_specialty_domain.value.append('Neurophysiology')
# HINT: Re-calculate and re-display anomalies with the new schema
calculate_and_display_anomalies(eval_stats, schema=schema)
### END CODE HERE
```
If you did the exercise correctly, you should see *"No anomalies found."* after running the cell above.
<a name='6'></a>
## 6 - Schema Environments
By default, all datasets in a pipeline should use the same schema. However, there are some exceptions.
For example, the **label column is dropped in the serving set** so this will be flagged when comparing with the training set schema.
**In this case, introducing slight schema variations is necessary.**
<a name='ex-7'></a>
### Exercise 7: Check anomalies in the serving set
Now you are going to check for anomalies in the **serving data**. The process is very similar to the one you previously did for the evaluation data with a little change.
Let's create a new `StatsOptions` that is aware of the information provided by the schema and use it when generating statistics from the serving DataFrame.
```
# Define a new statistics options by the tfdv.StatsOptions class for the serving data by passing the previously inferred schema
options = tfdv.StatsOptions(schema=schema,
infer_type_from_schema=True,
feature_allowlist=approved_cols)
### START CODE HERE
# Generate serving dataset statistics
# HINT: Remember to use the serving dataframe and to pass the newly defined statistics options
serving_stats = tfdv.generate_statistics_from_dataframe(dataframe=serving_df,
stats_options=options)
# HINT: Calculate and display anomalies using the generated serving statistics
calculate_and_display_anomalies(serving_stats, schema=schema)
### END CODE HERE
```
You should see that the `metformin-rosiglitazone`, `metformin-pioglitazone`, `payer_code` and `medical_specialty` features each have an anomaly (i.e. unexpected string values) affecting less than 1% of the examples.
Let's **relax the anomaly detection constraints** for the last two of these features by defining the `min_domain_mass` of the feature's distribution constraints.
```
# This relaxes the minimum fraction of values that must come from the domain for the feature.
# Get the feature and relax to match 90% of the domain
payer_code = tfdv.get_feature(schema, 'payer_code')
payer_code.distribution_constraints.min_domain_mass = 0.9
# Get the feature and relax to match 90% of the domain
medical_specialty = tfdv.get_feature(schema, 'medical_specialty')
medical_specialty.distribution_constraints.min_domain_mass = 0.9
# Detect anomalies with the updated constraints
calculate_and_display_anomalies(serving_stats, schema=schema)
```
If the `payer_code` and `medical_specialty` are no longer part of the output cell, then the relaxation worked!
<a name='ex-8'></a>
### Exercise 8: Modifying the Domain
Let's investigate the possible cause of the anomalies for the other features, namely `metformin-pioglitazone` and `metformin-rosiglitazone`. From the output of the previous exercise, you'll see that the `anomaly long description` says: "Examples contain values missing from the schema: Steady (<1%)". You can redisplay the schema and look at the domain of these features to verify this statement.
When you inferred the schema at the start of this lab, it's possible that some values were not detected in the training data, so they were not included in the expected domain values of the feature's schema. In the case of `metformin-rosiglitazone` and `metformin-pioglitazone`, the value "Steady" is indeed missing. You will just see "No" in the domain of these two features after running the code cell below.
```
tfdv.display_schema(schema)
```
Towards the bottom of the Domain-Values pairs of the cell above, you can see that many features (including **'metformin'**) have the same values: `['Down', 'No', 'Steady', 'Up']`. These values are common to many features including the ones with missing values during schema inference.
TFDV allows you to modify the domains of some features to match an existing domain. To address the detected anomaly, you can **set the domain** of these features to the domain of the `metformin` feature.
Complete the function below to set the domain of a feature list to an existing feature domain.
For this, use the [`tfdv.set_domain()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/set_domain) function, which has the following parameters:
- `schema`: The schema
- `feature_path`: The name of the feature whose domain needs to be set.
- `domain`: A domain protocol buffer or the name of a global string domain present in the input schema.
```
def modify_domain_of_features(features_list, schema, to_domain_name):
'''
Modify a list of features' domains.
Parameters:
features_list : Features that need to be modified
schema: Inferred schema
to_domain_name : Target domain to be transferred to the features list
Returns:
schema: new schema
'''
### START CODE HERE
# HINT: Loop over the feature list and use set_domain with the inferred schema, feature name and target domain name
for feature in features_list:
tfdv.set_domain(schema, feature, to_domain_name)
### END CODE HERE
return schema
```
Using this function, set the domain of the features defined in the `domain_change_features` list below to be equal to **metformin's domain** to address the anomalies found.
**Since you are overriding the existing domain of the features, it is normal to get a warning so you don't do this by accident.**
```
domain_change_features = ['repaglinide', 'nateglinide', 'chlorpropamide', 'glimepiride',
'acetohexamide', 'glipizide', 'glyburide', 'tolbutamide', 'pioglitazone',
'rosiglitazone', 'acarbose', 'miglitol', 'troglitazone', 'tolazamide',
'examide', 'citoglipton', 'insulin', 'glyburide-metformin', 'glipizide-metformin',
'glimepiride-pioglitazone', 'metformin-rosiglitazone', 'metformin-pioglitazone']
# Infer new schema by using your modify_domain_of_features function
# and the defined domain_change_features feature list
schema = modify_domain_of_features(domain_change_features, schema, 'metformin')
# Display new schema
tfdv.display_schema(schema)
# TEST CODE
# check that the domain of some features are now switched to `metformin`
print(f"Domain name of 'chlorpropamide': {tfdv.get_feature(schema, 'chlorpropamide').domain}")
print(f"Domain values of 'chlorpropamide': {tfdv.get_domain(schema, 'chlorpropamide').value}")
print(f"Domain name of 'repaglinide': {tfdv.get_feature(schema, 'repaglinide').domain}")
print(f"Domain values of 'repaglinide': {tfdv.get_domain(schema, 'repaglinide').value}")
print(f"Domain name of 'nateglinide': {tfdv.get_feature(schema, 'nateglinide').domain}")
print(f"Domain values of 'nateglinide': {tfdv.get_domain(schema, 'nateglinide').value}")
```
**Expected Output:**
```
Domain name of 'chlorpropamide': metformin
Domain values of 'chlorpropamide': ['Down', 'No', 'Steady', 'Up']
Domain name of 'repaglinide': metformin
Domain values of 'repaglinide': ['Down', 'No', 'Steady', 'Up']
Domain name of 'nateglinide': metformin
Domain values of 'nateglinide': ['Down', 'No', 'Steady', 'Up']
```
Let's do a final check of anomalies to see if this solved the issue.
```
calculate_and_display_anomalies(serving_stats, schema=schema)
```
You should now see the `metformin-pioglitazone` and `metformin-rosiglitazone` features dropped from the output anomalies.
<a name='ex-9'></a>
### Exercise 9: Detecting anomalies with environments
There is still one thing to address. The `readmitted` feature (which is the label column) showed up as an anomaly ('Column dropped'). Since labels are not expected in the serving data, let's tell TFDV to ignore this detected anomaly.
This requirement of introducing slight schema variations can be expressed by using [environments](https://www.tensorflow.org/tfx/data_validation/get_started#schema_environments). In particular, features in the schema can be associated with a set of environments using `default_environment`, `in_environment` and `not_in_environment`.
```
# All features are by default in both TRAINING and SERVING environments.
schema.default_environment.append('TRAINING')
schema.default_environment.append('SERVING')
```
Complete the code below to exclude the `readmitted` feature from the `SERVING` environment.
To achieve this, you can use the [`tfdv.get_feature()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/get_feature) function to get the `readmitted` feature from the inferred schema and use its `not_in_environment` attribute to specify that `readmitted` should be removed from the `SERVING` environment's schema. This **attribute is a list** so you will have to **append** the name of the environment that you wish to omit this feature for.
To be more explicit, given a feature you can do something like:
```python
feature.not_in_environment.append('NAME_OF_ENVIRONMENT')
```
The function `tfdv.get_feature` receives the following parameters:
- `schema`: The schema.
- `feature_path`: The path of the feature to obtain from the schema. In this case this is equal to the name of the feature.
```
### START CODE HERE
# Specify that 'readmitted' feature is not in SERVING environment.
# HINT: Append the 'SERVING' environment to the not_in_environment attribute of the feature
tfdv.get_feature(schema, 'readmitted').not_in_environment.append('SERVING')
# HINT: Calculate anomalies with the validate_statistics function by using the serving statistics,
# inferred schema and the SERVING environment parameter.
serving_anomalies_with_env = tfdv.validate_statistics(serving_stats, schema, environment='SERVING')
### END CODE HERE
```
You should see "No anomalies found" by running the cell below.
```
# Display anomalies
tfdv.display_anomalies(serving_anomalies_with_env)
```
Now you have successfully addressed all anomaly-related issues!
<a name='7'></a>
## 7 - Check for Data Drift and Skew
During data validation, you also need to check for data drift and data skew between the training and serving data. You can do this by specifying the [skew_comparator and drift_comparator](https://www.tensorflow.org/tfx/data_validation/get_started#checking_data_skew_and_drift) in the schema.
Drift and skew are expressed in terms of [L-infinity distance](https://en.wikipedia.org/wiki/Chebyshev_distance), which evaluates the difference between vectors as the greatest of the differences along any coordinate dimension.
You can set the threshold distance so that you receive warnings when the drift is higher than is acceptable. Setting the correct distance is typically an iterative process requiring domain knowledge and experimentation.
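To make the metric concrete, here is a minimal NumPy sketch of how an L-infinity distance between two categorical value distributions is computed conceptually. The value counts below are made up for illustration and are not taken from this dataset:

```python
import numpy as np

# Hypothetical value counts for one categorical feature in two datasets
train_counts = np.array([900, 80, 20], dtype=float)    # e.g. ['No', 'Steady', 'Up']
serving_counts = np.array([850, 120, 30], dtype=float)

# Normalize to relative frequencies so the comparison is scale-independent
train_freq = train_counts / train_counts.sum()
serving_freq = serving_counts / serving_counts.sum()

# L-infinity (Chebyshev) distance: the largest absolute per-value difference
linf_distance = np.max(np.abs(train_freq - serving_freq))
print(f"L-infinity distance: {linf_distance:.4f}")  # compare against the 0.03 threshold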
Let's check for the skew in the **diabetesMed** feature and drift in the **payer_code** feature.
```
# Calculate skew for the diabetesMed feature
diabetes_med = tfdv.get_feature(schema, 'diabetesMed')
diabetes_med.skew_comparator.infinity_norm.threshold = 0.03 # domain knowledge helps to determine this threshold
# Calculate drift for the payer_code feature
payer_code = tfdv.get_feature(schema, 'payer_code')
payer_code.drift_comparator.infinity_norm.threshold = 0.03 # domain knowledge helps to determine this threshold
# Calculate anomalies
skew_drift_anomalies = tfdv.validate_statistics(train_stats, schema,
previous_statistics=eval_stats,
serving_statistics=serving_stats)
# Display anomalies
tfdv.display_anomalies(skew_drift_anomalies)
```
In both of these cases, the detected anomaly distance is not too far from the threshold value of `0.03`. For this exercise, let's accept this as within bounds (i.e. you can set the distance to something like `0.035` instead).
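If you do decide the observed distances are acceptable, a hedged sketch of that adjustment (reusing the `diabetes_med` and `payer_code` handles defined above) could look like this:

```python
# Raise the comparator thresholds slightly so the observed distances fall within bounds
diabetes_med.skew_comparator.infinity_norm.threshold = 0.035
payer_code.drift_comparator.infinity_norm.threshold = 0.035

# Re-validate with the relaxed thresholds
skew_drift_anomalies = tfdv.validate_statistics(train_stats, schema,
                                                previous_statistics=eval_stats,
                                                serving_statistics=serving_stats)
tfdv.display_anomalies(skew_drift_anomalies)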
**However, if the anomaly truly indicates a skew and drift, then further investigation is necessary as this could have a direct impact on model performance.**
<a name='8'></a>
## 8 - Display Stats for Data Slices <a class="anchor" id="fourth-objective"></a>
Finally, you can [slice the dataset and calculate the statistics](https://www.tensorflow.org/tfx/data_validation/get_started#computing_statistics_over_slices_of_data) for each unique value of a feature. By default, TFDV computes statistics for the overall dataset in addition to the configured slices. Each slice is identified by a unique name which is set as the dataset name in the [DatasetFeatureStatistics](https://github.com/tensorflow/metadata/blob/master/tensorflow_metadata/proto/v0/statistics.proto#L43) protocol buffer. Generating and displaying statistics over different slices of data can help track model and anomaly metrics.
Let's first define a few helper functions to make our code in the exercise more neat.
```
def split_datasets(dataset_list):
'''
split datasets.
Parameters:
dataset_list: List of datasets to split
Returns:
datasets: sliced data
'''
datasets = []
for dataset in dataset_list.datasets:
proto_list = DatasetFeatureStatisticsList()
proto_list.datasets.extend([dataset])
datasets.append(proto_list)
return datasets
def display_stats_at_index(index, datasets):
'''
display statistics at the specified data index
Parameters:
index : index to show the anomalies
datasets: split data
Returns:
display of generated sliced data statistics at the specified index
'''
if index < len(datasets):
print(datasets[index].datasets[0].name)
tfdv.visualize_statistics(datasets[index])
```
The function below returns a list of `DatasetFeatureStatisticsList` protocol buffers. As shown in the ungraded lab, the first one will be for `All Examples` followed by individual slices through the feature you specified.
To configure TFDV to generate statistics for dataset slices, you will use the function `tfdv.StatsOptions()` with the following 4 arguments:
- `schema`
- `slice_functions` passed as a list.
- `infer_type_from_schema` set to True.
- `feature_allowlist` set to the approved features.
Remember that `slice_functions` only work with [`generate_statistics_from_csv()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/generate_statistics_from_csv) so you will need to convert the dataframe to CSV.
```
def sliced_stats_for_slice_fn(slice_fn, approved_cols, dataframe, schema):
'''
generate statistics for the sliced data.
Parameters:
slice_fn : slicing definition
approved_cols: list of features to pass to the statistics options
dataframe: pandas dataframe to slice
schema: the schema
Returns:
slice_info_datasets: statistics for the sliced dataset
'''
# Set the StatsOptions
slice_stats_options = tfdv.StatsOptions(schema=schema,
slice_functions=[slice_fn],
infer_type_from_schema=True,
feature_allowlist=approved_cols)
# Convert Dataframe to CSV since `slice_functions` works only with `tfdv.generate_statistics_from_csv`
CSV_PATH = 'slice_sample.csv'
dataframe.to_csv(CSV_PATH)
# Calculate statistics for the sliced dataset
sliced_stats = tfdv.generate_statistics_from_csv(CSV_PATH, stats_options=slice_stats_options)
# Split the dataset using the previously defined split_datasets function
slice_info_datasets = split_datasets(sliced_stats)
return slice_info_datasets
```
With that, you can now use the helper functions to generate and visualize statistics for the sliced datasets.
```
# Generate slice function for the `medical_specialty` feature
slice_fn = slicing_util.get_feature_value_slicer(features={'medical_specialty': None})
# Generate stats for the sliced dataset
slice_datasets = sliced_stats_for_slice_fn(slice_fn, approved_cols, dataframe=train_df, schema=schema)
# Print name of slices for reference
print(f'Statistics generated for:\n')
print('\n'.join([sliced.datasets[0].name for sliced in slice_datasets]))
# Display at index 10, which corresponds to the slice named `medical_specialty_Gastroenterology`
display_stats_at_index(10, slice_datasets)
```
If you are curious, try different slice indices to extract the group statistics. For instance, `index=5` corresponds to all `medical_specialty_Surgery-General` records. You can also try slicing through multiple features as shown in the ungraded lab.
Another challenge is to implement your own helper functions. For instance, you can make a `display_stats_for_slice_name()` function so you don't have to determine the index of a slice. If done correctly, you can just do `display_stats_for_slice_name('medical_specialty_Gastroenterology', slice_datasets)` and it will generate the same result as `display_stats_at_index(10, slice_datasets)`.
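A minimal sketch of such a helper is shown below. The function name and lookup logic are illustrative (not part of the graded assignment), and it assumes the `slice_datasets` list produced above:

```python
def display_stats_for_slice_name(slice_name, datasets):
    '''Display statistics for the slice whose dataset name matches slice_name.'''
    for dataset_list in datasets:
        if dataset_list.datasets[0].name == slice_name:
            print(slice_name)
            tfdv.visualize_statistics(dataset_list)
            return
    print(f'No slice named {slice_name} was found.')

# Example usage (should match display_stats_at_index(10, slice_datasets)):
# display_stats_for_slice_name('medical_specialty_Gastroenterology', slice_datasets)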
<a name='9'></a>
## 9 - Freeze the schema
Now that the schema has been reviewed, you will store the schema in a file in its "frozen" state. This can be used to validate incoming data once your application goes live to your users.
This is pretty straightforward using Tensorflow's `io` utils and TFDV's [`write_schema_text()`](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/write_schema_text) function.
```
# Create output directory
OUTPUT_DIR = "output"
file_io.recursive_create_dir(OUTPUT_DIR)
# Use TensorFlow text output format pbtxt to store the schema
schema_file = os.path.join(OUTPUT_DIR, 'schema.pbtxt')
# write_schema_text function expects the defined schema and output path as parameters
tfdv.write_schema_text(schema, schema_file)
```
After submitting this assignment, you can click the Jupyter logo in the left upper corner of the screen to check the Jupyter filesystem. The `schema.pbtxt` file should be inside the `output` directory.
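Once your application is live, you can read the frozen schema back and validate statistics computed over new data against it. Below is a minimal sketch, assuming the same `schema_file` path and using `tfdv.load_schema_text()` (the read counterpart of `write_schema_text`); here `serving_stats` simply stands in for statistics computed over freshly collected data:

```python
# Load the frozen schema back from disk
loaded_schema = tfdv.load_schema_text(schema_file)

# Validate statistics for incoming data against the frozen schema
new_anomalies = tfdv.validate_statistics(serving_stats, loaded_schema, environment='SERVING')
tfdv.display_anomalies(new_anomalies)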
**Congratulations on finishing this week's assignment!** A lot of concepts were introduced and now you should feel more familiar with using TFDV for inferring schemas, anomaly detection and other data-related tasks.
**Keep it up!**
|
github_jupyter
|
# Import packages
import os
import pandas as pd
import tensorflow as tf
import tempfile, urllib, zipfile
import tensorflow_data_validation as tfdv
from tensorflow.python.lib.io import file_io
from tensorflow_data_validation.utils import slicing_util
from tensorflow_metadata.proto.v0.statistics_pb2 import DatasetFeatureStatisticsList, DatasetFeatureStatistics
# Set TF's logger to only display errors to avoid internal warnings being shown
tf.get_logger().setLevel('ERROR')
# Read CSV data into a dataframe and recognize the missing data that is encoded with '?' string as NaN
df = pd.read_csv('data/diabetic_data.csv', header=0, na_values = '?')
# Preview the dataset
df.head()
def prepare_data_splits_from_dataframe(df):
'''
Splits a Pandas Dataframe into training, evaluation and serving sets.
Parameters:
df : pandas dataframe to split
Returns:
train_df: Training dataframe(70% of the entire dataset)
eval_df: Evaluation dataframe (15% of the entire dataset)
serving_df: Serving dataframe (15% of the entire dataset, label column dropped)
'''
# 70% of records for generating the training set
train_len = int(len(df) * 0.7)
# Remaining 30% of records for generating the evaluation and serving sets
eval_serv_len = len(df) - train_len
# Half of the 30%, which makes up 15% of total records, for generating the evaluation set
eval_len = eval_serv_len // 2
# Remaining 15% of total records for generating the serving set
serv_len = eval_serv_len - eval_len
# Sample the train, validation and serving sets. We specify a random state for repeatable outcomes.
train_df = df.iloc[:train_len].sample(frac=1, random_state=48).reset_index(drop=True)
eval_df = df.iloc[train_len: train_len + eval_len].sample(frac=1, random_state=48).reset_index(drop=True)
serving_df = df.iloc[train_len + eval_len: train_len + eval_len + serv_len].sample(frac=1, random_state=48).reset_index(drop=True)
# Serving data emulates the data that would be submitted for predictions, so it should not have the label column.
serving_df = serving_df.drop(['readmitted'], axis=1)
return train_df, eval_df, serving_df
# Split the datasets
train_df, eval_df, serving_df = prepare_data_splits_from_dataframe(df)
print('Training dataset has {} records\nValidation dataset has {} records\nServing dataset has {} records'.format(len(train_df),len(eval_df),len(serving_df)))
# Define features to remove
features_to_remove = {'encounter_id', 'patient_nbr'}
# Collect features to include while computing the statistics
approved_cols = [col for col in df.columns if (col not in features_to_remove)]
# Instantiate a StatsOptions class and define the feature_allowlist property
stats_options = tfdv.StatsOptions(feature_allowlist=approved_cols)
# Review the features to generate the statistics
for feature in stats_options.feature_allowlist:
print(feature)
### START CODE HERE
train_stats = tfdv.generate_statistics_from_dataframe(dataframe=train_df,
stats_options=stats_options)
### END CODE HERE
# TEST CODE
# get the number of features used to compute statistics
print(f"Number of features used: {len(train_stats.datasets[0].features)}")
# check the number of examples used
print(f"Number of examples used: {train_stats.datasets[0].num_examples}")
# check the column names of the first and last feature
print(f"First feature: {train_stats.datasets[0].features[0].path.step[0]}")
print(f"Last feature: {train_stats.datasets[0].features[-1].path.step[0]}")
Number of features used: 48
Number of examples used: 71236
First feature: race
Last feature: readmitted
### START CODE HERE
tfdv.visualize_statistics(train_stats)
### END CODE HERE
### START CODE HERE
# Infer the data schema by using the training statistics that you generated
schema = tfdv.infer_schema(statistics=train_stats)
# Display the data schema
tfdv.display_schema(schema)
### END CODE HERE
# TEST CODE
# Check number of features
print(f"Number of features in schema: {len(schema.feature)}")
# Check domain name of 2nd feature
print(f"Second feature in schema: {list(schema.feature)[1].domain}")
Number of features in schema: 48
Second feature in schema: gender
### START CODE HERE
# Generate evaluation dataset statistics
# HINT: Remember to use the evaluation dataframe and to pass the stats_options (that you defined before) as an argument
eval_stats = tfdv.generate_statistics_from_dataframe(eval_df,
stats_options=stats_options)
# Compare evaluation data with training data
# HINT: Remember to use both the evaluation and training statistics with the lhs_statistics and rhs_statistics arguments
# HINT: Assign the names of 'EVAL_DATASET' and 'TRAIN_DATASET' to the lhs and rhs protocols
tfdv.visualize_statistics(
lhs_statistics=eval_stats,
rhs_statistics=train_stats,
lhs_name='EVAL_DATASET',
rhs_name='TRAIN_DATASET'
)
### END CODE HERE
# TEST CODE
# get the number of features used to compute statistics
print(f"Number of features: {len(eval_stats.datasets[0].features)}")
# check the number of examples used
print(f"Number of examples: {eval_stats.datasets[0].num_examples}")
# check the column names of the first and last feature
print(f"First feature: {eval_stats.datasets[0].features[0].path.step[0]}")
print(f"Last feature: {eval_stats.datasets[0].features[-1].path.step[0]}")
Number of features: 48
Number of examples: 15265
First feature: race
Last feature: readmitted
train_df["glimepiride-pioglitazone"].describe()
eval_df["glimepiride-pioglitazone"].describe()
def calculate_and_display_anomalies(statistics, schema):
'''
Calculate and display anomalies.
Parameters:
statistics : Data statistics in statistics_pb2.DatasetFeatureStatisticsList format
schema : Data schema in schema_pb2.Schema format
Returns:
display of calculated anomalies
'''
### START CODE HERE
# HINTS: Pass the statistics and schema parameters into the validation function
    anomalies = tfdv.validate_statistics(statistics=statistics, schema=schema)
# HINTS: Display input anomalies by using the calculated anomalies
tfdv.display_anomalies(anomalies)
### END CODE HERE
# Check evaluation data for errors by validating the evaluation data statistics using the previously inferred schema
calculate_and_display_anomalies(eval_stats, schema=schema)
domain.value.append("feature_value")
### START CODE HERE
# Get the domain associated with the input feature, glimepiride-pioglitazone, from the schema
glimepiride_pioglitazone_domain = tfdv.get_domain(schema, 'glimepiride-pioglitazone')
# HINT: Append the missing value 'Steady' to the domain
glimepiride_pioglitazone_domain.value.append('Steady')
# Get the domain associated with the input feature, medical_specialty, from the schema
medical_specialty_domain = tfdv.get_domain(schema, 'medical_specialty')
# HINT: Append the missing value 'Neurophysiology' to the domain
medical_specialty_domain.value.append('Neurophysiology')
# HINT: Re-calculate and re-display anomalies with the new schema
calculate_and_display_anomalies(eval_stats, schema=schema)
### END CODE HERE
# Define a new statistics options by the tfdv.StatsOptions class for the serving data by passing the previously inferred schema
options = tfdv.StatsOptions(schema=schema,
infer_type_from_schema=True,
feature_allowlist=approved_cols)
### START CODE HERE
# Generate serving dataset statistics
# HINT: Remember to use the serving dataframe and to pass the newly defined statistics options
serving_stats = tfdv.generate_statistics_from_dataframe(dataframe=serving_df,
stats_options=options)
# HINT: Calculate and display anomalies using the generated serving statistics
calculate_and_display_anomalies(serving_stats, schema=schema)
### END CODE HERE
# This relaxes the minimum fraction of values that must come from the domain for the feature.
# Get the feature and relax to match 90% of the domain
payer_code = tfdv.get_feature(schema, 'payer_code')
payer_code.distribution_constraints.min_domain_mass = 0.9
# Get the feature and relax to match 90% of the domain
medical_specialty = tfdv.get_feature(schema, 'medical_specialty')
medical_specialty.distribution_constraints.min_domain_mass = 0.9
# Detect anomalies with the updated constraints
calculate_and_display_anomalies(serving_stats, schema=schema)
tfdv.display_schema(schema)
def modify_domain_of_features(features_list, schema, to_domain_name):
'''
Modify a list of features' domains.
Parameters:
features_list : Features that need to be modified
schema: Inferred schema
to_domain_name : Target domain to be transferred to the features list
Returns:
schema: new schema
'''
### START CODE HERE
# HINT: Loop over the feature list and use set_domain with the inferred schema, feature name and target domain name
for feature in features_list:
tfdv.set_domain(schema, feature, to_domain_name)
### END CODE HERE
return schema
domain_change_features = ['repaglinide', 'nateglinide', 'chlorpropamide', 'glimepiride',
'acetohexamide', 'glipizide', 'glyburide', 'tolbutamide', 'pioglitazone',
'rosiglitazone', 'acarbose', 'miglitol', 'troglitazone', 'tolazamide',
'examide', 'citoglipton', 'insulin', 'glyburide-metformin', 'glipizide-metformin',
'glimepiride-pioglitazone', 'metformin-rosiglitazone', 'metformin-pioglitazone']
# Infer new schema by using your modify_domain_of_features function
# and the defined domain_change_features feature list
schema = modify_domain_of_features(domain_change_features, schema, 'metformin')
# Display new schema
tfdv.display_schema(schema)
# TEST CODE
# check that the domain of some features are now switched to `metformin`
print(f"Domain name of 'chlorpropamide': {tfdv.get_feature(schema, 'chlorpropamide').domain}")
print(f"Domain values of 'chlorpropamide': {tfdv.get_domain(schema, 'chlorpropamide').value}")
print(f"Domain name of 'repaglinide': {tfdv.get_feature(schema, 'repaglinide').domain}")
print(f"Domain values of 'repaglinide': {tfdv.get_domain(schema, 'repaglinide').value}")
print(f"Domain name of 'nateglinide': {tfdv.get_feature(schema, 'nateglinide').domain}")
print(f"Domain values of 'nateglinide': {tfdv.get_domain(schema, 'nateglinide').value}")
Domain name of 'chlorpropamide': metformin
Domain values of 'chlorpropamide': ['Down', 'No', 'Steady', 'Up']
Domain name of 'repaglinide': metformin
Domain values of 'repaglinide': ['Down', 'No', 'Steady', 'Up']
Domain name of 'nateglinide': metformin
Domain values of 'nateglinide': ['Down', 'No', 'Steady', 'Up']
calculate_and_display_anomalies(serving_stats, schema=schema)
# All features are by default in both TRAINING and SERVING environments.
schema.default_environment.append('TRAINING')
schema.default_environment.append('SERVING')
feature.not_in_environment.append('NAME_OF_ENVIRONMENT')
### START CODE HERE
# Specify that 'readmitted' feature is not in SERVING environment.
# HINT: Append the 'SERVING' environment to the not_in_environment attribute of the feature
tfdv.get_feature(schema, 'readmitted').not_in_environment.append('SERVING')
# HINT: Calculate anomalies with the validate_statistics function by using the serving statistics,
# inferred schema and the SERVING environment parameter.
serving_anomalies_with_env = tfdv.validate_statistics(serving_stats, schema, environment='SERVING')
### END CODE HERE
# Display anomalies
tfdv.display_anomalies(serving_anomalies_with_env)
# Calculate skew for the diabetesMed feature
diabetes_med = tfdv.get_feature(schema, 'diabetesMed')
diabetes_med.skew_comparator.infinity_norm.threshold = 0.03 # domain knowledge helps to determine this threshold
# Calculate drift for the payer_code feature
payer_code = tfdv.get_feature(schema, 'payer_code')
payer_code.drift_comparator.infinity_norm.threshold = 0.03 # domain knowledge helps to determine this threshold
# Calculate anomalies
skew_drift_anomalies = tfdv.validate_statistics(train_stats, schema,
previous_statistics=eval_stats,
serving_statistics=serving_stats)
# Display anomalies
tfdv.display_anomalies(skew_drift_anomalies)
def split_datasets(dataset_list):
'''
split datasets.
Parameters:
dataset_list: List of datasets to split
Returns:
datasets: sliced data
'''
datasets = []
for dataset in dataset_list.datasets:
proto_list = DatasetFeatureStatisticsList()
proto_list.datasets.extend([dataset])
datasets.append(proto_list)
return datasets
def display_stats_at_index(index, datasets):
'''
display statistics at the specified data index
Parameters:
index : index to show the anomalies
datasets: split data
Returns:
display of generated sliced data statistics at the specified index
'''
if index < len(datasets):
print(datasets[index].datasets[0].name)
tfdv.visualize_statistics(datasets[index])
def sliced_stats_for_slice_fn(slice_fn, approved_cols, dataframe, schema):
'''
generate statistics for the sliced data.
Parameters:
slice_fn : slicing definition
approved_cols: list of features to pass to the statistics options
dataframe: pandas dataframe to slice
schema: the schema
Returns:
slice_info_datasets: statistics for the sliced dataset
'''
# Set the StatsOptions
slice_stats_options = tfdv.StatsOptions(schema=schema,
slice_functions=[slice_fn],
infer_type_from_schema=True,
feature_allowlist=approved_cols)
# Convert Dataframe to CSV since `slice_functions` works only with `tfdv.generate_statistics_from_csv`
CSV_PATH = 'slice_sample.csv'
dataframe.to_csv(CSV_PATH)
# Calculate statistics for the sliced dataset
sliced_stats = tfdv.generate_statistics_from_csv(CSV_PATH, stats_options=slice_stats_options)
# Split the dataset using the previously defined split_datasets function
slice_info_datasets = split_datasets(sliced_stats)
return slice_info_datasets
# Generate slice function for the `medical_specialty` feature
slice_fn = slicing_util.get_feature_value_slicer(features={'medical_specialty': None})
# Generate stats for the sliced dataset
slice_datasets = sliced_stats_for_slice_fn(slice_fn, approved_cols, dataframe=train_df, schema=schema)
# Print name of slices for reference
print(f'Statistics generated for:\n')
print('\n'.join([sliced.datasets[0].name for sliced in slice_datasets]))
# Display at index 10, which corresponds to the slice named `medical_specialty_Gastroenterology`
display_stats_at_index(10, slice_datasets)
# Create output directory
OUTPUT_DIR = "output"
file_io.recursive_create_dir(OUTPUT_DIR)
# Use TensorFlow text output format pbtxt to store the schema
schema_file = os.path.join(OUTPUT_DIR, 'schema.pbtxt')
# write_schema_text function expects the defined schema and output path as parameters
tfdv.write_schema_text(schema, schema_file)
| 0.678433 | 0.988313 |
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ParameterGrid
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline
from sklearn.datasets import fetch_20newsgroups
from sklearn.tree import DecisionTreeClassifier
from sklearn.base import TransformerMixin,BaseEstimator
from sklearn.decomposition import TruncatedSVD
import matplotlib.pyplot as plt
import math
cats = ['alt.atheism', 'sci.space']
newsgroups_train = fetch_20newsgroups(subset='train', categories=cats)
newsgroups_test = fetch_20newsgroups(subset='test', categories=cats)
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
class SkippableTruncatedSVD(TruncatedSVD):
def __init__(self,skip=False,n_components=2, algorithm="randomized", n_iter=5,
random_state=None, tol=0.):
self.skip = skip
        # pass parameters by keyword so this also works with newer scikit-learn
        # versions, where TruncatedSVD's arguments after n_components are keyword-only
        super().__init__(n_components=n_components, algorithm=algorithm, n_iter=n_iter,
                         random_state=random_state, tol=tol)
# execute if not being skipped
def fit(self, X, y=None):
if self.skip:
return self
else:
return super().fit(X,y)
# execute if not being skipped
def fit_transform(self, X, y=None):
if self.skip:
return X
else:
return super().fit_transform(X,y)
# execute if not being skipped
def transform(self, X):
if self.skip:
return X
else:
return super().transform(X)
param_grid = [
{
'tfidf__max_features':[100,200,500,1000],
'svd__skip':[True,False],
'svd__n_components':[2,5,10,20]
}
]
len(ParameterGrid(param_grid))
pipeline = Pipeline([
('tfidf',TfidfVectorizer()),
('svd',SkippableTruncatedSVD()),
('clf',LogisticRegression())
])
num_cols = 3
num_rows = math.ceil(len(ParameterGrid(param_grid)) / num_cols)
plt.clf()
fig,axes = plt.subplots(num_rows,num_cols,sharey=True)
fig.set_size_inches(num_cols*5,num_rows*5)
for i,g in enumerate(ParameterGrid(param_grid)):
pipeline.set_params(**g)
pipeline.fit(X_train,y_train)
y_preds = pipeline.predict_proba(X_test)
# take the second column because the classifier outputs scores for
# the 0 class as well
preds = y_preds[:,1]
# fpr means false-positive-rate
# tpr means true-positive-rate
fpr, tpr, _ = metrics.roc_curve(y_test, preds)
auc_score = metrics.auc(fpr, tpr)
ax = axes[i // num_cols, i % num_cols]
ax.set_title(str(g),fontsize=8)
ax.plot(fpr, tpr, label='AUC = {:.2f}'.format(auc_score))
ax.legend(loc='lower right')
# it's helpful to add a diagonal to indicate where chance
# scores lie (i.e. just flipping a coin)
ax.plot([0,1],[0,1],'r--')
ax.set_xlim([-0.1,1.1])
ax.set_ylim([-0.1,1.1])
ax.set_ylabel('True Positive Rate')
ax.set_xlabel('False Positive Rate')
plt.show()
```
|
github_jupyter
|
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ParameterGrid
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline
from sklearn.datasets import fetch_20newsgroups
from sklearn.tree import DecisionTreeClassifier
from sklearn.base import TransformerMixin,BaseEstimator
from sklearn.decomposition import TruncatedSVD
import matplotlib.pyplot as plt
import math
cats = ['alt.atheism', 'sci.space']
newsgroups_train = fetch_20newsgroups(subset='train', categories=cats)
newsgroups_test = fetch_20newsgroups(subset='test', categories=cats)
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
class SkippableTruncatedSVD(TruncatedSVD):
def __init__(self,skip=False,n_components=2, algorithm="randomized", n_iter=5,
random_state=None, tol=0.):
self.skip = skip
        # pass parameters by keyword so this also works with newer scikit-learn
        # versions, where TruncatedSVD's arguments after n_components are keyword-only
        super().__init__(n_components=n_components, algorithm=algorithm, n_iter=n_iter,
                         random_state=random_state, tol=tol)
# execute if not being skipped
def fit(self, X, y=None):
if self.skip:
return self
else:
return super().fit(X,y)
# execute if not being skipped
def fit_transform(self, X, y=None):
if self.skip:
return X
else:
return super().fit_transform(X,y)
# execute if not being skipped
def transform(self, X):
if self.skip:
return X
else:
return super().transform(X)
param_grid = [
{
'tfidf__max_features':[100,200,500,1000],
'svd__skip':[True,False],
'svd__n_components':[2,5,10,20]
}
]
len(ParameterGrid(param_grid))
pipeline = Pipeline([
('tfidf',TfidfVectorizer()),
('svd',SkippableTruncatedSVD()),
('clf',LogisticRegression())
])
num_cols = 3
num_rows = math.ceil(len(ParameterGrid(param_grid)) / num_cols)
plt.clf()
fig,axes = plt.subplots(num_rows,num_cols,sharey=True)
fig.set_size_inches(num_cols*5,num_rows*5)
for i,g in enumerate(ParameterGrid(param_grid)):
pipeline.set_params(**g)
pipeline.fit(X_train,y_train)
y_preds = pipeline.predict_proba(X_test)
# take the second column because the classifier outputs scores for
# the 0 class as well
preds = y_preds[:,1]
# fpr means false-positive-rate
# tpr means true-positive-rate
fpr, tpr, _ = metrics.roc_curve(y_test, preds)
auc_score = metrics.auc(fpr, tpr)
ax = axes[i // num_cols, i % num_cols]
ax.set_title(str(g),fontsize=8)
ax.plot(fpr, tpr, label='AUC = {:.2f}'.format(auc_score))
ax.legend(loc='lower right')
# it's helpful to add a diagonal to indicate where chance
# scores lie (i.e. just flipping a coin)
ax.plot([0,1],[0,1],'r--')
ax.set_xlim([-0.1,1.1])
ax.set_ylim([-0.1,1.1])
ax.set_ylabel('True Positive Rate')
ax.set_xlabel('False Positive Rate')
plt.show()
| 0.833087 | 0.597549 |
- Topic: Challenge Set 1
- Subject: Explore MTA turnstile data
- Date: 04/13/2018
- Name: student name
- Worked with: other students' name
- Location: sea18_ds10/student_submissions/challenges/01-mta/shaikh_reshama/challenge_set_1_reshama.ipynb
#### Initial Setup
- Data was collected between mid-April and early May.
- The code is set up so that you only need to change the CSV file names; it will combine all three files into one dataframe, *super_df*.
We also removed many columns that did not add value to the analysis. The packages pandas, numpy, matplotlib, seaborn, datetime, and dateutil were used in this analysis.
```
import sys
# imports a library 'pandas', names it as 'pd'
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import Image
import pprint
# enables inline plots, without it plots don't show up in the notebook
%matplotlib inline
import dateutil.parser
from datetime import *
# various options in pandas
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 500)
pd.set_option('display.precision', 3)
#read data from this folder
df1 = pd.read_csv('turnstile_170422.csv')
df2 = pd.read_csv('turnstile_170429.csv')
df3 = pd.read_csv('turnstile_170506.csv')
#Work with turnstile name as combo
def Col_Con(mta):
mta['TURNSTILE'] = mta['C/A'] + mta['UNIT'] + mta['SCP']
mta.drop('C/A', axis=1, inplace = True)
mta.drop('UNIT', axis=1,inplace = True)
mta.drop('SCP', axis=1, inplace = True)
mta = mta.drop('LINENAME', axis=1)
mta = mta.drop('DIVISION', axis=1)
mta = mta.drop('DESC', axis=1)
mta.columns = mta.columns.str.strip()
return mta
mta1 = Col_Con(df1)
mta2 = Col_Con(df2)
mta3 = Col_Con(df3)
leng = len(mta1.index)
leng3 = len(mta2.index)
leng4 = len(mta3.index)
# make new columns, blank
#Only use if we are using turnstiles as data 'mta['Cu_ENT'] = [0] * leng'
def new_cols(mta, leng):
mta['ENT_COUNT'] = [0] * leng
mta['EXT_COUNT'] = [0] * leng
mta['DATE_TIME'] = [''] * leng
mta['DATE_TIME_WD'] = [''] * leng
mta['Donor_Est'] = [0] * leng
return mta
mta1 = new_cols(mta1, leng)
mta2 = new_cols(mta2, leng3)
mta3 = new_cols(mta3, leng4)
```
## Manipulating Data
### Datetime
A datetime column was added to help consolidate the dataframe later in the program. It helped us graph the mean traffic based on time of day.
### Summing Counts
The ENTRIES and EXITS columns are cumulative tallies, so they do not directly tell us how many entries occurred between a timestamp and the previous one. Using the column data, an entry count per time period was calculated as the difference between the counter at the current timestamp and the counter at the previous timestamp; for example, 300,000 - 289,000 = 11,000 people passed through during that interval.
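As a side note, the same per-interval counts can be computed without an explicit Python loop by differencing the cumulative counters within each turnstile group. The sketch below is an alternative design, not what this notebook uses, and it assumes the rows are already ordered by turnstile and time (as they are in the MTA files):

```python
# Vectorized alternative to the loops below: difference cumulative counters per turnstile
def add_interval_counts(mta):
    diffs = mta.groupby('TURNSTILE')[['ENTRIES', 'EXITS']].diff()
    # The first reading of each turnstile has no previous value; negative or
    # implausibly large jumps correspond to counter resets, so zero them out
    diffs = diffs.fillna(0).clip(lower=0)
    diffs[diffs > 10000] = 0
    mta['ENT_COUNT'] = diffs['ENTRIES'].astype(int)
    mta['EXT_COUNT'] = diffs['EXITS'].astype(int)
    return mta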
```
# Fill empty date time column with formula
# I dont think we will use this...
def date_time(df):
leng= len(df.index)
for i in range(leng):
datetime = df.DATE[i]+ ' ' + df.TIME[i]
value = dateutil.parser.parse(datetime)
df.at[i,'DATE_TIME'] = value
df.at[i,'DATE_TIME_WD'] = int(value.weekday())
return df
mta1 = date_time(mta1)
mta2 = date_time(mta2)
mta3 = date_time(mta3)
#fill empty ent_count with formula
def coun_ent(mta, leng):
for i in range(leng):
if i == 0:
mta.at[i,'ENT_COUNT'] = 0
else:
if mta.TURNSTILE[i] == mta.TURNSTILE[i-1]:
diff = mta.ENTRIES[i] - mta.ENTRIES[i-1]
if diff < 0: diff = 0
if diff > 10000: diff = 0
mta.at[i,'ENT_COUNT'] = diff
else:
mta.at[i,'ENT_COUNT'] = 0
return mta
mta1 = coun_ent(mta1,leng)
mta2 = coun_ent(mta2, leng3)
mta3 = coun_ent(mta3, leng4)
#fill in new exit counts
def coun_ext(mta):
leng = len(mta.index)
for i in range(leng):
if i == 0:
mta.at[i,'EXT_COUNT'] = 0
else:
if mta.TURNSTILE[i] == mta.TURNSTILE[i-1]:
diff = mta.EXITS[i] - mta.EXITS[i-1]
if diff < 0: diff = 0
if diff > 10000: diff = 0
mta.at[i,'EXT_COUNT'] = diff
else:
mta.at[i,'EXT_COUNT'] = 0
return mta
mta1 = coun_ext(mta1)
mta2 = coun_ext(mta2)
mta3 = coun_ext(mta3)
#checking dataframe
mta1.head(10)
#Removing obsolete columns
def mor_cln(mta):
mta = mta.drop('TIME', axis=1)
mta = mta.drop('ENTRIES', axis=1)
mta = mta.drop('EXITS', axis=1)
return mta
mta1 = mor_cln(mta1)
mta2 = mor_cln(mta2)
mta3 = mor_cln(mta3)
```
## Merging the Dataframes
### Initial Merge
The initial merge is for the CSV dataframes that have undergone the manipulations up to this point. It groups by 'STATION', 'DATE_TIME_WD' and 'DATE_TIME' (these columns become the index, so we need reset_index to move them back to columns). It also creates aggregated ENT_COUNT (entry count) and EXT_COUNT (exit count) sums per station, weekday and datetime hour, built from the turnstile-level counts.
Functions:
-get_agg
-merge_agg
### Secondary Merge
Using concat to 'stack' the dataframes, then using get_agg_mean to get the mean of each station's entry count based on day of the week and datetime hour. This results in our super_df.
-get_agg_mean
-merge_agg
```
#Creating a function to merge aggregate columns with original DF
def merge_agg(ent_agg, ext_agg):
ent_agg.columns = ent_agg.columns.droplevel(level=1)
ext_agg.columns = ext_agg.columns.droplevel(level=1)
#Creating a new data frame removing repeted values
q_mta = pd.merge(ent_agg, ext_agg, on=['STATION','DATE_TIME_WD','DATE_TIME'], how='left')
return q_mta
#Creating a function to get aggregate sum data for two columns
def get_agg(mta):
mt = mta.groupby(['STATION','DATE_TIME_WD', 'DATE_TIME'])
ent_agg = mt.agg({'ENT_COUNT':['sum']})
ent_agg.reset_index(inplace=True)
ext_agg = mt.agg({'EXT_COUNT':['sum']})
#returns "index" as columns .reset_index()
ext_agg.reset_index(inplace=True)
fin_agg = merge_agg(ent_agg, ext_agg)
return fin_agg
#Creating a function to get aggregate mean data for two columns
def get_agg_mean(mta):
mt = mta.groupby(['STATION','DATE_TIME_WD', 'DATE_TIME'])
ent_agg = mt.agg({'ENT_COUNT':['mean']})
ent_agg.reset_index(inplace=True)
ext_agg = mt.agg({'EXT_COUNT':['mean']})
#returns "index" as columns .reset_index()
ext_agg.reset_index(inplace=True)
fin_agg = merge_agg(ent_agg, ext_agg)
return fin_agg
#Updating DF with aggregate data
mta1 = get_agg(mta1)
mta2 = get_agg(mta2)
mta3 = get_agg(mta3)
#Function that combines 3 dfs into one
def super_merge(a,b,c):
new_df = pd.concat([a, b])
q_mta = pd.concat([new_df, c])
q_mta=get_agg_mean(q_mta)
return q_mta
#Combining DFs
super_df = super_merge(mta1,mta2,mta3)
super_df
```
## Creating unique dataframes
### Entrys per day
We are removing the extra datetime column and focusing solely on the weekday for the dataframe no_dt (i.e. no datetime). We take super_df and aggregate (sum) the traffic over each day, since we are summing the total entries across every datetime point in that day. Then we merge the two aggregated series together and return a new dataframe.
```
#Creating new functions to get aggregates over an entire day and combine with original DF
def merge_agg_day(ent_agg, ext_agg):
ent_agg.columns = ent_agg.columns.droplevel(level=1)
ext_agg.columns = ext_agg.columns.droplevel(level=1)
#Creating a new data frame removing repeted values
q_mta = pd.merge(ent_agg, ext_agg, on=['STATION','DATE_TIME_WD'], how='left')
return q_mta
def get_agg_day(mta):
mt = mta.groupby(['STATION','DATE_TIME_WD'])
ent_agg = mt.agg({'ENT_COUNT':['sum']})
ent_agg.reset_index(inplace=True)
ext_agg = mt.agg({'EXT_COUNT':['sum']})
#returns "index" as columns .reset_index()
ext_agg.reset_index(inplace=True)
fin_agg = merge_agg_day(ent_agg, ext_agg)
return fin_agg
#Creates a DF with cumulative entries per day per station with no datetime object
no_dt = get_agg_day(super_df)
no_dt
#Sorting DF and getting the top 20 stations
super_df.sort_values(by=['ENT_COUNT'], ascending = 0, inplace = True)
uni_stations = list(super_df.STATION.unique())[:20]
```
### Entries per time period
The following sets up and cleans the data for a graph showing the number of riders in each four-hour period. We start by reducing each datetime to its hour so it is easier to graph. We then filter down to the top 20 stations to avoid overcomplicating the graph. We then use an aggregate sum on the entry counts and exit counts, and merge the result into a new dataframe.
Functions used:
- get_agg_time
- merge_agg_time
```
#Setting values for code below
super_df.reset_index(drop=True, inplace = True)
f = dateutil.parser.parse('02:00:00')
late = dateutil.parser.parse('23:00:00')
mid = dateutil.parser.parse('00:00:00')
#Creating a column to use as an axis for plotting daily trends
#Creating a better df for reading traffic per date time
for i in range(len(super_df.index)):
val = (super_df['DATE_TIME'][i]).hour
#moving the midnight values (because it returns a 0) to 11 pm values to make a better looking graph
if val == mid.hour:
val =late.hour
super_df.at[i,'New_DATE_TIME'] = val
#removing bad data
if super_df.ENT_COUNT[i] >=100000 and (val<f.hour or val>late.hour):
super_df.drop(i, inplace = True)
    # use elif so we don't index a row that may have just been dropped above
    elif super_df.ENT_COUNT[i] <=1000 or super_df['DATE_TIME_WD'][i]>=5:
super_df.drop(i, inplace = True)
#Creating new data while sorting out low traffic stations, then resetting the indeces
uni_df = super_df[super_df['STATION'].isin(uni_stations)]
uni_df.reset_index(drop=True, inplace = True)
#Creating new functions to get aggregates over an entire day and combine with original DF
def merge_agg_time(ent_agg, ext_agg):
ent_agg.columns = ent_agg.columns.droplevel(level=1)
ext_agg.columns = ext_agg.columns.droplevel(level=1)
#Creating a new data frame removing repeted values
q_mta = pd.merge(ent_agg, ext_agg, on=['STATION','DATE_TIME_WD','New_DATE_TIME'], how='left')
return q_mta
def get_agg_time(mta):
mt = mta.groupby(['STATION','DATE_TIME_WD','New_DATE_TIME'])
ent_agg = mt.agg({'ENT_COUNT':['sum']})
ent_agg.reset_index(inplace=True)
ext_agg = mt.agg({'EXT_COUNT':['sum']})
#returns "index" as columns .reset_index()
ext_agg.reset_index(inplace=True)
fin_agg = merge_agg_time(ent_agg, ext_agg)
return fin_agg
uni_df = get_agg_time(uni_df)
uni_df
```
## Filtering the no_dt dataframe
We take the no_dt dataframe and keep only the top 20 stations to make a plot over.
```
no_dt_df = no_dt[no_dt['STATION'].isin(uni_stations)]
no_dt_df.sort_values(by='ENT_COUNT', ascending = 0, inplace = True)
no_dt_df.reset_index(drop=True, inplace=True)
no_dt_df
```
## Graphing
### Seaborn lmplot
Using the uni_df created above to graph the ridership by time of day, we make a seaborn (denoted sns in the code) lmplot. The sns.set() call allows us to manipulate different aspects of the graph and is more or less self-explanatory. The second block of code does the following (each item corresponds to a line of code):
- resets the graph attributes to default (negating everything in previous block of code)
- creates a list for the x-axis labels
- creates a fig variable that plots the lmplot; x_bins tries to clean up the graph by creating bins.
- sets axis titles
- relabels x axis with a list with len corresponding to original amount of tick marks
- adds a plot title
### Seaborn pointplot
We are creating a seaborn pointplot to show the trend of ridership over a full week.
The 3rd block of coding does the following:
- tells ipython to create a pop up with the graph
- creates a list to relabel x axis
- creates the pointplot and sets its title using .set_title
- relabels x and y axis
```
# Graphs are below
import seaborn as sns
import matplotlib.dates as mdates
#Setting graph components
sns.set(rc={"font.style":"normal",
"axes.facecolor":('white'),
"figure.facecolor":'black',
"grid.color":'black',
"grid.linestyle":':',
"axes.grid":True,
'axes.labelsize':15,
'figure.figsize':(30, 30),
'xtick.labelsize':12,
'ytick.labelsize':12})
sns.reset_orig()
%matplotlib inline
# build the figure
hours = ['','12 am','5 am','10 am', '3 pm', '8 pm']
fig = sns.lmplot(y="ENT_COUNT", x="New_DATE_TIME", data=uni_df, hue = 'STATION', fit_reg = False, x_bins=12)
fig = fig.set_axis_labels('Time', 'Total Entries')
fig = fig.set(xticklabels=hours)
plt.title('20 Largest Stations: Daily Traffic')
%matplotlib osx
week = ['Monday','Tuesday', 'Wednesday','Thursday', 'Friday','Saturday', 'Sunday']
fig, ax = plt.subplots(figsize=(15,12))
fig = sns.pointplot(y="ENT_COUNT", x="DATE_TIME_WD", data=no_dt_df, join=True, hue='STATION', ax=ax, linestyles='-').set_title('20 Largest Traffic Stations')
ax.set(xlabel='Dates', ylabel='Total Entries', xticklabels=week)
```
|
github_jupyter
|
import sys
# imports a library 'pandas', names it as 'pd'
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import Image
import pprint
# enables inline plots, without it plots don't show up in the notebook
%matplotlib inline
import dateutil.parser
from datetime import *
# various options in pandas
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 500)
pd.set_option('display.precision', 3)
#read data from this folder
df1 = pd.read_csv('turnstile_170422.csv')
df2 = pd.read_csv('turnstile_170429.csv')
df3 = pd.read_csv('turnstile_170506.csv')
#Work with turnstile name as combo
def Col_Con(mta):
mta['TURNSTILE'] = mta['C/A'] + mta['UNIT'] + mta['SCP']
mta.drop('C/A', axis=1, inplace = True)
mta.drop('UNIT', axis=1,inplace = True)
mta.drop('SCP', axis=1, inplace = True)
mta = mta.drop('LINENAME', axis=1)
mta = mta.drop('DIVISION', axis=1)
mta = mta.drop('DESC', axis=1)
mta.columns = mta.columns.str.strip()
return mta
mta1 = Col_Con(df1)
mta2 = Col_Con(df2)
mta3 = Col_Con(df3)
leng = len(mta1.index)
leng3 = len(mta2.index)
leng4 = len(mta3.index)
# make new columns, blank
#Only use if we are using turnstiles as data 'mta['Cu_ENT'] = [0] * leng'
def new_cols(mta, leng):
mta['ENT_COUNT'] = [0] * leng
mta['EXT_COUNT'] = [0] * leng
mta['DATE_TIME'] = [''] * leng
mta['DATE_TIME_WD'] = [''] * leng
mta['Donor_Est'] = [0] * leng
return mta
mta1 = new_cols(mta1, leng)
mta2 = new_cols(mta2, leng3)
mta3 = new_cols(mta3, leng4)
# Fill empty date time column with formula
# I dont think we will use this...
def date_time(df):
leng= len(df.index)
for i in range(leng):
datetime = df.DATE[i]+ ' ' + df.TIME[i]
value = dateutil.parser.parse(datetime)
df.at[i,'DATE_TIME'] = value
df.at[i,'DATE_TIME_WD'] = int(value.weekday())
return df
mta1 = date_time(mta1)
mta2 = date_time(mta2)
mta3 = date_time(mta3)
#fill empty ent_count with formula
def coun_ent(mta, leng):
for i in range(leng):
if i == 0:
mta.at[i,'ENT_COUNT'] = 0
else:
if mta.TURNSTILE[i] == mta.TURNSTILE[i-1]:
diff = mta.ENTRIES[i] - mta.ENTRIES[i-1]
if diff < 0: diff = 0
if diff > 10000: diff = 0
mta.at[i,'ENT_COUNT'] = diff
else:
mta.at[i,'ENT_COUNT'] = 0
return mta
mta1 = coun_ent(mta1,leng)
mta2 = coun_ent(mta2, leng3)
mta3 = coun_ent(mta3, leng4)
#fill in new exit counts
def coun_ext(mta):
leng = len(mta.index)
for i in range(leng):
if i == 0:
mta.at[i,'EXT_COUNT'] = 0
else:
if mta.TURNSTILE[i] == mta.TURNSTILE[i-1]:
diff = mta.EXITS[i] - mta.EXITS[i-1]
if diff < 0: diff = 0
if diff > 10000: diff = 0
mta.at[i,'EXT_COUNT'] = diff
else:
mta.at[i,'EXT_COUNT'] = 0
return mta
mta1 = coun_ext(mta1)
mta2 = coun_ext(mta2)
mta3 = coun_ext(mta3)
#checking dataframe
mta1.head(10)
#Removing obsolete columns
def mor_cln(mta):
mta = mta.drop('TIME', axis=1)
mta = mta.drop('ENTRIES', axis=1)
mta = mta.drop('EXITS', axis=1)
return mta
mta1 = mor_cln(mta1)
mta2 = mor_cln(mta2)
mta3 = mor_cln(mta3)
#Creating a function to merge aggregate columns with original DF
def merge_agg(ent_agg, ext_agg):
ent_agg.columns = ent_agg.columns.droplevel(level=1)
ext_agg.columns = ext_agg.columns.droplevel(level=1)
#Creating a new data frame removing repeted values
q_mta = pd.merge(ent_agg, ext_agg, on=['STATION','DATE_TIME_WD','DATE_TIME'], how='left')
return q_mta
#Creating a function to get aggregate sum data for two columns
def get_agg(mta):
mt = mta.groupby(['STATION','DATE_TIME_WD', 'DATE_TIME'])
ent_agg = mt.agg({'ENT_COUNT':['sum']})
ent_agg.reset_index(inplace=True)
ext_agg = mt.agg({'EXT_COUNT':['sum']})
#returns "index" as columns .reset_index()
ext_agg.reset_index(inplace=True)
fin_agg = merge_agg(ent_agg, ext_agg)
return fin_agg
#Creating a function to get aggregate mean data for two columns
def get_agg_mean(mta):
mt = mta.groupby(['STATION','DATE_TIME_WD', 'DATE_TIME'])
ent_agg = mt.agg({'ENT_COUNT':['mean']})
ent_agg.reset_index(inplace=True)
ext_agg = mt.agg({'EXT_COUNT':['mean']})
#returns "index" as columns .reset_index()
ext_agg.reset_index(inplace=True)
fin_agg = merge_agg(ent_agg, ext_agg)
return fin_agg
#Updating DF with aggregate data
mta1 = get_agg(mta1)
mta2 = get_agg(mta2)
mta3 = get_agg(mta3)
#Function that combines 3 dfs into one
def super_merge(a,b,c):
new_df = pd.concat([a, b])
q_mta = pd.concat([new_df, c])
q_mta=get_agg_mean(q_mta)
return q_mta
#Combining DFs
super_df = super_merge(mta1,mta2,mta3)
super_df
#Creating new functions to get aggregates over an entire day and combine with original DF
def merge_agg_day(ent_agg, ext_agg):
ent_agg.columns = ent_agg.columns.droplevel(level=1)
ext_agg.columns = ext_agg.columns.droplevel(level=1)
#Creating a new data frame removing repeted values
q_mta = pd.merge(ent_agg, ext_agg, on=['STATION','DATE_TIME_WD'], how='left')
return q_mta
def get_agg_day(mta):
mt = mta.groupby(['STATION','DATE_TIME_WD'])
ent_agg = mt.agg({'ENT_COUNT':['sum']})
ent_agg.reset_index(inplace=True)
ext_agg = mt.agg({'EXT_COUNT':['sum']})
#returns "index" as columns .reset_index()
ext_agg.reset_index(inplace=True)
fin_agg = merge_agg_day(ent_agg, ext_agg)
return fin_agg
#Creates a DF with cumulative entries per day per station with no datetime object
no_dt = get_agg_day(super_df)
no_dt
#Sorting DF and getting the top 20 stations
super_df.sort_values(by=['ENT_COUNT'], ascending = 0, inplace = True)
uni_stations = list(super_df.STATION.unique())[:20]
#Setting values for code below
super_df.reset_index(drop=True, inplace = True)
f = dateutil.parser.parse('02:00:00')
late = dateutil.parser.parse('23:00:00')
mid = dateutil.parser.parse('00:00:00')
#Creating a column to use as an axis for plotting daily trends
#Creating a better df for reading traffic per date time
for i in range(len(super_df.index)):
val = (super_df['DATE_TIME'][i]).hour
#moving the midnight values (because it returns a 0) to 11 pm values to make a better looking graph
if val == mid.hour:
val =late.hour
super_df.at[i,'New_DATE_TIME'] = val
#removing bad data
if super_df.ENT_COUNT[i] >=100000 and (val<f.hour or val>late.hour):
super_df.drop(i, inplace = True)
if super_df.ENT_COUNT[i] <=1000 or super_df['DATE_TIME_WD'][i]>=5:
super_df.drop(i, inplace = True)
#Creating new data while sorting out low traffic stations, then resetting the indices
uni_df = super_df[super_df['STATION'].isin(uni_stations)]
uni_df.reset_index(drop=True, inplace = True)
#Creating new functions to get aggregates over an entire day and combine with original DF
def merge_agg_time(ent_agg, ext_agg):
ent_agg.columns = ent_agg.columns.droplevel(level=1)
ext_agg.columns = ext_agg.columns.droplevel(level=1)
    #Creating a new data frame removing repeated values
q_mta = pd.merge(ent_agg, ext_agg, on=['STATION','DATE_TIME_WD','New_DATE_TIME'], how='left')
return q_mta
def get_agg_time(mta):
mt = mta.groupby(['STATION','DATE_TIME_WD','New_DATE_TIME'])
ent_agg = mt.agg({'ENT_COUNT':['sum']})
ent_agg.reset_index(inplace=True)
ext_agg = mt.agg({'EXT_COUNT':['sum']})
#returns "index" as columns .reset_index()
ext_agg.reset_index(inplace=True)
fin_agg = merge_agg_time(ent_agg, ext_agg)
return fin_agg
uni_df = get_agg_time(uni_df)
uni_df
no_dt_df = no_dt[no_dt['STATION'].isin(uni_stations)]
no_dt_df.sort_values(by='ENT_COUNT', ascending = 0, inplace = True)
no_dt_df.reset_index(drop=True, inplace=True)
no_dt_df
# Graphs are below
import seaborn as sns
import matplotlib.dates as mdates
#Setting graph components
sns.set(rc={"font.style":"normal",
"axes.facecolor":('white'),
"figure.facecolor":'black',
"grid.color":'black',
"grid.linestyle":':',
"axes.grid":True,
'axes.labelsize':15,
'figure.figsize':(30, 30),
'xtick.labelsize':12,
'ytick.labelsize':12})
sns.reset_orig()
%matplotlib inline
# build the figure
hours = ['','12 am','5 am','10 am', '3 pm', '8 pm']
fig = sns.lmplot(y="ENT_COUNT", x="New_DATE_TIME", data=uni_df, hue = 'STATION', fit_reg = False, x_bins=12)
fig = fig.set_axis_labels('Time', 'Total Entries')
fig = fig.set(xticklabels=hours)
plt.title('20 Largest Stations: Daily Traffic')
%matplotlib osx
week = ['Monday','Tuesday', 'Wednesday','Thursday', 'Friday','Saturday', 'Sunday']
fig, ax = plt.subplots(figsize=(15,12))
fig = sns.pointplot(y="ENT_COUNT", x="DATE_TIME_WD", data=no_dt_df, join=True, hue='STATION', ax=ax, linestyles='-').set_title('20 Largest Traffic Stations')
ax.set(xlabel='Dates', ylabel='Total Entries', xticklabels=week)
# Continuous images using CNNs
```
from __future__ import print_function
import os
import random
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime, timedelta
from IPython.display import HTML
```
Set seed for reproducibility
```
seed = 999
#manualSeed = random.randint(1, 10000)
random.seed(seed)
torch.manual_seed(seed)
print(seed)
```
Define data directory and the number of workers
```
data_directory = "data/Terrain/processed"
n_workers = 2
```
## Settings and hyper parameters
```
sample_size = 512
i_image_size = 512
o_image_size = 1081
n_channels = 1
n_features = 64
n_epochs = 5
learning_r = 0.0002
beta1 = 0.5
n_gpus = 1
```
## Defining the network
```
class Network(nn.Module):
def __init__(self, n_gpus):
super(Network, self).__init__()
self.n_gpus = n_gpus
self.main = nn.Sequential(
# 512 x 512
nn.Conv2d(n_channels + 2, n_features, 4, stride=1, padding=1),
nn.BatchNorm2d(n_features),
nn.ReLU(inplace=True),
nn.MaxPool2d(2, stride=2),
# 256 x 256
nn.Conv2d(n_features, n_features * 2, 4, stride=2, padding=1),
nn.BatchNorm2d(n_features * 2),
nn.ReLU(inplace=True),
nn.MaxPool2d(2, stride=2),
# 128 x 128
nn.Conv2d(n_features * 2, n_features * 4, 4, stride=2, padding=1),
nn.BatchNorm2d(n_features * 4),
nn.ReLU(inplace=True),
nn.MaxPool2d(2, stride=2),
# 64 x 64
nn.Conv2d(n_features * 4, n_features * 8, 4, stride=2, padding=1),
nn.BatchNorm2d(n_features * 8),
nn.ReLU(inplace=True),
nn.MaxPool2d(2, stride=2),
# 32 x 32
nn.Conv2d(n_features * 8, n_features * 16, 4, stride=2, padding=1),
nn.BatchNorm2d(n_features * 16),
nn.ReLU(inplace=True),
nn.MaxPool2d(2, stride=2),
nn.Flatten(),
nn.Dropout(),
nn.Linear(2 * 2 * n_features * 16, 1000),
nn.ReLU(inplace=True),
nn.Linear(1000, 512),
nn.ReLU(inplace=True),
nn.Linear(512, 64),
nn.ReLU(inplace=True),
nn.Linear(64, n_channels)
)
def forward(self, input):
return self.main(input).float()
```
## Loading the Dataset
```
dataset_transforms = transforms.Compose([
transforms.Grayscale(),
transforms.Resize(o_image_size),
transforms.CenterCrop(o_image_size),
transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
dataset = dset.ImageFolder(root=data_directory, transform=dataset_transforms)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=1,
shuffle=True, num_workers=n_workers)
run_cuda = torch.cuda.is_available() and n_gpus > 0
device = torch.device("cuda:0" if run_cuda else "cpu")
print(run_cuda, torch.cuda.is_available())
batch = next(iter(dataloader))
plt.figure(figsize=(15, 15))
plt.axis("off")
plt.title("Training Images")
plt.imshow(np.transpose(vutils.make_grid(batch[0].to(device)[:64], padding=2, normalize=True).cpu(),(1,2,0)))
plt.show()
def save_img(image, name):
    # Detach the image tensor from the graph, reshape it, and save it as a grayscale figure.
    image = image.detach()
    image = image.reshape(28, 28)
    plt.imshow(image, cmap='binary')
    plt.xticks([])
    plt.yticks([])
    plt.savefig('output/' + name)
network = Network(n_gpus).to(device)
if device.type == 'cuda' and n_gpus > 1:
    # Wrap the network itself; device_ids lists the GPUs to spread the batch across.
    network = nn.DataParallel(network, device_ids=list(range(n_gpus)))
criterion = nn.MSELoss()
optimizer = optim.Adam(network.parameters(), lr=0.001, betas=(beta1, 0.999))
loss_log = []
iters, start_epoch = 0, 0
restore_session = True
runtime = ''
start_time = datetime.now()
def print_progress(line):
with open('out.txt', 'w') as file:
file.write(line)
def save_session(cur_epoch, cur_iters, cur_time):
net_state = { 'state_dict': network.state_dict(), 'optimizer': optimizer.state_dict() }
log_state = { 'iters': cur_iters, 'epoch': cur_epoch + 1, 'runtime': cur_time, 'loss': loss_log }
state = { 'net_state': net_state, 'log_state': log_state }
torch.save(state, 'checkpoints/css/latest.pth.tar')
def load_session(filename='checkpoints/css/latest.pth.tar'):
if os.path.isfile(filename):
print('=> Loading checkpoint from {}'.format(filename))
checkpoint = torch.load(filename)
return checkpoint
else:
print('=> Checkpoint {} not found'.format(filename))
if os.path.isfile('checkpoints/css/latest.pth.tar') and restore_session:
session = load_session('checkpoints/css/latest.pth.tar')
# Load logs
loss_log = session['log_state']['loss']
iters, start_epoch = session['log_state']['iters'], session['log_state']['epoch']
runtime = session['log_state']['runtime']
# Load network
network.load_state_dict(session['net_state']['state_dict'])
optimizer.load_state_dict(session['net_state']['optimizer'])
dt = datetime.strptime(runtime, '%H:%M:%S')
runtime_delta = timedelta(hours=dt.hour, minutes=dt.minute, seconds=dt.second)
start_time = start_time - runtime_delta
# Start training
for epoch in range(start_epoch, n_epochs):
for i, image in enumerate(dataloader):
for j in range(sample_size):
network.zero_grad()
x_space = random.uniform(0, 1)
y_space = random.uniform(0, 1)
            # Build the input from the current training image (the label below comes from the same image).
            input_tensor = torch.Tensor([[image[0][0][0].numpy(),
                                          np.full((o_image_size, o_image_size), x_space),
                                          np.full((o_image_size, o_image_size), y_space)]])
x_coordinate = int(x_space * o_image_size)
y_coordinate = int(y_space * o_image_size)
label = torch.tensor([image[0][0][0][y_coordinate][x_coordinate]], dtype=torch.float)
output = network(input_tensor.to(device))
loss = criterion(output, label.to(device))
loss.backward()
optimizer.step()
runtime = str(datetime.now() - start_time).split('.')[:1][0]
            if j % 5 == 0:
                print_progress('Runtime: \033[32m%s\033[0m\n\n[\033[32m%d\033[0m/\033[32m%d\033[0m] [\033[32m%d\033[0m/\033[32m%d\033[0m] [\033[32m%d\033[0m/\033[32m%d\033[0m] [\033[32m%d\033[0m] loss: \033[32m%.4f\033[0m CUDA: \033[32m%s\033[0m' %
                               (runtime, epoch + 1, n_epochs, i, len(dataloader), j, sample_size, iters, loss.item(), run_cuda))
loss_log.append(loss.item())
            if (iters % 1000 == 0 and i != 0) or (epoch == n_epochs - 1 and i == len(dataloader) - 1):
save_session(epoch, iters, runtime)
iters += 1
x_space = random.uniform(0, 1)
y_space = random.uniform(0, 1)
input_tensor = torch.Tensor([[batch[0][0][0].numpy(),
np.full((o_image_size, o_image_size), x_space),
np.full((o_image_size, o_image_size), y_space)]])
x_coordinate = int(x_space * o_image_size)
y_coordinate = int(y_space * o_image_size)
with torch.no_grad():
output = network(input_tensor.to(device))
label = batch[0][0][0][y_coordinate][x_coordinate]
print(label, output.to('cpu')[0][0])
```
<a href="https://colab.research.google.com/github/zwarshavsky/DS-Unit-1-Sprint-1-Dealing-With-Data/blob/master/module2-loadingdata/LS_DS_112_Loading_Data_test.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Lambda School Data Science - Loading, Cleaning and Visualizing Data
Objectives for today:
- Load data from multiple sources into a Python notebook
- From a URL (github or otherwise)
- CSV upload method
- !wget method
- "Clean" a dataset using common Python libraries
- Removing NaN values "Data Imputation"
- Create basic plots appropriate for different data types
- Scatter Plot
- Histogram
- Density Plot
- Pairplot (if we have time)
# Part 1 - Loading Data
Data comes in many shapes and sizes - we'll start by loading tabular data, usually in csv format.
Data set sources:
- https://archive.ics.uci.edu/ml/datasets.html
- https://github.com/awesomedata/awesome-public-datasets
- https://registry.opendata.aws/ (beyond scope for now, but good to be aware of)
Let's start with an example - [data about flags](https://archive.ics.uci.edu/ml/datasets/Flags).
## Lecture example - flag data
```
#Zhenya made a change
# Step 1 - find the actual file to download
# From navigating the page, clicking "Data Folder"
flag_data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data'
# You can "shell out" in a notebook for more powerful tools
# https://jakevdp.github.io/PythonDataScienceHandbook/01.05-ipython-and-shell-commands.html
# Funny extension, but on inspection looks like a csv
!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data
# Extensions are just a norm! You have to inspect to be sure what something is
# Step 2 - load the data
# How to deal with a csv?
import pandas as pd
flag_data = pd.read_csv(flag_data_url)
# Step 3 - verify we've got *something*
flag_data.head()
# Step 4 - Looks a bit odd - verify that it is what we want
flag_data.count()
!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data | wc
# So we have 193 observations with funny names, file has 194 rows
# Looks like the file has no header row, but read_csv assumes it does
help(pd.read_csv)
# Alright, we can pass header=None to fix this
flag_data = pd.read_csv(flag_data_url, header=None)
flag_data.head()
flag_data.count()
flag_data.isna().sum()
```
### Yes, but what does it *mean*?
This data is fairly nice - it was "donated" and is already "clean" (no missing values). But there are no variable names - so we have to look at the codebook (also from the site).
```
1. name: Name of the country concerned
2. landmass: 1=N.America, 2=S.America, 3=Europe, 4=Africa, 5=Asia, 6=Oceania
3. zone: Geographic quadrant, based on Greenwich and the Equator; 1=NE, 2=SE, 3=SW, 4=NW
4. area: in thousands of square km
5. population: in round millions
6. language: 1=English, 2=Spanish, 3=French, 4=German, 5=Slavic, 6=Other Indo-European, 7=Chinese, 8=Arabic, 9=Japanese/Turkish/Finnish/Magyar, 10=Others
7. religion: 0=Catholic, 1=Other Christian, 2=Muslim, 3=Buddhist, 4=Hindu, 5=Ethnic, 6=Marxist, 7=Others
8. bars: Number of vertical bars in the flag
9. stripes: Number of horizontal stripes in the flag
10. colours: Number of different colours in the flag
11. red: 0 if red absent, 1 if red present in the flag
12. green: same for green
13. blue: same for blue
14. gold: same for gold (also yellow)
15. white: same for white
16. black: same for black
17. orange: same for orange (also brown)
18. mainhue: predominant colour in the flag (tie-breaks decided by taking the topmost hue, if that fails then the most central hue, and if that fails the leftmost hue)
19. circles: Number of circles in the flag
20. crosses: Number of (upright) crosses
21. saltires: Number of diagonal crosses
22. quarters: Number of quartered sections
23. sunstars: Number of sun or star symbols
24. crescent: 1 if a crescent moon symbol present, else 0
25. triangle: 1 if any triangles present, 0 otherwise
26. icon: 1 if an inanimate image present (e.g., a boat), otherwise 0
27. animate: 1 if an animate image (e.g., an eagle, a tree, a human hand) present, 0 otherwise
28. text: 1 if any letters or writing on the flag (e.g., a motto or slogan), 0 otherwise
29. topleft: colour in the top-left corner (moving right to decide tie-breaks)
30. botright: Colour in the bottom-left corner (moving left to decide tie-breaks)
```
Exercise - read the help for `read_csv` and figure out how to load the data with the above variable names. One pitfall to note - with `header=None` pandas generated variable names starting from 0, but the above list starts from 1...
```
```
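One possible way to do it, sketched here using the 30 variable names from the codebook above (passing `names=` to `read_csv` together with `header=None`):
```
# Column names taken from the codebook above; pandas will use them instead of 0..29.
col_names = ['name', 'landmass', 'zone', 'area', 'population', 'language',
             'religion', 'bars', 'stripes', 'colours', 'red', 'green', 'blue',
             'gold', 'white', 'black', 'orange', 'mainhue', 'circles', 'crosses',
             'saltires', 'quarters', 'sunstars', 'crescent', 'triangle', 'icon',
             'animate', 'text', 'topleft', 'botright']

flag_data = pd.read_csv(flag_data_url, header=None, names=col_names)
flag_data.head()
```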
## Steps of Loading and Exploring a Dataset:
- Find a dataset that looks interesting
- Learn what you can about it
- What's in it?
- How many rows and columns?
- What types of variables?
- Look at the raw contents of the file
- Load it into your workspace (notebook)
- Handle any challenges with headers
- Handle any problems with missing values
- Then you can start to explore the data
- Look at the summary statistics
- Look at counts of different categories
- Make some plots to look at the distribution of the data
## 3 ways of loading a dataset
### From its URL
```
```
### From a local file
```
```
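A sketch of this route for Colab users; the `files.upload()` widget and the filename `flag.data` are assumptions for illustration:
```
# In Colab this opens an upload widget; pick the flag.data file you downloaded.
# Outside Colab, just place the file next to the notebook and skip the upload step.
from google.colab import files
uploaded = files.upload()

flag_data_local = pd.read_csv('flag.data', header=None)
flag_data_local.head()
```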
### Using the `!wget` command
```
```
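For example, reusing the flag dataset URL from the lecture above (the saved filename `flag.data` comes from the end of the URL):
```
# Download the file into the notebook's working directory, then read it locally.
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data

flag_data_wget = pd.read_csv('flag.data', header=None)
flag_data_wget.head()
```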
# Part 2 - Deal with Missing Values
## Diagnose Missing Values
Let's use the Adult Dataset from UCI. <https://github.com/ryanleeallred/datasets>
```
```
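A minimal sketch of the kind of check this section is after. The exact raw-file path inside the `ryanleeallred/datasets` repo is an assumption, and the adult dataset traditionally marks missing values with `'?'`, so we tell pandas to treat those as NaN:
```
# Assumed path; adjust if the file lives elsewhere in the repo.
adult_url = 'https://raw.githubusercontent.com/ryanleeallred/datasets/master/adult.csv'

# Some copies of this dataset use '?' (sometimes with a leading space) for missing values.
adult = pd.read_csv(adult_url, na_values=['?', ' ?'])
print(adult.shape)
adult.isnull().sum()
```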
## Fill Missing Values
```
```
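A hedged sketch of the two usual options, using `workclass` (a categorical column in the standard adult dataset) as the example; check `adult.columns` first, since the exact names can differ between copies:
```
# Option 1: drop every row that has any missing value.
adult_dropped = adult.dropna()

# Option 2: impute a categorical column with its most frequent value (the mode).
adult['workclass'] = adult['workclass'].fillna(adult['workclass'].mode()[0])

# Verify that the imputation worked.
adult.isnull().sum()
```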
# Part 3 - Explore the Dataset:
## Look at Summary Statistics
### Numeric
```
```
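For example, assuming the adult data is loaded as `adult` from the sketch above:
```
# describe() summarizes numeric columns by default: count, mean, std, min, quartiles, max.
adult.describe()
```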
### Non-Numeric
```
```
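And the same idea for the text and categorical columns:
```
# exclude='number' keeps only non-numeric columns: count, unique, top, freq.
adult.describe(exclude='number')
```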
## Look at Categorical Values
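A small sketch, again with an assumed column name from the adult data loaded above:
```
# value_counts() tallies each category; pass normalize=True for proportions instead.
adult['workclass'].value_counts()
```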
# Part 4 - Basic Visualizations (using the Pandas Library)
## Histogram
```
# Pandas Histogram
```
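A sketch using an assumed numeric column (`age` exists in the standard adult dataset):
```
import matplotlib.pyplot as plt

# Histogram of a single numeric column; bins controls the resolution.
adult['age'].hist(bins=20)
plt.show()
```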
## Density Plot (KDE)
```
# Pandas Density Plot
```
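Same assumed column, smoothed:
```
# Kernel density estimate of the same column; a smoothed version of the histogram (needs scipy).
adult['age'].plot.density()
plt.show()
```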
## Scatter Plot
```
# Pandas Scatterplot
```
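And a scatter plot of two assumed numeric columns (the second name varies between copies of the dataset, e.g. `hours-per-week` vs `hours_per_week`):
```
# Scatter plot of two numeric columns against each other.
adult.plot.scatter(x='age', y='hours-per-week')
plt.show()
```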
# Vim2
```
%%time
import tables
f = tables.open_file('../data/crcns-vim2/Stimuli.mat')
Xv = f.get_node('/sv')[:]
Xt = f.get_node('/st')[:]
f.close()
h5file.close()
%%time
h5file = tables.open_file("../data/crcns-vim2/derived/Stimuli.mat", mode="w", title="Stimulus file")
h5file.create_array('/', 'sv', Xv, "Validation stimulus")
h5file.create_array('/', 'st', Xt, "Training stimulus")
h5file.flush()
h5file.close()
%%time
import tables
f = tables.open_file('../data/crcns-vim2/derived/Stimuli.mat')
Xv = f.get_node('/sv')[:]
Xt = f.get_node('/st')[:]
f.close()
%%time
import numpy as np
import tables
def get_max_r2(all_responses):
all_responses = (all_responses - all_responses.mean(2, keepdims=True)) / all_responses.std(2, keepdims=True)
p_r_bar = (np.nanmean(all_responses, axis=1) ** 2).mean(1)
bar_p_r = (np.nanmean(all_responses ** 2, axis=1)).mean(1)
p_r_bar
N = (~np.isnan(all_responses).any(axis=2)).sum(axis=1)
p_mu = 1 / (N - 1) * (N * p_r_bar - bar_p_r)
max_r2 = p_mu / p_r_bar
return max_r2
for subject in [1, 2, 3]:
print(f"Subject {subject}")
f = tables.open_file(f'../data/crcns-vim2/VoxelResponses_subject{subject}.mat')
Yv = f.get_node('/rv')[:]
Yt = f.get_node('/rt')[:]
Ya = f.get_node('/rva')[:]
max_r2 = get_max_r2(Ya)
nodes = f.list_nodes('/roi')
rois = {}
for node in nodes:
rois[node.name] = node[:]
f.close()
mask = (~np.isnan(Yv)).all(1) & (~np.isnan(Yt)).all(1)
h5file = tables.open_file(f'../data/crcns-vim2/derived/VoxelResponses_subject{subject}.mat', mode="w", title="Response file")
h5file.create_array('/', 'rv', Yv, "Validation responses")
h5file.create_array('/', 'rt', Yt, "Training responses")
h5file.create_array('/', 'maxr2', max_r2, "Max R^2 for this dataset")
h5file.create_array('/', 'mask', mask, "Master mask")
groups = h5file.create_group('/', 'roi')
for key, node in rois.items():
h5file.create_array(groups, key, node, "ROI")
h5file.flush()
h5file.close()
Yt.shape
%%time
import tables
f = tables.open_file(f'../data/crcns-vim2/derived/VoxelResponses_subject{subject}.mat')
Yv = f.get_node('/rv')[:]
Yt = f.get_node('/rt')[:]
nodes = f.list_nodes('/roi')
rois = {}
for node in nodes:
rois[node.name] = node[:]
f.close()
```
# pvc-1
```
import matplotlib
import matplotlib.image
import numpy as np
import os
import tables
root = '../data/crcns-ringach-data'
movie_info = {}
h5file = tables.open_file(f'../data/crcns-ringach-data/derived/movies.h5', 'w')
for i in range(30):
for j in range(4):
print(i, j)
root_ = os.path.join(root, "movie_frames", f"movie{j:03}_{i:03}.images")
with open(os.path.join(root_, 'nframes'), 'r') as f:
nframes = int(f.read())
ims = []
for frame in range(nframes):
im_name = f'movie{j:03}_{i:03}_{frame:03}.jpeg'
the_im = matplotlib.image.imread(os.path.join(root_, im_name))
assert the_im.shape[0] == 240
the_im = the_im.reshape((120, 2, 160, 2, 3)).mean(3).mean(1)
the_im = the_im[8:, 24:136, :].transpose((2, 0, 1))
ims.append(the_im.astype(np.uint8))
m = np.stack(ims, axis=0)
h5file.create_array('/', f'movie{j:03}_{i:03}', m, "Movie")
h5file.close()
h5file.close()
the_im = matplotlib.image.imread(os.path.join(root_, im_name))
plt.imshow(the_im)
the_im = the_im.reshape((120, 2, 160, 2, 3)).mean(3).mean(1).astype(np.uint8)
#the_im = the_im[8:, 24:136, :].astype(np.uint8)
plt.imshow(the_im)
import matplotlib.pyplot as plt
h5file = tables.open_file(f'../data/crcns-ringach-data/derived/movies.h5', 'r')
plt.imshow(h5file.get_node('/movie000_000')[:][10, :, :, :].transpose((1, 2, 0)))
```
|
github_jupyter
|
%%time
import tables
f = tables.open_file('../data/crcns-vim2/Stimuli.mat')
Xv = f.get_node('/sv')[:]
Xt = f.get_node('/st')[:]
f.close()
h5file.close()
%%time
h5file = tables.open_file("../data/crcns-vim2/derived/Stimuli.mat", mode="w", title="Stimulus file")
h5file.create_array('/', 'sv', Xv, "Validation stimulus")
h5file.create_array('/', 'st', Xt, "Training stimulus")
h5file.flush()
h5file.close()
%%time
import tables
f = tables.open_file('../data/crcns-vim2/derived/Stimuli.mat')
Xv = f.get_node('/sv')[:]
Xt = f.get_node('/st')[:]
f.close()
%%time
import numpy as np
import tables
def get_max_r2(all_responses):
all_responses = (all_responses - all_responses.mean(2, keepdims=True)) / all_responses.std(2, keepdims=True)
p_r_bar = (np.nanmean(all_responses, axis=1) ** 2).mean(1)
bar_p_r = (np.nanmean(all_responses ** 2, axis=1)).mean(1)
p_r_bar
N = (~np.isnan(all_responses).any(axis=2)).sum(axis=1)
p_mu = 1 / (N - 1) * (N * p_r_bar - bar_p_r)
max_r2 = p_mu / p_r_bar
return max_r2
for subject in [1, 2, 3]:
print(f"Subject {subject}")
f = tables.open_file(f'../data/crcns-vim2/VoxelResponses_subject{subject}.mat')
Yv = f.get_node('/rv')[:]
Yt = f.get_node('/rt')[:]
Ya = f.get_node('/rva')[:]
max_r2 = get_max_r2(Ya)
nodes = f.list_nodes('/roi')
rois = {}
for node in nodes:
rois[node.name] = node[:]
f.close()
mask = (~np.isnan(Yv)).all(1) & (~np.isnan(Yt)).all(1)
h5file = tables.open_file(f'../data/crcns-vim2/derived/VoxelResponses_subject{subject}.mat', mode="w", title="Response file")
h5file.create_array('/', 'rv', Yv, "Validation responses")
h5file.create_array('/', 'rt', Yt, "Training responses")
h5file.create_array('/', 'maxr2', max_r2, "Max R^2 for this dataset")
h5file.create_array('/', 'mask', mask, "Master mask")
groups = h5file.create_group('/', 'roi')
for key, node in rois.items():
h5file.create_array(groups, key, node, "ROI")
h5file.flush()
h5file.close()
Yt.shape
%%time
import tables
f = tables.open_file(f'../data/crcns-vim2/derived/VoxelResponses_subject{subject}.mat')
Yv = f.get_node('/rv')[:]
Yt = f.get_node('/rt')[:]
nodes = f.list_nodes('/roi')
rois = {}
for node in nodes:
rois[node.name] = node[:]
f.close()
import matplotlib
import matplotlib.image
import numpy as np
import os
import tables
root = '../data/crcns-ringach-data'
movie_info = {}
h5file = tables.open_file(f'../data/crcns-ringach-data/derived/movies.h5', 'w')
for i in range(30):
for j in range(4):
print(i, j)
root_ = os.path.join(root, "movie_frames", f"movie{j:03}_{i:03}.images")
with open(os.path.join(root_, 'nframes'), 'r') as f:
nframes = int(f.read())
ims = []
for frame in range(nframes):
im_name = f'movie{j:03}_{i:03}_{frame:03}.jpeg'
the_im = matplotlib.image.imread(os.path.join(root_, im_name))
assert the_im.shape[0] == 240
the_im = the_im.reshape((120, 2, 160, 2, 3)).mean(3).mean(1)
the_im = the_im[8:, 24:136, :].transpose((2, 0, 1))
ims.append(the_im.astype(np.uint8))
m = np.stack(ims, axis=0)
h5file.create_array('/', f'movie{j:03}_{i:03}', m, "Movie")
h5file.close()
h5file.close()
the_im = matplotlib.image.imread(os.path.join(root_, im_name))
plt.imshow(the_im)
the_im = the_im.reshape((120, 2, 160, 2, 3)).mean(3).mean(1).astype(np.uint8)
#the_im = the_im[8:, 24:136, :].astype(np.uint8)
plt.imshow(the_im)
import matplotlib.pyplot as plt
h5file = tables.open_file(f'../data/crcns-ringach-data/derived/movies.h5', 'r')
plt.imshow(h5file.get_node('/movie000_000')[:][10, :, :, :].transpose((1, 2, 0)))
| 0.134463 | 0.578686 |
My good friend __[Zane Blanton](https://github.com/zscore)__ commented on how interesting the graph of the USA unemployment rate looked. So, inspired by him, I decided to start up this notebook to study it with some of the tools that I'm aware of.
We'll use pandas because it's a great library for working with data, and matplotlib to plot graphs; I'm setting some options here to make the graphs bigger and nicer.
```
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = [12, 9]
plt.style.use(['fivethirtyeight'])
```
Let's download the data and put it in a pandas dataframe. Yes, pandas already has a function to download *and* load the CSV with just a URL! Neat.
```
UNRATE_URL = ('https://fred.stlouisfed.org/graph/fredgraph.csv?chart_type=line'
'&recession_bars=on&log_scales=&bgcolor=%23e1e9f0&graph_bgcolor='
'%23ffffff&fo=Open+Sans&ts=12&tts=12&txtcolor=%23444444&show_leg'
'end=yes&show_axis_titles=yes&drp=0&cosd=1948-01-01&coed=2017-06'
'-01&height=450&stacking=&range=&mode=fred&id=UNRATE&transformat'
'ion=lin&nd=1948-01-01&ost=-99999&oet=99999&lsv=&lev=&mma=0&fml='
'a&fgst=lin&fgsnd=2009-06-01&fq=Monthly&fam=avg&vintage_date=&re'
'vision_date=&line_color=%234572a7&line_style=solid&lw=2&scale=l'
'eft&mark_type=none&mw=2&width=968')
df = pd.read_csv(UNRATE_URL, parse_dates=['DATE'], index_col=['DATE'])
```
Pandas dataframe objects come with plotting methods hooked up to matplotlib, and we can tweak the figure inside the cell; Jupyter Notebook will render it once the cell finishes.
```
ax = df['UNRATE'].plot()
ax.get_xaxis().label.set_visible(False)
```
As you can see the graph is clearly cyclical, which reminds me of... You know... Crisis theory.
You see, capitalism has a cyclical behaviour. It's much like a prey-predator dynamic.
So considering this... Are we close to the unemployment rate rising again? Looking at the graph it sure feels like it, but let's try some things!
First let's calculate the moving average of the data (aka the rolling mean). It's a pretty straightforward idea: you calculate the average over a "window" of time, say 12 months here, and keep "rolling" the window forward to get a new time series.
```
df['rmean'] = df['UNRATE'].rolling(window=12).mean()
ax = df['UNRATE'].plot()
df['rmean'].plot(color='C8')
ax.get_xaxis().label.set_visible(False)
```
Yellow here is the moving average, as you can see it's a smoothed version of the original data with a slight lag.
Some people interpret the moving average as a representation of the "correct" value of the data. That means that substantial deviations from it eventually result in a return to the correct value, in other words, a return to the moving average.
Stock analysts came up with a way to detect those significant deviations away from the moving average. They calculate 2 additional series, one above and one below the moving average. The region delimited by those 2 series are called Bollinger bands. When looking at stocks it's important to pay attention at moments when the value of the stock gets outside the bands, stock analysts see those excursions as trading signals to buy or sell the stocks.
And yes, I'm aware that there's some unrigorous magical thinking in this approach; Bollinger bands sure get a lot of flak because of it. But I feel like indicators are always interesting, even when they're built on a shoddy basis.
To draw the Bollinger bands first we need to calculate the moving standard deviation of this series, again with 12 months to match the moving average.
```
rstd = df['UNRATE'].rolling(window=12).std()
```
The idea is to use standard deviation to track volatility, and we create a band with ±Kσ, where K is a constant around 2.
```
K = 1.9
df['upper_band'] = df['rmean'] + K*rstd
df['lower_band'] = df['rmean'] - K*rstd
```
Too bad the data is monthly and not daily; Bollinger bands were designed to analyze daily time series. The usual size of the window is 20 days and K is 2. Because the window here is smaller (12), I tweaked K to be a little smaller.
Ok, so let's take a look at what those bands look like in a smaller period: 1980-1992. Those were wild times: we got the Latin American debt crisis, Black Monday and the 1991 recession! <span style="background-color:black; color:black">They were kinda minor compared to 2008 though...</span>
```
ax = df['rmean']['1980':'1992'].plot(color='C6', alpha=.3)
df['UNRATE']['1980':'1992'].plot()
ax.get_xaxis().label.set_visible(False)
ax.fill_between(df.index, df['lower_band'], df['upper_band'], facecolor='C6', alpha=.3)
```
The idea is that when the values get outside the band (±Kσ) you should interpret it as a signal. If this were a trading stock it means that when the value gets below the band (-Kσ) and right back up then it's a buy signal. If the value gets above the band (+Kσ) and right back down then it's a sell signal.
So let's keep track of those critical dates:
```
critical_upper_dates = df.loc[df['UNRATE'] > df['upper_band']].index
critical_lower_dates = df.loc[df['UNRATE'] < df['lower_band']].index
```
And use them to create 2 lists of critical intervals:
```
from dateutil.relativedelta import relativedelta
def critical_intervals(critical_dates):
ret = []
start = None
for d in critical_dates:
if start is None:
start = d
elif d - relativedelta(months=1) != end:
ret.append((start, end))
start = d
end = d
ret.append((start, end))
return ret
critical_upper_intervals = critical_intervals(critical_upper_dates)
critical_lower_intervals = critical_intervals(critical_lower_dates)
```
So now let's highlight the critical intervals, green for when it goes above the band and red for when it goes below the band.
```
def plot_unrate_with_highlighted_bollinger_bands(date_start=None, date_end=None):
ax = df['rmean'][date_start:date_end].plot(color='C6', alpha=.3)
df['UNRATE'][date_start:date_end].plot(color='C6')
ax.get_xaxis().label.set_visible(False)
ax.grid(False)
ax.fill_between(df.index, df['lower_band'], df['upper_band'], alpha=.3)
for interval in critical_upper_intervals:
ax.axvspan(*interval, color='green', alpha=.3)
for interval in critical_lower_intervals:
ax.axvspan(*interval, color='red', alpha=.3)
plot_unrate_with_highlighted_bollinger_bands('1980', '1992')
```
Let's look at the 70s (Nixon shock, stagflation!). We got some interesting reds here:
```
plot_unrate_with_highlighted_bollinger_bands('1970', '1980')
```
And now let's look at the years around the infamous dotcom bubble... It signalled 8 consecutive reds before it happened. Notice that one green in June 2003, it was a rare example where the Bollinger band signal really worked!
```
plot_unrate_with_highlighted_bollinger_bands('1992', '2005')
```
Now let's check the entire graph:
```
plot_unrate_with_highlighted_bollinger_bands()
```
Lastly let's focus on the 2008 crash until the present:
```
plot_unrate_with_highlighted_bollinger_bands('2007')
```
So... Are we detecting some signs of rebound? Well, it's hard to say. What we can see here is that unemployment rates are doing *abnormally* well lately!
There was never any other moment in this history where it hit so many consecutive reds this way; unemployment is decreasing, with incessant signals that maybe it's decreasing too fast.
Well, whatever is coming up, I feel like this shows that the US economy (and consequently the world) is going through a special period.
# GARCH Stock Forecasting
## Read Data
```
import pandas_datareader.data as web
from datetime import datetime, timedelta
import pandas as pd
import matplotlib.pyplot as plt
from arch import arch_model
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
import numpy as np
```
## DIS Volatility
```
start = datetime(2015, 1, 1)
end = datetime(2020, 6, 10)
dis = web.DataReader('DIS', 'yahoo', start=start, end=end)
returns = 100 * dis.Close.pct_change().dropna()
plt.figure(figsize=(10,4))
plt.plot(returns)
plt.ylabel('Pct Return', fontsize=16)
plt.title('DIS Returns', fontsize=20)
```
## PACF
```
plot_pacf(returns**2)
plt.show()
```
## Fit GARCH(3,3)
```
model = arch_model(returns, p=3, q=3)
model_fit = model.fit()
model_fit.summary()
```
## Try GARCH(3,0) = ARCH(3)
```
model = arch_model(returns, p=3, q=0)
model_fit = model.fit()
model_fit.summary()
rolling_predictions = []
test_size = 365
for i in range(test_size):
train = returns[:-(test_size-i)]
model = arch_model(train, p=3, q=0)
model_fit = model.fit(disp='off')
pred = model_fit.forecast(horizon=1)
rolling_predictions.append(np.sqrt(pred.variance.values[-1,:][0]))
rolling_predictions = pd.Series(rolling_predictions, index=returns.index[-365:])
plt.figure(figsize=(10,4))
true, = plt.plot(returns[-365:])
preds, = plt.plot(rolling_predictions)
plt.title('Volatility Prediction - Rolling Forecast', fontsize=20)
plt.legend(['True Returns', 'Predicted Volatility'], fontsize=16)
```
# S&P 500
```
start = datetime(2000, 1, 1)
end = datetime(2020, 6, 10)
spy = web.DataReader('SPY', 'yahoo', start=start, end=end)
returns = 100 * spy.Close.pct_change().dropna()
plt.figure(figsize=(10,4))
plt.plot(returns)
plt.ylabel('Pct Return', fontsize=16)
plt.title('SPY Returns', fontsize=20)
```
## PACF
```
plot_pacf(returns**2)
plt.show()
```
## Fit GARCH(2,2)
```
model = arch_model(returns, p=2, q=2)
model_fit = model.fit()
model_fit.summary()
```
## Rolling Forecast
```
rolling_predictions = []
test_size = 365*5
for i in range(test_size):
train = returns[:-(test_size-i)]
model = arch_model(train, p=2, q=2)
model_fit = model.fit(disp='off')
pred = model_fit.forecast(horizon=1)
rolling_predictions.append(np.sqrt(pred.variance.values[-1,:][0]))
rolling_predictions = pd.Series(rolling_predictions, index=returns.index[-365*5:])
plt.figure(figsize=(10,4))
true, = plt.plot(returns[-365*5:])
preds, = plt.plot(rolling_predictions)
plt.title('Volatility Prediction - Rolling Forecast', fontsize=20)
plt.legend(['True Returns', 'Predicted Volatility'], fontsize=16)
```
# How to use the model
```
train = returns
model = arch_model(train, p=2, q=2)
model_fit = model.fit(disp='off')
pred = model_fit.forecast(horizon=7)
future_dates = [returns.index[-1] + timedelta(days=i) for i in range(1,8)]
pred = pd.Series(np.sqrt(pred.variance.values[-1,:]), index=future_dates)
plt.figure(figsize=(10,4))
plt.plot(pred)
plt.title('Volatility Prediction - Next 7 Days', fontsize=20)
```
# Day 2, Part B: TD3 Algorithm
## Learning goals
- Find out why TD3 is more performant for this environment than PPO
- Walk through the TD3 code and learn how this author constructed it
- See examples of terminology useage that are different from the CartPole example
- Learn what a replay buffer is
## Definitions
- **Simulation environment**: Notice that this is not the same as the python/conda environment. The simulation environment is the simulated world where the reinforcement learning takes place. It provides opportunities for an agent to learn and explore, and ideally provides challenges that aid in efficient learning.
- **Agent (aka actor or policy)**: An entity in the simulation environment that performs actions. The agent could be a person, a robot, a car, a thermostat, etc.
- **State variable**: An observed variable in the simulation environment. They can be coordinates of objects or entities, an amount of fuel in a tank, air temperature, wind speed, etc.
- **Action variable**: An action that the agent can perform. Examples: step forward, increase velocity to 552.5 knots, push object left with force of 212.3 N, etc.
- **Reward**: A value given to the agent for doing something considered to be 'good'. Reward is commonly assigned at each time step and cumulated during a learning episode.
- **Episode**: A learning event consisting of multiple steps in which the agent can explore. It starts with the unmodified environment and continues until the goal is achieved or something prevents further progress, such as a robot getting stuck in a hole. Multiple episodes are typically run in loops until the model is fully trained.
- **Model (aka policy or agent)**: An RL model is composed of the modeling architecture (e.g., neural network) and parameters or weights that define the unique behavior of the model.
- **Policy (aka model or agent)**: The parameters of a model that encode the best choices to make in an environment. The choices are not necessarily good ones until the model undergoes training. The policy (or model) is the "brain" of the agent.
- **Replay Buffer**: A place in memory to store state, action, reward and other variables describing environmental state transitions. It is effectively the agent's memory of past experiences.
- **On-policy**: The value of the next action is determined using the current actor policy.
- **Off-policy**: The value of the next action is determined by a function, such as a value function, instead of the current actor policy.
- **Value function**: Function (typically a neural network) used to estimate the value, or expected reward, of an action.
## TD3 vs PPO
One of the big differences between these two is that PPO is an on-policy method, while TD3 is an off-policy method. On-policy means that the value of the next action is determined using the current actor policy, and off-policy means that the value is determined by a different function, such as a value function.
In this specific case, TD3 builds two Q-functions (twin quality value functions) that map future expected rewards given the current action (current time step). On the other hand, PPO makes all reward estimates by applying the current actor policy along multi-step trajectories. By using the same policy to estimate rewards as the actor policy, PPO needs to learn over more time steps to gain the same range of exploration as TD3.
TD3 also builds a replay buffer as it learns off-policy. This makes it more sample-efficient and therefore a great choice when simulations (or real-world robots) are slower than the algorithm.
Off-policy methods tend to be less stable than on-policy methods, but TD3 has some tricks for reducing instability, which will be discussed below.
Check out [TD3notebook.ipynb](https://github.com/Quansight/Practical-RL/blob/main/TD3notebook.ipynb) - this is a direct translation from the author-provided `main.py`: all we've done is stashed the configuration variables into a dictionary, named `args`, and shoved all the code that would be executed normally into a function called `main()` so it can be called simply in the notebook.
In this notebook, let's walk through the code a bit. To keep the notebook functional, we've removed the `main()` definition. In general, the function is mostly concerned with setting values for variables to be used in the `for` loop, which is where the meat of the learning happens. Let's look more closely...
```
import numpy as np
import torch
import gym
import pybullet_envs
import os
import sys
from pathlib import Path
from tensorboardX import SummaryWriter
```
We have the original TD3 algorithm as a python file in this repo, so we can import it as a submodule and use it in the algorithm below.
```
sys.path.append(str(Path().resolve().parent))
import utils
import TD3
log_dir = "tmp/"
os.makedirs(log_dir, exist_ok=True)
writer = SummaryWriter(logdir=log_dir)
```
## Evaluation
This [first function](https://github.com/sfujim/TD3/blob/master/main.py#L15) is used to evaluate the policy, either while the agent is learning or afterward when the model is fully trained.
- It first makes a new environment with a fixed random seed
- Then it loops through several learning episodes and records the reward earned from each one
- The average reward is calculated, printed to the screen, and returned to the calling function
```
# Runs policy for X episodes and returns average reward
# A fixed seed is used for the eval environment
def eval_policy(policy, env_name, seed, eval_episodes=10):
eval_env = gym.make(env_name)
eval_env.seed(seed + 100)
avg_reward = 0.
for _ in range(eval_episodes):
state, done = eval_env.reset(), False
while not done:
action = policy.select_action(np.array(state))
state, reward, done, _ = eval_env.step(action)
avg_reward += reward
avg_reward /= eval_episodes
print("---------------------------------------")
print(f"Evaluation over {eval_episodes} episodes: {avg_reward:.3f}")
print("---------------------------------------")
return avg_reward
```
## Variables and Initialization
This first part of the code is simply a dictionary of parameters to be specified for the modeling.
```
args = {
"policy" : "TD3", # Policy name
"env" : "AntBulletEnv-v0", # OpenAI gym environment name
"seed" : 0, # Sets Gym, PyTorch and Numpy seeds
"start_timesteps" : 25e3, # Time steps initial random policy is used
"eval_freq" : 5e3, # How often (time steps) we evaluate
"max_timesteps" : 2e6, # Max time steps to run environment
"expl_noise" : 0.1, # Std of Gaussian exploration noise
"batch_size" : 256, # Batch size for both actor and critic
"discount" : 0.99, # Discount factor
"tau" : 0.005, # Target network update rate
"policy_noise" : 0.2, # Noise added to target policy during critic update
"noise_clip" : 0.5, # Range to clip target policy noise
"policy_freq" : 2, # Frequency of delayed policy updates
"save_model" : "store_true", # Save model and optimizer parameters
"load_model" : "", # Model load file name, "" doesn't load, "default" uses file_name
}
```
Make a file name to keep track of the models we've made.
```
file_name = f"{args['policy']}_{args['env']}_{args['seed']}"
```
Make sure some subfolders are present to save the results and the model.
```
if not os.path.exists("./results"):
os.makedirs("./results")
if args['save_model'] and not os.path.exists("./models"):
os.makedirs("./models")
```
>**In the next cell, make the gym environment just like we did in the CartPole example.** Use `args['env']` as the environment name and return the usual `env` object.
<details>
<summary>Click to reveal answer</summary>
env = gym.make(args['env'])
</details>
<br>
Set the random seeds for the environment, Torch (if we run on GPU), and NumPy.
```
env.seed(args['seed'])
env.action_space.seed(args['seed'])
torch.manual_seed(args['seed'])
np.random.seed(args['seed'])
```
We need the algorithm (TD3) to know some things about the environment, including the dimensions of the state and action spaces. TD3 also needs to know the largest action value to expect.
>Try printing some of the following values to get a better understanding of what values are being passed to TD3 (**just print the kwargs dict**). The state dimensions might be larger than you expected. If you go to the walker base class for pybullet there is a `calc_state` function ([here](https://github.com/bulletphysics/bullet3/blob/a62fb187a5c83a2e1e3e0376565ab3ae47870465/examples/pybullet/gym/pybullet_envs/robot_locomotors.py#L35)). See if you can find a few of the state variables.
```
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.shape[0]
max_action = float(env.action_space.high[0])
kwargs = {
"state_dim": state_dim,
"action_dim": action_dim,
"max_action": max_action,
"discount": args['discount'],
"tau": args['tau'],
}
```
## TD3 Tricks
TD3 is an improvement upon DDPG. Some folks refer to those improvements as "tricks" because they are fairly simple.
One way to improve exploration is to simply add noise to the actions during learning. This ensures that the decisions made by the agent are not the same every time. Even as the agent learns better actions, it will continue to try actions that are at least a little bit different from the known high-reward actions.
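For instance, here is a minimal, hypothetical sketch of that kind of exploration noise for a continuous action bounded by `max_action` (the function name and defaults are invented for illustration):
```python
import numpy as np

def noisy_action(policy_action, max_action, expl_noise=0.1, rng=np.random):
    # Add Gaussian noise scaled to the action range, then clip back into bounds
    noise = rng.normal(0, max_action * expl_noise, size=policy_action.shape)
    return np.clip(policy_action + noise, -max_action, max_action)

print(noisy_action(np.array([0.2, -0.9]), max_action=1.0))
```
The training loop later in this notebook does exactly this inline when it selects actions from the policy.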
As you read on OpenAI Spinning Up, they list the three "tricks":
>**Trick One: Clipped Double-Q Learning**. TD3 learns two Q-functions instead of one (hence โtwinโ), and uses the smaller of the two Q-values to form the targets in the Bellman error loss functions.
>
>**Trick Two: โDelayedโ Policy Updates**. TD3 updates the policy (and target networks) less frequently than the Q-function. The paper recommends one policy update for every two Q-function updates.
>
>**Trick Three: Target Policy Smoothing**. TD3 adds noise to the target action, to make it harder for the policy to exploit Q-function errors by smoothing out Q along changes in action.
The three parameters below feed into the tricks: the two noise parameters implement target policy smoothing (Trick Three) and are scaled to the action space, while `policy_freq` controls the delayed policy updates (Trick Two). Clipped double-Q learning (Trick One) is built into the TD3 class itself through its twin critics.
```
# Trick Three: range used to clip the target policy smoothing noise
kwargs["noise_clip"] = args['noise_clip'] * max_action
# Trick Two: delayed policy updates
kwargs["policy_freq"] = args['policy_freq']
# Trick Three: std of the target policy smoothing noise
kwargs["policy_noise"] = args['policy_noise'] * max_action
```
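To make Tricks One and Three concrete, here is a minimal, hypothetical sketch of how a TD3-style critic target is typically computed (this is an illustration, not the repo's `TD3.py`; the function and argument names are invented):
```python
import torch

def compute_critic_target(actor_target, critic1_target, critic2_target,
                          next_state, reward, not_done,
                          policy_noise, noise_clip, max_action, discount):
    # Trick Three: perturb the target action with clipped Gaussian noise (target policy smoothing)
    next_action = actor_target(next_state)
    noise = (torch.randn_like(next_action) * policy_noise).clamp(-noise_clip, noise_clip)
    next_action = (next_action + noise).clamp(-max_action, max_action)

    # Trick One: take the smaller of the two target critics (clipped double-Q learning)
    q1 = critic1_target(next_state, next_action)
    q2 = critic2_target(next_state, next_action)
    return reward + not_done * discount * torch.min(q1, q2)
```
Trick Two simply means this target (and the critic update built on it) runs every step, while the actor and the target networks are updated only every `policy_freq` steps.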
>**In your own words, write a description of each of the tricks, stating cleary why they help learning.** Feel free to review the Spinning up descriptions and reviewing the TD3 paper. Some explanation is given in this notebook too. We will ask three of you to describe one of the tricks. As we discuss them, feel free to update your description.
- Trick One: (type answer here)
- Trick Two: (type answer here)
- Trick Three: (type answer here)
Initialize the TD3 policy.
>**But first, go back to the CartPole example (Day1, Part A) and find the cell where we created an instance of the PPO algorithm. What name did we give PPO in that case? What name does the author of TD3 give below?**
<details>
<summary>Click to reveal answer</summary>
For CartPole, we followed OpenAI's convention of naming the algorithm "model", but here, TD3 is given the name "policy". This kind of inconsistency in terminology is common in RL, so keep in mind that "model" and "policy" are equivalent between these two examples. You might see "agent" or "actor" used in other code as well.
</details>
<br>
```
policy = TD3.TD3(**kwargs)
```
This cell just loads a previous model or starts a new one.
```
if args['load_model'] != "":
policy_file = file_name if args['load_model'] == "default" else args['load_model']
policy.load(f"./models/{policy_file}")
```
## Experience Replay Buffer
This buffer is what keeps track of past experiences. The algorithm samples minibatches from it to update its value estimates (the critics) and, in turn, the policy. The buffer has a fixed capacity, so it does not keep every experience, but ideally it retains a representative range of them.
The experiences are state transitions tied to actions and rewards.
>**Look at the file `utils.py` for what else is stored in the buffer. Describe the values that you can by listing them here.**
- (type answer here)
- (type answer here)
- (type answer here)
- (type answer here)
- (type answer here)
```
replay_buffer = utils.ReplayBuffer(state_dim, action_dim)
```
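As a rough sketch of how such a buffer can be implemented (a simplified, hypothetical version for illustration; the real implementation and its exact fields are in `utils.py`):
```python
import numpy as np

class SimpleReplayBuffer:
    """Fixed-capacity ring buffer of (state, action, next_state, reward, not_done) transitions."""

    def __init__(self, state_dim, action_dim, max_size=int(1e6)):
        self.max_size = max_size
        self.ptr = 0        # index of the next slot to write
        self.size = 0       # number of transitions currently stored
        self.state = np.zeros((max_size, state_dim))
        self.action = np.zeros((max_size, action_dim))
        self.next_state = np.zeros((max_size, state_dim))
        self.reward = np.zeros((max_size, 1))
        self.not_done = np.zeros((max_size, 1))

    def add(self, state, action, next_state, reward, done_bool):
        self.state[self.ptr] = state
        self.action[self.ptr] = action
        self.next_state[self.ptr] = next_state
        self.reward[self.ptr] = reward
        self.not_done[self.ptr] = 1.0 - done_bool
        self.ptr = (self.ptr + 1) % self.max_size   # overwrite the oldest entry once full
        self.size = min(self.size + 1, self.max_size)

    def sample(self, batch_size):
        idx = np.random.randint(0, self.size, size=batch_size)
        return (self.state[idx], self.action[idx], self.next_state[idx],
                self.reward[idx], self.not_done[idx])
```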
## Learning over many episodes
Scan through the code in the next cell, then keep reading to learn about parts of the code.
```
# Evaluate untrained policy and save as the first one in a sequence of trained policies
evaluations = [eval_policy(policy, args['env'], args['seed'])]
state, done = env.reset(), False
episode_reward = 0
episode_timesteps = 0
episode_num = 0
for t in range(int(args['max_timesteps'])):
episode_timesteps += 1
# Select action randomly or according to policy
if t < args['start_timesteps']:
action = env.action_space.sample()
else:
action = (
policy.select_action(np.array(state))
+ np.random.normal(0, max_action * args['expl_noise'], size=action_dim)
).clip(-max_action, max_action)
# Perform action
next_state, reward, done, _ = env.step(action)
done_bool = float(done) if episode_timesteps < env._max_episode_steps else 0
# Store data in replay buffer
replay_buffer.add(state, action, next_state, reward, done_bool)
state = next_state
episode_reward += reward
# Train agent after collecting sufficient data
if t >= args['start_timesteps']:
policy.train(replay_buffer, args['batch_size'])
if done:
# +1 to account for 0 indexing. +0 on ep_timesteps since it will increment +1 even if done=True
writer.add_scalar('Reward', episode_reward, t+1)
print(f"Total T: {t+1} Episode Num: {episode_num+1} Episode T: {episode_timesteps} Reward: {episode_reward:.3f}")
# Reset environment
state, done = env.reset(), False
episode_reward = 0
episode_timesteps = 0
episode_num += 1
# Evaluate episode
if (t + 1) % args['eval_freq'] == 0:
evaluations.append(eval_policy(policy, args['env'], args['seed']))
np.save(f"./results/{file_name}", evaluations)
if args['save_model']:
policy.save(f"./models/{file_name}")
writer.export_scalars_to_json("./all_scalars.json")
writer.close()
```
While the cell above is running, feel free to launch TensorBoard in another window and look for the 'Scalars' tab. To do so, run the command below in a terminal (Linux):
`tensorboard --logdir ./tmp/`
It may take some time for the data to show up (I usually see it around 30k steps). Refresh until you see it, then enable auto-refresh, if you want, in the settings (gear icon).
In the following section, note that for the first `start_timesteps` number of time steps, the action is simply filled from random sampling of possible choices; this helps fill the replay buffer and give a baseline before actual policy choices are made.
```python
if t < args['start_timesteps']:
action = env.action_space.sample()
else:
action = (
policy.select_action(np.array(state))
+ np.random.normal(0, max_action * args['expl_noise'], size=action_dim)
).clip(-max_action, max_action)
```
The bulk of the actual training happens in only a few lines. The section below takes the action selected above, applies it to the environment, and gets back the new state along with the reward and a done flag. It then sets `done_bool`, which treats an episode that ends only because it hit the environment's step limit as non-terminal, so a timeout does not cut off the value bootstrap.
```python
next_state, reward, done, _ = env.step(action)
done_bool = float(done) if episode_timesteps < env._max_episode_steps else 0
```
The outcome of the time step is saved to the experience replay buffer.
```python
replay_buffer.add(state, action, next_state, reward, done_bool)
```
Then the code updates the state, saves the reward, and, if the replay buffer has received enough baseline values, trains the policy. At this point, the ant will explore the environment by trying to move its legs in ways that earn high rewards.
```python
state = next_state
episode_reward += reward
# Train agent after collecting sufficient data
if t >= args['start_timesteps']:
policy.train(replay_buffer, args['batch_size'])
```
Once the environment reaches the described `done` state, the environment and some variables are reset.
```python
if done:
# +1 to account for 0 indexing. +0 on ep_timesteps since it will increment +1 even if done=True
print(f"Total T: {t+1} Episode Num: {episode_num+1} Episode T: {episode_timesteps} Reward: {episode_reward:.3f}")
# Reset environment
state, done = env.reset(), False
episode_reward = 0
episode_timesteps = 0
episode_num += 1
```
Before starting a new episode, every `eval_freq` time steps the policy is evaluated on a number of episodes outside the training process, and the current policy is saved for good measure.
```python
# Evaluate episode
if (t + 1) % args['eval_freq'] == 0:
evaluations.append(eval_policy(policy, args['env'], args['seed']))
np.save(f"./results/{file_name}", evaluations)
if args['save_model']:
policy.save(f"./models/{file_name}")
```
That's it. It's nice having all the complicated heavy lifting already coded for us.
If you run the notebook as-is, it will train for two million time steps with the standard hyperparameters the TD3 authors set up, and out will pop a policy that lets the robot ant sprint, as in the image below.
```
import IPython.display as ipd
ipd.Image("../animations/base_ant.png")
```
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
<i>Licensed under the MIT License.</i>
# Item2Item recommendations with DKN
The second task is about knowledge-aware item-to-item recommendations. We still use DKN for demonstration.
The learning framework is illustrated as follows:
<img src="https://recodatasets.blob.core.windows.net/kdd2020/images/Item2item-framework.JPG" width="500">
```
import sys
sys.path.append("../../../")
from reco_utils.recommender.deeprec.deeprec_utils import *
from reco_utils.recommender.deeprec.models.dkn_item2item import *
from reco_utils.recommender.deeprec.io.dkn_item2item_iterator import *
import time
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
data_path = 'data_folder/my/DKN-training-folder'
yaml_file = './dkn.yaml' #os.path.join(data_path, r'../../../../../../dkn.yaml')
train_file = os.path.join(data_path, r'item2item_train_instances.txt')
valid_file = os.path.join(data_path, r'item2item_valid_instances.txt')
news_feature_file = os.path.join(data_path, r'../paper_feature.txt')
wordEmb_file = os.path.join(data_path, r'word_embedding.npy')
entityEmb_file = os.path.join(data_path, r'entity_embedding.npy')
contextEmb_file = os.path.join(data_path, r'context_embedding.npy')
infer_embedding_file = os.path.join(data_path, r'infer_embedding_item2item.txt')
news_feature_file = os.path.join(data_path, r'../paper_feature.txt')
epoch = 10
hparams = prepare_hparams(yaml_file,
news_feature_file=news_feature_file,
wordEmb_file=wordEmb_file,
entityEmb_file=entityEmb_file,
contextEmb_file=contextEmb_file,
epochs=epoch,
is_clip_norm=True,
max_grad_norm=0.5,
his_size=20,
MODEL_DIR=os.path.join(data_path, 'save_models'),
learning_rate=0.0002,
embed_l2=0.0,
layer_l2=0.0,
batch_size=32,
use_entity=True,
use_context=True
)
print(hparams.values)
```
To build an item2item recommendation model based on the Recommender repo, you only need to modify two files:
1. Data Loader : dkn_item2item_iterator.py
2. Model : dkn_item2item.py
<img src="https://recodatasets.blob.core.windows.net/kdd2020/images%2Fcode-changed-item2item.JPG" width="700">
```
input_creator = DKNItem2itemTextIterator
hparams.neg_num=9
```
A special parameter is `neg_num`. It indicates how many negative instances accompany each positive instance in a group for the softmax computation.
Training and validation instances are organized as follows:
<img src="https://recodatasets.blob.core.windows.net/kdd2020/images/item2item-instances.JPG" width="700">
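To illustrate the idea with a standalone sketch (this is not code from the repo), each group is scored and trained with a softmax over its items, with the positive item as the label:
```python
import numpy as np

def group_softmax_loss(scores):
    # scores: similarity scores for one group, ordered [positive, negative_1, ..., negative_neg_num]
    exp_scores = np.exp(scores - np.max(scores))   # numerically stable softmax
    probs = exp_scores / exp_scores.sum()
    return -np.log(probs[0])                       # cross-entropy with the positive item as the target

# Example group: 1 positive + 9 negatives (neg_num = 9)
scores = np.array([2.1, 0.3, -0.5, 0.8, 0.0, -1.2, 0.4, 0.1, -0.3, 0.6])
print(group_softmax_loss(scores))
```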
```
model = DKNItem2Item(hparams, input_creator)
t01 = time.time()
print(model.run_eval(valid_file))
t02 = time.time()
print((t02 - t01) / 60)
model.fit(train_file, valid_file)
model.run_get_embedding(news_feature_file, infer_embedding_file)
```
Again, we compare the performance of DKN with knowledge entities against a variant without knowledge entities (DKN(-)):
| Models | Group-AUC | MRR |NDCG@2 | NDCG@4 |
| :------| :------: | :------: | :------: | :------ |
| DKN | 0.9557 | 0.8993 | 0.8951 | 0.9123 |
| DKN(-) | 0.9506 | 0.8817 | 0.8758 | 0.8982 |
```
import sys
sys.path.append('./../')
import json
import numpy as np
from skimage import color
from skimage import filters
from skimage import transform
import util
REQUEST = json.dumps({
'path' : {},
'args' : {}
})
```
# API that simply fetches an image from the Tellus API
```
# GET /img/:img_type/:z/:x/:y
# Read the request
request = json.loads(REQUEST)
# Extract the required parameters
img_type = request['path'].get('img_type')
x = request['path'].get('x')
y = request['path'].get('y')
z = request['path'].get('z')
if util.input_validation_check(x, y, z, img_type):
img_np = util.get_image_using_tellus_api(x, y, z, img_type)
if len(img_np) != 0:
img_base64 = util.make_base64_image(img_np)
print('<html><img src="data:image/png;base64, {}" /></html>'.format(img_base64))
else:
pass
# ResponseInfo GET /img/:img_type/:z/:x/:y
print(json.dumps({"headers" : {"Content-Type" : "text/html"},"status" : 201}))
```
# API that returns an NDSI image based on the optical band images
```
def make_snow_filter(img_true):
    """Create a filter based on brightness (the V channel)"""
    # Convert the RGB image to an HSV image
    img_hsv = color.rgb2hsv(img_true.astype("uint8"))
    img_v = img_hsv[:,:,2]
    # Initialize with zeros
    height, width = img_v.shape
    snow_filter = np.zeros((height, width))
    # Binarize based on a threshold
    snow_filter = img_v < 70 / 255
    # Convert to RGB
    snow_filter = color.gray2rgb(snow_filter)
    return snow_filter
def ditect_ndsi(img_band2, img_band4):
    """Create an NDSI (Normalized Difference Snow Index) image"""
    # Compute the NDSI
    img_NDSI = (img_band2[:,:,1] - img_band4[:,:,1]) / (img_band2[:,:,1] + img_band4[:,:,1])
    # Set the maximum and minimum and normalize to the 0-1 range
    img_NDSI = np.clip(img_NDSI + 0.3, 0, 1)
    # Convert from grayscale to RGB
    height, width = img_NDSI.shape
    img_NDSI_rgb = np.zeros((height, width, 3))
    img_NDSI_rgb[:,:,0] = 0
    img_NDSI_rgb[:,:,1] = img_NDSI * 255
    img_NDSI_rgb[:,:,2] = 255
    return img_NDSI_rgb
# GET /ndsi_img/:z/:x/:y
# Read the request
request = json.loads(REQUEST)
# Extract the required parameters
x = request['path'].get('x')
y = request['path'].get('y')
z = request['path'].get('z')
# z="13"
# x="7252"
# y="3234"
if util.input_validation_check(x, y, z):
    # Fetch the data
    img_band1 = util.get_image_using_tellus_api(x, y, z, "band1")
    img_band2 = util.get_image_using_tellus_api(x, y, z, "band2")
    img_band3 = util.get_image_using_tellus_api(x, y, z, "band3")
    img_band4 = util.get_image_using_tellus_api(x, y, z, "band4")
    img_true = np.c_[img_band3[:,:,0:1], img_band2[:,:,1:2], img_band1[:,:,2:3]]
    # Create the filter
    snow_filter = make_snow_filter(img_true)
    # Get the NDSI image
    img_ndsi = ditect_ndsi(img_band2, img_band4)
    # Apply the filter
    img_output = img_true * snow_filter + img_ndsi * (1 - snow_filter)
    # Convert the type for display
    img_output_base64 = util.make_base64_image(img_output.astype("uint8"))
    # Output
    print('<html><img src="data:image/png;base64, {}" /></html>'.format(img_output_base64))
# ResponseInfo GET /ndsi_img/:z/:x/:y
print(json.dumps({"headers" : {"Content-Type" : "text/html"},"status" : 201}))
```
# API that returns analysis results based on SAR images
```
def analysis_sar_diff(img_fuji_sep, img_fuji_dec):
    # Extract the difference
    img_fuji_diff = img_fuji_sep - img_fuji_dec
    # Remove noise
    img_fuji_diff_gaus = filters.gaussian(img_fuji_diff, sigma=8)
    # Resize for display
    img_fuji_diff_gaus = transform.resize(img_fuji_diff_gaus, (256,256), mode='reflect', anti_aliasing=True)
    # Normalize
    img_fuji_diff_gaus_norm = np.clip(img_fuji_diff_gaus * 10 ** 20, 0, 255)
    # Convert from grayscale to RGB
    height, width = img_fuji_diff_gaus_norm.shape
    img_output = np.zeros((height, width, 3))
    img_output[:,:,0] = 127
    img_output[:,:,1] = img_fuji_diff_gaus_norm
    img_output[:,:,2] = 255
    return img_output
# GET /sar_analysis_img
# Read the request
request = json.loads(REQUEST)
# Load the data
img_fuji_sep, img_fuji_dec = util.get_local_sar_image()
# Analyze
img_output = analysis_sar_diff(img_fuji_sep, img_fuji_dec)
# Convert the type for display
img_output_base64 = util.make_base64_image(img_output.astype("uint8"))
# Output
print('<html><img src="data:image/png;base64, {}" /></html>'.format(img_output_base64))
# ResponseInfo GET /sar_analysis_img
print(json.dumps({"headers" : {"Content-Type" : "text/html"},"status" : 201}))
```
# Name
Data preparation using Spark on YARN with Cloud Dataproc
# Label
Cloud Dataproc, GCP, Cloud Storage, Spark, Kubeflow, pipelines, components, YARN
# Summary
A Kubeflow Pipeline component to prepare data by submitting a Spark job on YARN to Cloud Dataproc.
# Details
## Intended use
Use the component to run an Apache Spark job as one preprocessing step in a Kubeflow Pipeline.
## Runtime arguments
Argument | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :--- | :------- | :------| :------|
project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to.|No | GCPProjectID | | |
region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | |
cluster_name | The name of the cluster to run the job. | No | String | | |
main_jar_file_uri | The Hadoop Compatible Filesystem (HCFS) URI of the JAR file that contains the main class. | No | GCSPath | | |
main_class | The name of the driver's main class. The JAR file that contains the class must be either in the default CLASSPATH or specified in `spark_job.jarFileUris`.| No | | | |
args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.| Yes | | | |
spark_job | The payload of a [SparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/SparkJob).| Yes | | | |
job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | | | |
wait_interval | The number of seconds to wait between polling the operation. | Yes | | | 30 |
## Output
Name | Description | Type
:--- | :---------- | :---
job_id | The ID of the created job. | String
## Cautions & requirements
To use the component, you must:
* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).
* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).
* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.
* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project.
## Detailed description
This component creates a Spark job from [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).
Follow these steps to use the component in a pipeline:
1. Install the Kubeflow Pipeline SDK:
```
%%capture --no-stderr
!pip3 install kfp --upgrade
```
2. Load the component using KFP SDK
```
import kfp.components as comp
dataproc_submit_spark_job_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.4.1/components/gcp/dataproc/submit_spark_job/component.yaml')
help(dataproc_submit_spark_job_op)
```
### Sample
Note: The following sample code works in an IPython notebook or directly in Python code.
#### Set up a Dataproc cluster
[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code.
#### Prepare a Spark job
Upload your Spark JAR file to a Cloud Storage bucket. In the sample, we use a JAR file that is preinstalled in the main cluster: `file:///usr/lib/spark/examples/jars/spark-examples.jar`.
Here is the [source code of the sample](https://github.com/apache/spark/blob/master/examples/src/main/java/org/apache/spark/examples/JavaSparkPi.java).
To package a self-contained Spark application, follow these [instructions](https://spark.apache.org/docs/latest/quick-start.html#self-contained-applications).
#### Set sample parameters
```
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
SPARK_FILE_URI = 'file:///usr/lib/spark/examples/jars/spark-examples.jar'
MAIN_CLASS = 'org.apache.spark.examples.SparkPi'
ARGS = ['1000']
EXPERIMENT_NAME = 'Dataproc - Submit Spark Job'
```
#### Example pipeline that uses the component
```
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc submit Spark job pipeline',
description='Dataproc submit Spark job pipeline'
)
def dataproc_submit_spark_job_pipeline(
project_id = PROJECT_ID,
region = REGION,
cluster_name = CLUSTER_NAME,
main_jar_file_uri = '',
main_class = MAIN_CLASS,
args = json.dumps(ARGS),
spark_job=json.dumps({ 'jarFileUris': [ SPARK_FILE_URI ] }),
job='{}',
wait_interval='30'
):
dataproc_submit_spark_job_op(
project_id=project_id,
region=region,
cluster_name=cluster_name,
main_jar_file_uri=main_jar_file_uri,
main_class=main_class,
args=args,
spark_job=spark_job,
job=job,
wait_interval=wait_interval)
```
#### Compile the pipeline
```
pipeline_func = dataproc_submit_spark_job_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
```
#### Submit the pipeline for execution
```
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
```
## References
* [Component Python code](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/component_sdk/python/kfp_component/google/dataproc/_submit_spark_job.py)
* [Component Docker file](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/Dockerfile)
* [Sample notebook](https://github.com/kubeflow/pipelines/blob/master/components/gcp/dataproc/submit_spark_job/sample.ipynb)
* [Dataproc SparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/SparkJob)
## License
By deploying or using this software you agree to comply with the [AI Hub Terms of Service](https://aihub.cloud.google.com/u/0/aihub-tos) and the [Google APIs Terms of Service](https://developers.google.com/terms/). To the extent of a direct conflict of terms, the AI Hub Terms of Service will control.
```
data = [[66707599984, 'Conservador', (5100., 3500., 1400., 200.)],
[55695397315, 'Conservador', (4900., 3000., 1400., 200.)],
[63743886918, 'Conservador', (4700., 3200., 1300., 200.)],
[55941368774, 'Conservador', (4600., 3100., 1500., 200.)],
[75486280874, 'Conservador', (5000., 3600., 1400., 200.)],
[53164949799, 'Conservador', (5400., 3900., 1700., 400.)],
[39898704131, 'Conservador', (4600., 3400., 1400., 300.)],
[53740901207, 'Conservador', (5000., 3400., 1500., 200.)],
[51735950236, 'Conservador', (4400., 2900., 1400., 200.)],
[47305108951, 'Conservador', (4900., 3100., 1500., 100.)],
[63858864633, 'Conservador', (5400., 3700., 1500., 200.)],
[53363167240, 'Conservador', (4800., 3400., 1600., 200.)],
[72133754195, 'Conservador', (4800., 3000., 1400., 100.)],
[52802483512, 'Conservador', (4300., 3000., 1100., 100.)],
[57925287214, 'Conservador', (4800., 3400., 1900., 200.)],
[74354632224, 'Conservador', (5000., 3000., 1600., 200.)],
[64020216626, 'Conservador', (5000., 3400., 1600., 400.)],
[78223722856, 'Conservador', (5200., 3500., 1500., 200.)],
[58245228846, 'Conservador', (5200., 3400., 1400., 200.)],
[74490686776, 'Conservador', (4700., 3200., 1600., 200.)],
[48646824781, 'Conservador', (4800., 3100., 1600., 200.)],
[77381458676, 'Conservador', (5400., 3400., 1500., 400.)],
[41615431874, 'Conservador', (5200., 4100., 1500., 100.)],
[52163844491, 'Conservador', (5500., 4200., 1400., 200.)],
[70276304567, 'Conservador', (4900., 3100., 1500., 200.)],
[69119828185, 'Conservador', (5000., 3200., 1200., 200.)],
[65441690046, 'Conservador', (5500., 3500., 1300., 200.)],
[56457227894, 'Conservador', (4900., 3600., 1400., 100.)],
[46939428126, 'Conservador', (4400., 3000., 1300., 200.)],
[60979942480, 'Conservador', (5100., 3400., 1500., 200.)],
[41648583220, 'Conservador', (5000., 3500., 1300., 300.)],
[50376331791, 'Conservador', (4500., 2300., 1300., 300.)],
[67008801023, 'Conservador', (4400., 3200., 1300., 200.)],
[72149193419, 'Conservador', (5000., 3500., 1600., 600.)],
[62830733382, 'Conservador', (5100., 3800., 1900., 400.)],
[56716675811, 'Conservador', (4800., 3000., 1400., 300.)],
[61089667146, 'Conservador', (5100., 3800., 1600., 200.)],
[47795509468, 'Conservador', (4600., 3200., 1400., 200.)],
[60899885693, 'Conservador', (5300., 3700., 1500., 200.)],
[53433670705, 'Conservador', (5000., 3300., 1400., 200.)],
[54850120580, 'Moderado', (7000., 3200., 4700., 1400.)],
[71457789994, 'Moderado', (6400., 3200., 4500., 1500.)],
[67692777563, 'Moderado', (6900., 3100., 4900., 1500.)],
[43133573182, 'Moderado', (5500., 2300., 4000., 1300.)],
[55150612815, 'Moderado', (6500., 2800., 4600., 1500.)],
[48211725243, 'Moderado', (5700., 2800., 4500., 1300.)],
[76686463776, 'Moderado', (6300., 3300., 4700., 1600.)],
[71971000560, 'Moderado', (4900., 2400., 3300., 1000.)],
[40307235992, 'Moderado', (6600., 2900., 4600., 1300.)],
[44826533081, 'Moderado', (5200., 2700., 3900., 1400.)],
[45735414894, 'Moderado', (5900., 3200., 4800., 1800.)],
[57137146514, 'Moderado', (6100., 2800., 4000., 1300.)],
[53657058251, 'Moderado', (6300., 2500., 4900., 1500.)],
[52941460485, 'Moderado', (6100., 2800., 4700., 1200.)],
[44306600683, 'Moderado', (6400., 2900., 4300., 1300.)],
[43460747924, 'Moderado', (6600., 3000., 4400., 1400.)],
[75590376075, 'Moderado', (6800., 2800., 4800., 1400.)],
[68267282206, 'Moderado', (6700., 3000., 5000., 1700.)],
[77567920298, 'Moderado', (6000., 2900., 4500., 1500.)],
[67600419504, 'Moderado', (5700., 2600., 3500., 1000.)],
[44902189811, 'Moderado', (5500., 2400., 3800., 1100.)],
[62966866614, 'Moderado', (5500., 2400., 3700., 1000.)],
[56182108880, 'Moderado', (5800., 2700., 3900., 1200.)],
[78299785392, 'Moderado', (6000., 2700., 5100., 1600.)],
[45206071878, 'Moderado', (5400., 3000., 4500., 1500.)],
[57381925887, 'Moderado', (6000., 3400., 4500., 1600.)],
[65654934891, 'Moderado', (6700., 3100., 4700., 1500.)],
[56130640481, 'Moderado', (6300., 2300., 4400., 1300.)],
[59667611672, 'Moderado', (5600., 3000., 4100., 1300.)],
[40349334385, 'Moderado', (5500., 2500., 4000., 1300.)],
[68422640081, 'Moderado', (5500., 2600., 4400., 1200.)],
[55245923439, 'Moderado', (6100., 3000., 4600., 1400.)],
[51286696873, 'Moderado', (5800., 2600., 4000., 1200.)],
[41065279767, 'Moderado', (5000., 2300., 3300., 1000.)],
[42866454119, 'Moderado', (5600., 2700., 4200., 1300.)],
[61962944542, 'Moderado', (5700., 3000., 4200., 1200.)],
[48623501235, 'Moderado', (5700., 2900., 4200., 1300.)],
[49475220139, 'Moderado', (6200., 2900., 4300., 1300.)],
[52245218531, 'Moderado', (5100., 2500., 3000., 1100.)],
[50932926697, 'Moderado', (5700., 2800., 4100., 1300.)],
[47432932248, 'Agressivo', (6300., 3300., 6000., 2500.)],
[39321991579, 'Agressivo', (5800., 2700., 5100., 1900.)],
[46283759608, 'Agressivo', (7100., 3000., 5900., 2100.)],
[56996272538, 'Agressivo', (6300., 2900., 5600., 1800.)],
[77232189978, 'Agressivo', (6500., 3000., 5800., 2200.)],
[77183282421, 'Agressivo', (7600., 3000., 6600., 2100.)],
[42857147573, 'Agressivo', (4900., 2500., 4500., 1700.)],
[39331584043, 'Agressivo', (7300., 2900., 6300., 1800.)],
[48130345228, 'Agressivo', (6700., 2500., 5800., 1800.)],
[71422443953, 'Agressivo', (7200., 3600., 6100., 2500.)],
[72508507904, 'Agressivo', (6900., 3200., 5700., 2300.)],
[41188727558, 'Agressivo', (5600., 2800., 4900., 2000.)],
[61358776640, 'Agressivo', (7700., 2800., 6700., 2000.)],
[66934042323, 'Agressivo', (6300., 2700., 4900., 1800.)],
[40622495567, 'Agressivo', (6700., 3300., 5700., 2100.)],
[57221661311, 'Agressivo', (7200., 3200., 6000., 1800.)],
[45159362930, 'Agressivo', (6200., 2800., 4800., 1800.)],
[45018975174, 'Agressivo', (6100., 3000., 4900., 1800.)],
[70685429140, 'Agressivo', (6400., 2800., 5600., 2100.)],
[61808723477, 'Agressivo', (7200., 3000., 5800., 1600.)],
[56363906548, 'Agressivo', (7400., 2800., 6100., 1900.)],
[39646194720, 'Agressivo', (7900., 3800., 6400., 2000.)],
[55385494438, 'Agressivo', (6400., 2800., 5600., 2200.)],
[75796138061, 'Agressivo', (6300., 2800., 5100., 1500.)],
[53595767857, 'Agressivo', (6100., 2600., 5600., 1400.)],
[48758828080, 'Agressivo', (7700., 3000., 6100., 2300.)],
[58387651356, 'Agressivo', (6300., 3400., 5600., 2400.)],
[72846931192, 'Agressivo', (6400., 3100., 5500., 1800.)],
[47046896346, 'Agressivo', (6000., 3000., 4800., 1800.)],
[69730292799, 'Agressivo', (6900., 3100., 5400., 2100.)],
[48177836349, 'Agressivo', (6700., 3100., 5600., 2400.)],
[57976326635, 'Agressivo', (6900., 3100., 5100., 2300.)],
[55710813002, 'Agressivo', (5800., 2700., 5100., 1900.)],
[64028580439, 'Agressivo', (6800., 3200., 5900., 2300.)],
[49962942971, 'Agressivo', (6700., 3300., 5700., 2500.)],
[47250893163, 'Agressivo', (6700., 3000., 5200., 2300.)],
[75559276274, 'Agressivo', (6300., 2500., 5000., 1900.)],
[58529878272, 'Agressivo', (6500., 3000., 5200., 2000.)],
[76005896622, 'Agressivo', (6200., 3400., 5400., 2300.)],
[49212614633, 'Agressivo', (5900., 3000., 5100., 1800.)]]
no_class = [[45926320819, '', (5800., 4000., 1200., 200.)],
[52559670741, '', (5700., 4400., 1500., 400.)],
[59016004832, '', (5400., 3900., 1300., 400.)],
[66175672425, '', (5100., 3500., 1400., 300.)],
[53330429526, '', (5700., 3800., 1700., 300.)],
[43765563403, '', (5100., 3800., 1500., 300.)],
[68020822591, '', (5400., 3400., 1700., 200.)],
[53939481689, '', (5100., 3700., 1500., 400.)],
[47014057561, '', (4600., 3600., 1000., 200.)],
[57183542047, '', (5100., 3300., 1700., 500.)],
[68518284363, '', (5000., 2000., 3500., 1000.)],
[65806049885, '', (5900., 3000., 4200., 1500.)],
[54128073086, '', (6000., 2200., 4000., 1000.)],
[41306785494, '', (6100., 2900., 4700., 1400.)],
[65234831039, '', (5600., 2900., 3600., 1300.)],
[50964498067, '', (6700., 3100., 4400., 1400.)],
[50810951429, '', (5600., 3000., 4500., 1500.)],
[48765044397, '', (5800., 2700., 4100., 1000.)],
[41960083761, '', (6200., 2200., 4500., 1500.)],
[76657763082, '', (5600., 2500., 3900., 1100.)],
[64726487742, '', (6500., 3200., 5100., 2000.)],
[75746566283, '', (6400., 2700., 5300., 1900.)],
[78576734793, '', (6800., 3000., 5500., 2100.)],
[56440141847, '', (5700., 2500., 5000., 2000.)],
[66827423000, '', (5800., 2800., 5100., 2400.)],
[45267873396, '', (6400., 3200., 5300., 2300.)],
[46387191493, '', (6500., 3000., 5500., 1800.)],
[54273611732, '', (7700., 3800., 6700., 2200.)],
[75135392881, '', (7700., 2600., 6900., 2300.)],
[64703873108, '', (6000., 2200., 5000., 1500.)]]
from knn import knn
resultado_final = {}
calculo = knn(data)
for i in no_class:
    # Distances from this unclassified client to every labeled client
    linha = calculo.calcula_distancia(i)
    # Keep the 5 nearest neighbors
    vizinhos = calculo.k_vizinhos(linha, 5)
    # Profiles (labels) of those neighbors
    perfis = calculo.retorna_perfil(vizinhos)
    # Count the profiles and keep the most frequent one
    retorno = calculo.moda_lista(perfis)
    resultado_final[i[0]] = max(retorno, key=retorno.get)

for cpf, tipo in resultado_final.items():
    cpf = str(cpf)
    cpf = cpf[:3] + "." + cpf[3:6] + "." + cpf[6:9] + "-" + cpf[9:]
    print(f'The investor profile for CPF {cpf} is: {tipo}')
```
```
!pip install --upgrade tables
!pip install eli5
!pip install xgboost
!pip install hyperopt
import pandas as pd
import numpy as np
from hyperopt import hp, fmin, tpe, STATUS_OK
import xgboost as xgb
from sklearn.metrics import mean_absolute_error as mae
from sklearn.model_selection import cross_val_score, KFold
import eli5
from eli5.sklearn import PermutationImportance
```
We change into our project directory.
```
cd "/content/drive/My Drive/Colab Notebooks/dw_matrix/matrix_two/dw_matrix_cargb-"
```
We load our data.
```
df = pd.read_hdf('data/car.h5')
df.shape
#Feature Engineering
SUFFIX_CAT = '__cat'
for feat in df.columns:
if isinstance(df[feat][0], list): continue
factorized_values = df[feat].factorize()[0]
if SUFFIX_CAT in feat:
df[feat] = factorized_values
else:
df[feat + SUFFIX_CAT] = factorized_values
df['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x) == 'None' else int(x))
df['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(lambda x: -1 if str(x) == 'None' else int(str(x).split('cm')[0].replace(' ', '')))
df['param_moc'] = df['param_moc'].map(lambda x: -1 if str(x) == 'None' else int(x.split(' ')[0]))
def run_model(model, feats):
X = df[feats].values
y = df['price_value'].values
scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
feats = [
    'param_napęd__cat', 'param_rok-produkcji', 'param_stan__cat', 'param_skrzynia-biegów__cat',
    'param_faktura-vat__cat', 'param_moc', 'param_marka-pojazdu__cat', 'feature_kamera-cofania__cat',
    'param_typ__cat', 'param_pojemność-skokowa', 'seller_name__cat', 'feature_wspomaganie-kierownicy__cat',
    'param_model-pojazdu__cat', 'param_wersja__cat', 'param_kod-silnika__cat', 'feature_system-start-stop__cat',
    'feature_asystent-pasa-ruchu__cat', 'feature_czujniki-parkowania-przednie__cat',
    'feature_łopatki-zmiany-biegów__cat', 'feature_regulowane-zawieszenie__cat'
]
xgb_params = {
'max_depth': 5,
'n_estimators': 50,
    'learning_rate': 0.1,
'seed': 0
}
run_model(xgb.XGBRegressor(**xgb_params), feats)
```
When we removed the `__cat` suffix from the three features param_rok-produkcji, param_pojemność-skokowa and param_moc in `feats`, the score improved with each removal.
Let's think about how we can choose the **xgb_params** values.
We set ranges and steps for the parameters, e.g. **'max_depth': range 4 to 10 with step 2**; this is our search space, and we pass it to an algorithm that optimizes over it:
***Hyperopt***
```
def obj_func(params):
print("Training with params: ")
print(params)
mean_mae, score_std = run_model(xgb.XGBRegressor(**params), feats)
return {'loss': np.abs(mean_mae), 'status': STATUS_OK}
#space
xgb_reg_params = {
'learning_rate': hp.choice('learning_rate', np.arange(0.05, 0.31, 0.05)),
'max_depth': hp.choice('max_depth', np.arange(5, 16, 1, dtype=int)),
'subsample': hp.quniform('subsample', 0.5, 1, 0.05),
'colsample_bytree': hp.quniform('colsample_bytree', 0.5, 1, 0.05),
'objective': 'reg:squarederror',
'n_estimators': 100,
'seed': 0,
}
## run
best = fmin(obj_func, xgb_reg_params, algo=tpe.suggest, max_evals=25)
best
```
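Note that for `hp.choice` parameters, `fmin` returns the *index* of the chosen option rather than the value itself. A small sketch of converting the result back into concrete parameter values (assuming the `xgb_reg_params` space defined above):
```python
from hyperopt import space_eval

# Map the index-based result of fmin back to concrete parameter values
best_params = space_eval(xgb_reg_params, best)
print(best_params)

# The winning configuration can then be evaluated once more, e.g.:
# run_model(xgb.XGBRegressor(**best_params), feats)
```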
# Logistic Regression
## Agenda
1. Refresh your memory on how to do linear regression in scikit-learn
2. Attempt to use linear regression for classification
3. Show you why logistic regression is a better alternative for classification
4. Brief overview of probability, odds, e, log, and log-odds
5. Explain the form of logistic regression
6. Explain how to interpret logistic regression coefficients
7. Demonstrate how logistic regression works with categorical features
8. Compare logistic regression with other models
## Part 1: Predicting a Continuous Response
```
# glass identification dataset
import pandas as pd
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/glass/glass.data'
col_names = ['id','ri','na','mg','al','si','k','ca','ba','fe','glass_type']
glass = pd.read_csv(url, names=col_names, index_col='id')
glass.sort_values('al', inplace=True)
glass.head()
```
**Question:** Pretend that we want to predict **ri**, and our only feature is **al**. How could we do it using machine learning?
**Answer:** We could frame it as a regression problem, and use a linear regression model with **al** as the only feature and **ri** as the response.
**Question:** How would we **visualize** this model?
**Answer:** Create a scatter plot with **al** on the x-axis and **ri** on the y-axis, and draw the line of best fit.
```
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.set(font_scale=1.5)
sns.lmplot(x='al', y='ri', data=glass, ci=None)
```
**Question:** How would we draw this plot without using Seaborn?
```
# scatter plot using Pandas
glass.plot(kind='scatter', x='al', y='ri')
# equivalent scatter plot using Matplotlib
plt.scatter(glass.al, glass.ri)
plt.xlabel('al')
plt.ylabel('ri')
# fit a linear regression model
from sklearn.linear_model import LinearRegression
feature_cols = ['al']
X = glass[feature_cols]
y = glass.ri
linreg = LinearRegression()
linreg.fit(X, y)
# make predictions for all values of X and add back to the original dataframe
glass['ri_pred'] = linreg.predict(X)
# plot those predictions connected by a line
plt.plot(glass.al, glass.ri_pred, color='red')
# put the plots together
plt.scatter(glass.al, glass.ri)
plt.plot(glass.al, glass.ri_pred, color='red')
```
### Refresher: interpreting linear regression coefficients
Linear regression equation: $y = \beta_0 + \beta_1x$
```
# compute prediction for al=2 using the equation
linreg.intercept_ + linreg.coef_ * 2
# compute prediction for al=2 using the predict method
linreg.predict([[2]])
# examine coefficient for al
list(zip(feature_cols, linreg.coef_))
```
**Interpretation:** A 1 unit increase in 'al' is associated with a 0.0025 unit decrease in 'ri'.
```
# increasing al by 1 (so that al=3) decreases ri by 0.0025
1.51699012 - 0.0024776063874696243
# compute prediction for al=3 using the predict method
linreg.predict([[3]])
```
## Part 2: Predicting a Categorical Response
```
# examine glass_type
glass.glass_type.value_counts().sort_index()
# types 1, 2, 3 are window glass
# types 5, 6, 7 are household glass
glass['household'] = glass.glass_type.map({1:0, 2:0, 3:0, 5:1, 6:1, 7:1})
glass.head()
```
Let's change our task, so that we're predicting **household** using **al**. Let's visualize the relationship to figure out how to do this:
```
plt.scatter(glass.al, glass.household)
plt.xlabel('al')
plt.ylabel('household')
```
Let's draw a **regression line**, like we did before:
```
# fit a linear regression model and store the predictions
feature_cols = ['al']
X = glass[feature_cols]
y = glass.household
linreg.fit(X, y)
glass['household_pred'] = linreg.predict(X)
# scatter plot that includes the regression line
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred, color='red')
plt.xlabel('al')
plt.ylabel('household')
```
If **al=3**, what class do we predict for household? **1**
If **al=1.5**, what class do we predict for household? **0**
We predict the 0 class for **lower** values of al, and the 1 class for **higher** values of al. What's our cutoff value? Around **al=2**, because that's where the linear regression line crosses the midpoint between predicting class 0 and class 1.
Therefore, we'll say that if **household_pred >= 0.5**, we predict a class of **1**, else we predict a class of **0**.
```
# understanding np.where
import numpy as np
nums = np.array([5, 15, 8])
# np.where returns the first value if the condition is True, and the second value if the condition is False
np.where(nums > 10, 'big', 'small')
# transform household_pred to 1 or 0
glass['household_pred_class'] = np.where(glass.household_pred >= 0.5, 1, 0)
glass.head()
# plot the class predictions
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred_class, color='red')
plt.xlabel('al')
plt.ylabel('household')
```
## Part 3: Using Logistic Regression Instead
Logistic regression can do what we just did:
```
# fit a logistic regression model and store the class predictions
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(C=1e9)
logreg.fit(X, y)
glass['household_pred_class'] = logreg.predict(X)
# plot the class predictions
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred_class, color='red')
plt.xlabel('al')
plt.ylabel('household')
```
What if we wanted the **predicted probabilities** instead of just the **class predictions**, to understand how confident we are in a given prediction?
```
# store the predicted probabilites of class 1
glass['household_pred_prob'] = logreg.predict_proba(X)[:, 1]
# plot the predicted probabilities
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred_prob, color='red')
plt.xlabel('al')
plt.ylabel('household')
# examine some example predictions
print(logreg.predict_proba([[1]]))
print(logreg.predict_proba([[2]]))
print(logreg.predict_proba([[3]]))
```
The first column indicates the predicted probability of **class 0**, and the second column indicates the predicted probability of **class 1**.
## Part 4: Probability, odds, e, log, log-odds
$$probability = \frac {one\ outcome} {all\ outcomes}$$
$$odds = \frac {one\ outcome} {all\ other\ outcomes}$$
Examples:
- Dice roll of 1: probability = 1/6, odds = 1/5
- Even dice roll: probability = 3/6, odds = 3/3 = 1
- Dice roll less than 5: probability = 4/6, odds = 4/2 = 2
$$odds = \frac {probability} {1 - probability}$$
$$probability = \frac {odds} {1 + odds}$$
```
# create a table of probability versus odds
table = pd.DataFrame({'probability':[0.1, 0.2, 0.25, 0.5, 0.6, 0.8, 0.9]})
table['odds'] = table.probability/(1 - table.probability)
table
```
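And a quick check of the inverse relationship stated above (a small sketch, not in the original notebook):
```
# converting the odds column back into probabilities recovers the original column
table['probability_check'] = table.odds / (1 + table.odds)
table
```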
What is **e**? It is the base rate of growth shared by all continually growing processes:
```
# exponential function: e^1
np.exp(1)
```
What is a **(natural) log**? It gives you the time needed to reach a certain level of growth:
```
# time needed to grow 1 unit to 2.718 units
np.log(2.718)
```
It is also the **inverse** of the exponential function:
```
np.log(np.exp(5))
# add log-odds to the table
table['logodds'] = np.log(table.odds)
table
```
## Part 5: What is Logistic Regression?
**Linear regression:** continuous response is modeled as a linear combination of the features:
$$y = \beta_0 + \beta_1x$$
**Logistic regression:** log-odds of a categorical response being "true" (1) is modeled as a linear combination of the features:
$$\log \left({p\over 1-p}\right) = \beta_0 + \beta_1x$$
This is called the **logit function**.
Probability is sometimes written as pi:
$$\log \left({\pi\over 1-\pi}\right) = \beta_0 + \beta_1x$$
The equation can be rearranged into the **logistic function**:
$$\pi = \frac{e^{\beta_0 + \beta_1x}} {1 + e^{\beta_0 + \beta_1x}}$$
In other words:
- Logistic regression outputs the **probabilities of a specific class**
- Those probabilities can be converted into **class predictions**
The **logistic function** has some nice properties:
- Takes on an "s" shape
- Output is bounded by 0 and 1
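To see these properties numerically, here is a small sketch (not in the original notebook; the β values are made up purely for illustration):
```
import numpy as np

def logistic(x, beta_0=-4, beta_1=2):
    """Logistic function: maps a linear combination of features onto (0, 1)."""
    return np.exp(beta_0 + beta_1 * x) / (1 + np.exp(beta_0 + beta_1 * x))

x = np.linspace(-5, 10, 100)
probs = logistic(x)
print(probs.min(), probs.max())  # always strictly between 0 and 1
```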
We have covered how this works for **binary classification problems** (two response classes). But what about **multi-class classification problems** (more than two response classes)?
- Most common solution for classification models is **"one-vs-all"** (also known as **"one-vs-rest"**): decompose the problem into multiple binary classification problems
- **Multinomial logistic regression** can solve this as a single problem
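As a rough sketch of what those two strategies look like in scikit-learn (this is not part of the original notebook; it reuses the ``glass`` dataframe and predicts the multi-class ``glass_type`` column):
```
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X_multi = glass[['al']]
y_multi = glass.glass_type  # more than two response classes

# one-vs-rest: one binary logistic regression per class
ovr = OneVsRestClassifier(LogisticRegression()).fit(X_multi, y_multi)

# multinomial (softmax) logistic regression: a single model over all classes
mlr = LogisticRegression(multi_class='multinomial').fit(X_multi, y_multi)
```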
## Part 6: Interpreting Logistic Regression Coefficients
```
# plot the predicted probabilities again
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred_prob, color='red')
plt.xlabel('al')
plt.ylabel('household')
# compute predicted log-odds for al=2 using the equation
logodds = logreg.intercept_ + logreg.coef_[0] * 2
logodds
# convert log-odds to odds
odds = np.exp(logodds)
odds
# convert odds to probability
prob = odds/(1 + odds)
prob
# compute predicted probability for al=2 using the predict_proba method
logreg.predict_proba([[2]])[:, 1]
# examine the coefficient for al
list(zip(feature_cols, logreg.coef_[0]))
```
**Interpretation:** A 1 unit increase in 'al' is associated with a 4.18 unit increase in the log-odds of 'household'.
```
# increasing al by 1 (so that al=3) increases the log-odds by 4.18
logodds = 0.64722323 + 4.1804038614510901
odds = np.exp(logodds)
prob = odds/(1 + odds)
prob
# compute predicted probability for al=3 using the predict_proba method
logreg.predict_proba([[3]])[:, 1]
```
**Bottom line:** Positive coefficients increase the log-odds of the response (and thus increase the probability), and negative coefficients decrease the log-odds of the response (and thus decrease the probability).
```
# examine the intercept
logreg.intercept_
```
**Interpretation:** For an 'al' value of 0, the log-odds of 'household' is -7.71.
```
# convert log-odds to probability
logodds = logreg.intercept_
odds = np.exp(logodds)
prob = odds/(1 + odds)
prob
```
That makes sense from the plot above, because the probability of household=1 should be very low for such a low 'al' value.

Changing the $\beta_0$ value shifts the curve **horizontally**, whereas changing the $\beta_1$ value changes the **slope** of the curve.
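A quick way to convince yourself of that (a sketch with hand-picked β values, not fitted to any data):
```
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 4, 200)
for beta_0, beta_1 in [(-4, 2), (-8, 2), (-8, 4)]:  # illustrative values only
    p = np.exp(beta_0 + beta_1 * x) / (1 + np.exp(beta_0 + beta_1 * x))
    plt.plot(x, p, label='beta_0={}, beta_1={}'.format(beta_0, beta_1))
plt.xlabel('x')
plt.ylabel('predicted probability')
plt.legend()
```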
## Part 7: Using Logistic Regression with Categorical Features
Logistic regression can still be used with **categorical features**. Let's see what that looks like:
```
# create a categorical feature
glass['high_ba'] = np.where(glass.ba > 0.5, 1, 0)
```
Let's use Seaborn to draw the logistic curve:
```
# original (continuous) feature
sns.lmplot(x='ba', y='household', data=glass, ci=None, logistic=True)
# categorical feature
sns.lmplot(x='high_ba', y='household', data=glass, ci=None, logistic=True)
# categorical feature, with jitter added
sns.lmplot(x='high_ba', y='household', data=glass, ci=None, logistic=True, x_jitter=0.05, y_jitter=0.05)
# fit a logistic regression model
feature_cols = ['high_ba']
X = glass[feature_cols]
logreg.fit(X, y)
# examine the coefficient for high_ba
list(zip(feature_cols, logreg.coef_[0]))
```
**Interpretation:** Having a high 'ba' value is associated with a 4.43 unit increase in the log-odds of 'household' (as compared to a low 'ba' value).
## Part 8: Comparing Logistic Regression with Other Models
Advantages of logistic regression:
- Highly interpretable (if you remember how)
- Model training and prediction are fast
- No tuning is required (excluding regularization)
- Features don't need scaling
- Can perform well with a small number of observations
- Outputs well-calibrated predicted probabilities
Disadvantages of logistic regression:
- Presumes a linear relationship between the features and the log-odds of the response
- Performance is (generally) not competitive with the best supervised learning methods
- Can't automatically learn feature interactions
|
github_jupyter
|
# glass identification dataset
import pandas as pd
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/glass/glass.data'
col_names = ['id','ri','na','mg','al','si','k','ca','ba','fe','glass_type']
glass = pd.read_csv(url, names=col_names, index_col='id')
glass.sort_values('al', inplace=True)
glass.head()
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.set(font_scale=1.5)
sns.lmplot(x='al', y='ri', data=glass, ci=None)
# scatter plot using Pandas
glass.plot(kind='scatter', x='al', y='ri')
# equivalent scatter plot using Matplotlib
plt.scatter(glass.al, glass.ri)
plt.xlabel('al')
plt.ylabel('ri')
# fit a linear regression model
# make predictions for all values of X and add back to the original dataframe
# plot those predictions connected by a line
# put the plots together
# compute prediction for al=2 using the equation
linreg.intercept_ + linreg.coef_ * 2
# compute prediction for al=2 using the predict method
linreg.predict(2)
# examine coefficient for al
zip(feature_cols, linreg.coef_)
# increasing al by 1 (so that al=3) decreases ri by 0.0025
1.51699012 - 0.0024776063874696243
# compute prediction for al=3 using the predict method
linreg.predict(3)
# examine glass_type
glass.glass_type.value_counts().sort_index()
# types 1, 2, 3 are window glass
# types 5, 6, 7 are household glass
glass['household'] = glass.glass_type.map({1:0, 2:0, 3:0, 5:1, 6:1, 7:1})
glass.head()
plt.scatter(glass.al, glass.household)
plt.xlabel('al')
plt.ylabel('household')
# fit a linear regression model and store the predictions
feature_cols = ['al']
X = glass[feature_cols]
y = glass.household
linreg.fit(X, y)
glass['household_pred'] = linreg.predict(X)
# scatter plot that includes the regression line
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred, color='red')
plt.xlabel('al')
plt.ylabel('household')
# understanding np.where
import numpy as np
nums = np.array([5, 15, 8])
# np.where returns the first value if the condition is True, and the second value if the condition is False
np.where(nums > 10, 'big', 'small')
# transform household_pred to 1 or 0
glass['household_pred_class'] = np.where(glass.household_pred >= 0.5, 1, 0)
glass.head()
# plot the class predictions
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred_class, color='red')
plt.xlabel('al')
plt.ylabel('household')
# fit a logistic regression model and store the class predictions
# plot the class predictions
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred_class, color='red')
plt.xlabel('al')
plt.ylabel('household')
# store the predicted probabilites of class 1
glass['household_pred_prob'] = logreg.predict_proba(X)[:, 1]
# plot the predicted probabilities
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred_prob, color='red')
plt.xlabel('al')
plt.ylabel('household')
# examine some example predictions
print logreg.predict_proba(1)
print logreg.predict_proba(2)
print logreg.predict_proba(3)
# create a table of probability versus odds
table = pd.DataFrame({'probability':[0.1, 0.2, 0.25, 0.5, 0.6, 0.8, 0.9]})
table['odds'] = table.probability/(1 - table.probability)
table
# exponential function: e^1
np.exp(1)
# time needed to grow 1 unit to 2.718 units
np.log(2.718)
np.log(np.exp(5))
# add log-odds to the table
table['logodds'] = np.log(table.odds)
table
# plot the predicted probabilities again
plt.scatter(glass.al, glass.household)
plt.plot(glass.al, glass.household_pred_prob, color='red')
plt.xlabel('al')
plt.ylabel('household')
# compute predicted log-odds for al=2 using the equation
logodds = logreg.intercept_ + logreg.coef_[0] * 2
logodds
# convert log-odds to odds
odds = np.exp(logodds)
odds
# convert odds to probability
prob = odds/(1 + odds)
prob
# compute predicted probability for al=2 using the predict_proba method
logreg.predict_proba(2)[:, 1]
# examine the coefficient for al
zip(feature_cols, logreg.coef_[0])
# increasing al by 1 (so that al=3) increases the log-odds by 4.18
logodds = 0.64722323 + 4.1804038614510901
odds = np.exp(logodds)
prob = odds/(1 + odds)
prob
# compute predicted probability for al=3 using the predict_proba method
logreg.predict_proba(3)[:, 1]
# examine the intercept
logreg.intercept_
# convert log-odds to probability
logodds = logreg.intercept_
odds = np.exp(logodds)
prob = odds/(1 + odds)
prob
# create a categorical feature
glass['high_ba'] = np.where(glass.ba > 0.5, 1, 0)
# original (continuous) feature
sns.lmplot(x='ba', y='household', data=glass, ci=None, logistic=True)
# categorical feature
sns.lmplot(x='high_ba', y='household', data=glass, ci=None, logistic=True)
# categorical feature, with jitter added
sns.lmplot(x='high_ba', y='household', data=glass, ci=None, logistic=True, x_jitter=0.05, y_jitter=0.05)
# fit a logistic regression model
# examine the coefficient for high_ba
zip(feature_cols, logreg.coef_[0])
| 0.801042 | 0.987289 |
```
!pip install librosa
```
# Load the wav files and convert to stft
```
import librosa
s, sr = librosa.load("../input/denoise-data/train_clean_male.wav" , sr=None)
S = librosa.stft( s , n_fft=1024 , hop_length=512)
sn , sr = librosa.load("../input/denoise-data/train_dirty_male.wav" , sr=None)
X = librosa.stft(sn , n_fft=1024 , hop_length=512)
print("Input Clear voice data shape : ", S.shape)
print("Input Noise voice data shape : ", X.shape)
import numpy as np
S_abs = np.abs(S)
X_abs = np.abs(X)
S_in = np.swapaxes(S_abs , 0 , 1)
X_in = np.swapaxes(X_abs , 0 , 1)
#Import Libraries
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable
```
# Model with 2D CNNs
### Model data preparation
```
from collections import deque
data_in_clean =[]
data_in_dirty =[]
i_dirty = deque(maxlen=20)
i_dirty.extend( np.zeros(513 , dtype=np.float32) for i in range(20))
for x_in , s_in in zip(X_in , S_in):
i_dirty.append(x_in)
data_in_clean.append(s_in)
data_in_dirty.append(np.array(i_dirty))
from torch.utils.data import Dataset , DataLoader
class Wav_DataGenerator(Dataset):
def __init__(self , noise_wav , clean_wav , seed):
super(Wav_DataGenerator , self).__init__()
self.noise_wav = noise_wav
self.clean_wav = clean_wav
self.seed = torch.manual_seed(seed)
def __getitem__(self , index):
data_x = self.noise_wav[index]
data_y = self.clean_wav[index]
data_x = data_x[np.newaxis , : , : ]
return data_x , data_y
def __len__(self ):
return len(self.noise_wav)
```
## Define the data loaders
```
#define the data generator
train_data = Wav_DataGenerator(data_in_dirty , data_in_clean , 1264)
train_dataloader = DataLoader(train_data , batch_size=32 , shuffle=True)
```
## Define the 2D CNN model
```
class Net(nn.Module):
#This defines the structure of the NN.
def __init__(self , activation='relu'):
super(Net, self).__init__()
self.wav_size = 513
self.conv2d_1 = nn.Conv2d(in_channels=1 , out_channels=16 , kernel_size=(3,3) , padding=(1,1) )
self.conv2d_2 = nn.Conv2d(in_channels=16 , out_channels=32 , kernel_size=(3,3), padding=(1,1) , stride=(2,2) )
self.conv2d_3 = nn.Conv2d(in_channels=32 , out_channels=64 , kernel_size=(3,3), stride=(2,2))
#self.conv2d_4 = nn.Conv2d(in_channels=64 , out_channels=128 , kernel_size=(3,3), stride=(2,2))
self.flatten_size = 64*2*128*2
self.out_layer = nn.Linear(self.flatten_size , 513)
#select the activation function
if(activation=='relu'):
self.activation_fn = nn.ReLU()
if(activation=='logistic_sigmoid'):
self.activation_fn = nn.LogSigmoid()
def forward(self, x):
x = F.relu(self.conv2d_1(x))
x = F.relu(self.conv2d_2(x))
x = F.relu(self.conv2d_3(x))
#x = F.relu(self.conv2d_4(x))
x = x.view(-1,self.flatten_size)
out = self.activation_fn(self.out_layer(x))
        #The output layer predicts a 513-bin magnitude spectrum frame (regression, not classification).
return out
#model weight initialization function
def init_weights_normal(m):
if type(m) == nn.Linear:
torch.nn.init.normal_(m.weight , mean=0 , std=0.01)
m.bias.data.fill_(0)
def init_weights_xavier(m):
if type(m) == nn.Linear:
torch.nn.init.xavier_normal_(m.weight , gain=0.8)
m.bias.data.fill_(0)
def init_weights_kaiman(m):
if type(m) == nn.Linear:
torch.nn.init.kaiming_normal_(m.weight)
m.bias.data.fill_(0)
```
## initialize the model
```
#define the model
device="cuda:0" if torch.cuda.is_available() else "cpu"
Denoise_Model = Net()
Denoise_Model.apply(init_weights_xavier)
#load the model to gpu if available
Denoise_Model.to(device)
class SNR_loss(nn.Module):
def __init__(self):
super(SNR_loss , self).__init__()
def forward(self , x , target):
sum_signal = torch.sum(torch.square(x), 1)
dif_noise = torch.sum(torch.square(x-target) , 1)
log_base = -10*torch.log10(sum_signal / (dif_noise ))
log_out = torch.sum(log_base , 0)
return log_out
#define the model optimizer and loss
optimizer = optim.Adam(Denoise_Model.parameters() , lr=0.001)
#SNR loss function
criterion = SNR_loss()
```
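Written out, the criterion defined above is a negative signal-to-noise ratio summed over the batch, with the network output $\hat{s}$ in the numerator and its deviation from the clean target $s$ as the noise term:

$$\mathcal{L}(\hat{s}, s) = \sum_{i} -10\,\log_{10}\frac{\sum_{j}\hat{s}_{ij}^{2}}{\sum_{j}\big(\hat{s}_{ij}-s_{ij}\big)^{2}}$$

Minimizing this loss therefore pushes the SNR of the denoised frames up.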
## Model Training
```
#training the model
epoch = 350
model_train_loss = []
for i_epoch in range(epoch):
epoch_loss = 0
for batch_idx, (data, target) in enumerate(train_dataloader):
data, target = data.to(device) , target.to(device)
        #Variables in Pytorch are differentiable.
data, target = Variable(data), Variable(target)
#This will zero out the gradients for this batch.
optimizer.zero_grad()
output = Denoise_Model(data)
        # Calculate the loss: the negative SNR between the denoised output and the clean target
loss =criterion(output, target)
#dloss/dx for every Variable
loss.backward()
#to do a one-step update on our parameter.
optimizer.step()
epoch_loss += loss.detach().to('cpu').item()
#Print out the loss periodically.
if batch_idx % 100 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
i_epoch, batch_idx * len(data), len(train_dataloader.dataset),
100. * batch_idx / len(train_dataloader), loss.detach().item()))
tn , sr = librosa.load("../input/denoise-data/test_x_01.wav" , sr=None)
X = librosa.stft(tn , n_fft=1024 , hop_length=512)
T_abs = np.abs(X)
T_in = np.swapaxes(T_abs , 0 , 1)
data_in_test =[]
i_test = deque(maxlen=20)
i_test.extend( np.zeros(513 , dtype=np.float32) for i in range(20))
for t_in in T_in:
i_test.append(t_in)
data_in_test.append(np.array(i_test))
data_in_test = np.array(data_in_test)[:,np.newaxis,:,:]
T_in_tensor = torch.tensor(data_in_test , dtype=torch.float32)
#inference the model
T_out_tensor = Denoise_Model(T_in_tensor.to(device))
T_out = T_out_tensor.detach().to("cpu").numpy()
T_out = np.swapaxes(T_out , 0 , 1)
#obtain the phase information from the noisy test signal
T_phase = X / T_abs
#do Hadamard product
S_hat = np.multiply(T_phase,T_out)
#create the output sound file from the test signal stft
import soundfile as sf
iStftMat = librosa.istft(S_hat, hop_length=512)
sf.write("testOut_2d.wav", iStftMat , sr)
```
## Play audio
```
import IPython
IPython.display.Audio("testOut_2d.wav")
```
|
github_jupyter
|
!pip install librosa
import librosa
s, sr = librosa.load("../input/denoise-data/train_clean_male.wav" , sr=None)
S = librosa.stft( s , n_fft=1024 , hop_length=512)
sn , sr = librosa.load("../input/denoise-data/train_dirty_male.wav" , sr=None)
X = librosa.stft(sn , n_fft=1024 , hop_length=512)
print("Input Clear voice data shape : ", S.shape)
print("Input Noise voice data shape : ", X.shape)
import numpy as np
S_abs = np.abs(S)
X_abs = np.abs(X)
S_in = np.swapaxes(S_abs , 0 , 1)
X_in = np.swapaxes(X_abs , 0 , 1)
#Import Libraries
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable
from collections import deque
data_in_clean =[]
data_in_dirty =[]
i_dirty = deque(maxlen=20)
i_dirty.extend( np.zeros(513 , dtype=np.float32) for i in range(20))
for x_in , s_in in zip(X_in , S_in):
i_dirty.append(x_in)
data_in_clean.append(s_in)
data_in_dirty.append(np.array(i_dirty))
from torch.utils.data import Dataset , DataLoader
class Wav_DataGenerator(Dataset):
def __init__(self , noise_wav , clean_wav , seed):
super(Wav_DataGenerator , self).__init__()
self.noise_wav = noise_wav
self.clean_wav = clean_wav
self.seed = torch.manual_seed(seed)
def __getitem__(self , index):
data_x = self.noise_wav[index]
data_y = self.clean_wav[index]
data_x = data_x[np.newaxis , : , : ]
return data_x , data_y
def __len__(self ):
return len(self.noise_wav)
#define the data generator
train_data = Wav_DataGenerator(data_in_dirty , data_in_clean , 1264)
train_dataloader = DataLoader(train_data , batch_size=32 , shuffle=True)
class Net(nn.Module):
#This defines the structure of the NN.
def __init__(self , activation='relu'):
super(Net, self).__init__()
self.wav_size = 513
self.conv2d_1 = nn.Conv2d(in_channels=1 , out_channels=16 , kernel_size=(3,3) , padding=(1,1) )
self.conv2d_2 = nn.Conv2d(in_channels=16 , out_channels=32 , kernel_size=(3,3), padding=(1,1) , stride=(2,2) )
self.conv2d_3 = nn.Conv2d(in_channels=32 , out_channels=64 , kernel_size=(3,3), stride=(2,2))
#self.conv2d_4 = nn.Conv2d(in_channels=64 , out_channels=128 , kernel_size=(3,3), stride=(2,2))
self.flatten_size = 64*2*128*2
self.out_layer = nn.Linear(self.flatten_size , 513)
#select the activation function
if(activation=='relu'):
self.activation_fn = nn.ReLU()
if(activation=='logistic_sigmoid'):
self.activation_fn = nn.LogSigmoid()
def forward(self, x):
x = F.relu(self.conv2d_1(x))
x = F.relu(self.conv2d_2(x))
x = F.relu(self.conv2d_3(x))
#x = F.relu(self.conv2d_4(x))
x = x.view(-1,self.flatten_size)
out = self.activation_fn(self.out_layer(x))
#Softmax gets probabilities.
return out
#model weight initialization function
def init_weights_normal(m):
if type(m) == nn.Linear:
torch.nn.init.normal_(m.weight , mean=0 , std=0.01)
m.bias.data.fill_(0)
def init_weights_xavier(m):
if type(m) == nn.Linear:
torch.nn.init.xavier_normal_(m.weight , gain=0.8)
m.bias.data.fill_(0)
def init_weights_kaiman(m):
if type(m) == nn.Linear:
torch.nn.init.kaiming_normal_(m.weight)
m.bias.data.fill_(0)
#define the model
device="cuda:0" if torch.cuda.is_available() else "cpu"
Denoise_Model = Net()
Denoise_Model.apply(init_weights_xavier)
#load the model to gpu if available
Denoise_Model.to(device)
class SNR_loss(nn.Module):
def __init__(self):
super(SNR_loss , self).__init__()
def forward(self , x , target):
sum_signal = torch.sum(torch.square(x), 1)
dif_noise = torch.sum(torch.square(x-target) , 1)
log_base = -10*torch.log10(sum_signal / (dif_noise ))
log_out = torch.sum(log_base , 0)
return log_out
#define the model optimizer and loss
optimizer = optim.Adam(Denoise_Model.parameters() , lr=0.001)
#SNR loss function
criterion = SNR_loss()
#training the model
epoch = 350
model_train_loss = []
for i_epoch in range(epoch):
epoch_loss = 0
for batch_idx, (data, target) in enumerate(train_dataloader):
data, target = data.to(device) , target.to(device)
#Variables in Pytorch are differenciable.
data, target = Variable(data), Variable(target)
#This will zero out the gradients for this batch.
optimizer.zero_grad()
output = Denoise_Model(data)
# Calculate the loss The negative log likelihood loss. It is useful to train a classification problem with C classes.
loss =criterion(output, target)
#dloss/dx for every Variable
loss.backward()
#to do a one-step update on our parameter.
optimizer.step()
epoch_loss += loss.detach().to('cpu').item()
#Print out the loss periodically.
if batch_idx % 100 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
i_epoch, batch_idx * len(data), len(train_dataloader.dataset),
100. * batch_idx / len(train_dataloader), loss.detach().item()))
tn , sr = librosa.load("../input/denoise-data/test_x_01.wav" , sr=None)
X = librosa.stft(tn , n_fft=1024 , hop_length=512)
T_abs = np.abs(X)
T_in = np.swapaxes(T_abs , 0 , 1)
data_in_test =[]
i_test = deque(maxlen=20)
i_test.extend( np.zeros(513 , dtype=np.float32) for i in range(20))
for t_in in T_in:
i_test.append(t_in)
data_in_test.append(np.array(i_test))
data_in_test = np.array(data_in_test)[:,np.newaxis,:,:]
T_in_tensor = torch.tensor(data_in_test , dtype=torch.float32)
#inference the model
T_out_tensor = Denoise_Model(T_in_tensor.to(device))
T_out = T_out_tensor.detach().to("cpu").numpy()
T_out = np.swapaxes(T_out , 0 , 1)
#obtain the pahse information from the signal
T_phase = X / T_abs
#do Hadamard product
S_hat = np.multiply(T_phase,T_out)
#create the output sound file from the test signal stft
import soundfile as sf
iStftMat = librosa.istft(S_hat, hop_length=512)
sf.write("testOut_2d.wav", iStftMat , sr)
import IPython
IPython.display.Audio("testOut_2d.wav")
| 0.710929 | 0.731514 |
#### **Title**: PolyDraw
**Description**: A linked streams example demonstrating how to use the PolyDraw stream.
**Dependencies**: Bokeh
**Backends**: [Bokeh](./PolyDraw.ipynb)
```
import holoviews as hv
from holoviews import opts, streams
hv.extension('bokeh')
```
The ``PolyDraw`` stream adds a bokeh tool to the source plot, which allows drawing, dragging and deleting polygons and making the drawn data available to Python. The tool supports the following actions:
**Add patch/multi-line**
Double tap to add the first vertex, then use tap to add each subsequent vertex, to finalize the draw action double tap to insert the final vertex or press the ESC key to stop drawing.
**Move patch/multi-line**
Tap and drag an existing patch/multi-line; the point will be dropped once you let go of the mouse button.
**Delete patch/multi-line**
Tap a patch/multi-line to select it then press BACKSPACE key while the mouse is within the plot area.
### Properties
* **``drag``** (boolean): Whether to enable dragging of paths and polygons
* **``empty_value``**: Value to add to non-coordinate columns when adding new path or polygon
* **``num_objects``** (int): Maximum number of paths or polygons to draw before deleting the oldest object
* **``show_vertices``** (boolean): Whether to show the vertices of the paths or polygons
* **``styles``** (dict): Dictionary of style properties (e.g. line_color, line_width etc.) to apply to each path and polygon. If a value is a list, its entries are cycled across the drawn glyphs.
* **``vertex_style``** (dict): Dictionary of style properties (e.g. fill_color, line_width etc.) to apply to vertices if ``show_vertices`` enabled
As a simple example we will create ``Path`` and ``Polygons`` elements and attach each to a ``PolyDraw`` stream. We will also enable the ``drag`` option on the stream to allow dragging of existing glyphs. Additionally we can enable the ``show_vertices`` option which shows the vertices of the drawn polygons/lines and adds the ability to snap to them. Finally the ``num_objects`` option limits the number of lines/polygons that can be drawn by dropping the first glyph when the limit is exceeded.
```
path = hv.Path([[(1, 5), (9, 5)]])
poly = hv.Polygons([[(2, 2), (5, 8), (8, 2)]])
path_stream = streams.PolyDraw(source=path, drag=True, show_vertices=True)
poly_stream = streams.PolyDraw(source=poly, drag=True, num_objects=4,
show_vertices=True, styles={
'fill_color': ['red', 'green', 'blue']
})
(path * poly).opts(
opts.Path(color='red', height=400, line_width=5, width=400),
opts.Polygons(fill_alpha=0.3, active_tools=['poly_draw']))
```
<center><img src="https://assets.holoviews.org/gifs/examples/streams/bokeh/poly_draw.gif" width=400></center>
Whenever the data source is edited the data is synced with Python, both in the notebook and when deployed on the bokeh server. The data is made available as a dictionary of columns:
```
path_stream.data
```
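If you prefer the drawn paths as tidy tables, one option (a sketch; it assumes at least one path has been drawn so that the ``xs``/``ys`` columns are populated) is to build a DataFrame per path from that dictionary:
```
import pandas as pd

# one DataFrame per drawn path, built from the column dictionary shown above
paths = [
    pd.DataFrame({'x': xs, 'y': ys})
    for xs, ys in zip(path_stream.data['xs'], path_stream.data['ys'])
]
```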
Alternatively we can use the ``element`` property to get an Element containing the returned data:
```
path_stream.element * poly_stream.element
```
|
github_jupyter
|
import holoviews as hv
from holoviews import opts, streams
hv.extension('bokeh')
path = hv.Path([[(1, 5), (9, 5)]])
poly = hv.Polygons([[(2, 2), (5, 8), (8, 2)]])
path_stream = streams.PolyDraw(source=path, drag=True, show_vertices=True)
poly_stream = streams.PolyDraw(source=poly, drag=True, num_objects=4,
show_vertices=True, styles={
'fill_color': ['red', 'green', 'blue']
})
(path * poly).opts(
opts.Path(color='red', height=400, line_width=5, width=400),
opts.Polygons(fill_alpha=0.3, active_tools=['poly_draw']))
path_stream.data
path_stream.element * poly_stream.element
| 0.342572 | 0.975296 |
# Turning a problem into math on a vector space
```
import tensorflow_hub as hub
import numpy as np
# load a pre-trained embedding
# Token based text embedding trained on Chinese Google News 100B corpus.
# https://tfhub.dev/google/nnlm-zh-dim50/2
embed = hub.load("https://tfhub.dev/google/nnlm-zh-dim50/2")
embed(["ๅญฆ็"]) # turn a string into a tensor
def cos_sim(vector_a, vector_b):
"""
    Compute the cosine similarity between two vectors.
    :param vector_a: vector a
    :param vector_b: vector b
    :return: sim
"""
vector_a = np.mat(vector_a)
vector_b = np.mat(vector_b)
num = float(vector_a * vector_b.T)
denom = np.linalg.norm(vector_a) * np.linalg.norm(vector_b)
cos = num / denom
sim = 0.5 + 0.5 * cos
return sim
def embeddings_cos_sim(ab):
embeddings = embed(ab)
B=embeddings.numpy()[1]
A=embeddings.numpy()[0]
print(ab, cos_sim(A, B))
embeddings_cos_sim(["็ซ","็"])
embeddings_cos_sim(["ๅปบ็ญ่ฎพ่ฎก","็ฉบ้ด่ฎพ่ฎก"])
```
# Feature engineering and machine learning
```
from sklearn import datasets
import matplotlib.pyplot as plt
from sklearn import tree
# The Iris dataset is a classic classification benchmark,
# collected by Fisher in 1936 (also known as the iris flower dataset).
# It is a multivariate dataset with 150 samples,
# split into 3 classes of 50 samples each, and every sample has 4 attributes.
# The task: predict which of the three species (Setosa, Versicolour, Virginica)
# a flower belongs to from its sepal length, sepal width, petal length and petal width.
# Load the dataset
iris = datasets.load_iris()
#print(iris)
iris_data=iris['data']
#print(iris_data[0])
iris_label=iris['target']
#print(iris_label[0])
iris_target_name=iris['target_names']
print(iris_target_name)
X=np.array(iris_data)
Y=np.array(iris_label)
# print(X[0],iris_target_name[0])
# Train a decision tree classifier
model=tree.DecisionTreeClassifier(max_depth=3)
# Start training
model.fit(X,Y)
# Predict which class a new input belongs to, e.g.:
# print('The predicted class is', iris_target_name[model.predict([[5,3,1,0.1]])[0]])
k=[1,2,3,4]
k[0]
pred = model.predict([ [1,3,0.5,6] ] )[0]
print('The predicted class is', iris_target_name[pred])
```
# Deep learning and representation learning: everything becomes a vector
```
# Euclidean-distance-based similarity (to compare with cosine similarity)
def dist_sim(vector_a, vector_b):
vector_a = np.mat(vector_a)
vector_b = np.mat(vector_b)
dist = np.linalg.norm(vector_a - vector_b)
    sim = 1.0 / (1.0 + dist) # normalize into (0, 1]
return sim
dist_sim(A,B)
```
# Machine-learning classification
## Finding the KOLs (key opinion leaders) in a group
### Comparing Euclidean distance with cosine similarity
```
#Load the data
import pandas as pd
df = pd.read_csv("data/students.csv")
#Look at the first few rows
df.head()
#Select the columns we need
student=df.loc[1,['Name','Email','School','Major','grade','Interest','Interest level','Code']].values.tolist()
#Load the pre-trained embedding model
import tensorflow_hub as hub
embed = hub.load("model/nnlm-zh-dim50")
# Quick test
embeddings = embed(["".join(student)])[0]
embeddings.numpy()
#Turn every student's features into a single dense vector
students=[]
for i in range(len(df)):
#print(i)
student=df.loc[i,['Email','School','Major','grade','Interest','Interest level','Code']].values.tolist()
students.append(embed(["".join(student)])[0].numpy())
students=np.array(students)
#Each student is now represented by a dense vector
students
#Use scikit-learn's cosine similarity implementation
from sklearn.metrics.pairwise import cosine_similarity
sim=cosine_similarity(students)
#Check the similarity between the first and the second student
sim[0][1]
#For every student, find the most similar classmate; only keep matches with similarity > 0.6
count_students={}
for i in range(len(students)):
others=[]
for j in range(len(students)):
if i!=j:
others.append({
"index":j,
"score":sim[i][j]
})
others=sorted(others, key=lambda x:x["score"],reverse=True)
if others[0]['score']>0.6:
print(df.loc[i,'Name'],df.loc[others[0]['index'],'Name'],others[0]['score'])
if not df.loc[others[0]['index'],'Name'] in count_students:
count_students[df.loc[others[0]['index'],'Name']]=0
count_students[df.loc[others[0]['index'],'Name']]+=others[0]['score']
# Students unlikely to be KOLs (their similarity to every other student is low)
for i in range(len(students)):
if not df.loc[i,'Name'] in count_students:
print(df.loc[i,'Name'])
# Students most likely to be KOLs
sorted(count_students.items(), key=lambda x:x[1],reverse=True)
```
# Clustering algorithms
### Clustering the students into groups
```
#Import the libraries and run a quick test
from sklearn.cluster import KMeans,DBSCAN,Birch
import numpy as np
X = np.array([[1,2, 2], [1,2, 4], [1,2, 0],[4, 2,2], [4,2, 4], [4,1, 0]])
kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
kmeans.labels_ #the cluster label assigned to each original sample
#Set the number of clusters
model = KMeans(n_clusters =2)
#Fit on the student vectors
model.fit(students)
for i in range(len(model.labels_)):
if model.labels_[i]==0:
print(df.loc[i,'Name'])
#Try another algorithm: DBSCAN
model=DBSCAN(eps=0.11, min_samples=2).fit(students)
print(model.labels_)
#Try another algorithm: Birch
model = Birch(n_clusters=2)
model.fit(students)
print(model.labels_)
# Visualize the clusters the model has learned
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
tsne = TSNE(n_components=2)
decomposition_data = tsne.fit_transform(students)
x = []
y = []
for i in decomposition_data:
x.append(i[0])
y.append(i[1])
plt.figure(figsize=(10, 10))
ax = plt.axes()
plt.scatter(x, y, c=model.labels_, marker="x")
plt.xticks(())
plt.yticks(())
plt.show()
```
# Deep learning hello world
### Handwritten digit classification
```
%load_ext tensorboard
#Import the required libraries
import cv2
from matplotlib import pyplot as plt
import tensorflow as tf
import datetime
# Download the MNIST dataset
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# Dataset dimensions
print(x_test.shape,y_test.shape)
#Look at one sample
plt.imshow(x_test[1882])
plt.show()
#Model definition
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(10, activation='relu'),
#tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
#Start training
model.fit(x_train, y_train, epochs=5)
#Evaluate the model
model.evaluate(x_test, y_test, verbose=2)
```
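As a quick sanity check of the trained network (a small sketch added here), predict the digit for the test image displayed earlier and compare it with its label:
```
import numpy as np
# class probabilities for a single test image; argmax gives the predicted digit
pred = model.predict(x_test[1882:1883])
print(np.argmax(pred[0]), y_test[1882])
```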
# Color classification v1.0
### Preparing the data with Pandas
```
#Create a dataframe with Pandas
import pandas as pd
dataframe=pd.read_json("data/colorData.json", orient="records")
#Preview the first few rows
dataframe.head()
#Check the data types
dataframe.dtypes
#The label column needs to be converted to integer codes
dataframe['label'] = pd.Categorical(dataframe['label'])
dataframe['label'] = dataframe.label.cat.codes
# Map an integer label code back to its color name
# (the codes above follow pandas' sorted category order, so the categories can be indexed directly)
def get_label_name(label=0):
    labels=pd.Categorical(['brown-ish','blue-ish', 'green-ish', 'grey-ish', 'orange-ish',
           'pink-ish', 'purple-ish', 'red-ish', 'yellow-ish'])
    return labels.categories.tolist()[label]
# Quick test
get_label_name(3)
#Check the label distribution
dataframe.loc[:, 'label'].value_counts()
#Keep only the columns used for training
dataframe = dataframe[['r','g','b','label']]
```
### Building the tf.data datasets
```
#Split into train / validation / test sets
from sklearn.model_selection import train_test_split
df=dataframe.copy()
train, test = train_test_split(df, test_size=0.1)
train, val = train_test_split(train, test_size=0.1)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
# A utility method to create a tf.data dataset from a Pandas dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
x = dataframe.copy()
x=x.astype('float64')
y = x.pop('label')
ds = tf.data.Dataset.from_tensor_slices((x.values, y.values))
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
return ds
import datetime
import tensorflow as tf
#Hyperparameters
#Batch size: the number of samples used in one training step.
BATCH_SIZE=64
#Dropout rate: the fraction of a layer's neurons that are dropped during training.
DROPOUT_RATE=0.1045
#Epochs: one epoch is a full forward and backward pass over the entire input data.
EPOCHS=100
log_dir = "logs/fit/DROPOUT_RATE_" + str(DROPOUT_RATE)+"_"+datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
train_ds=df_to_dataset(train,batch_size=BATCH_SIZE)
val_ds=df_to_dataset(val,batch_size=BATCH_SIZE)
test_ds=df_to_dataset(test,batch_size=BATCH_SIZE)
for f in train_ds.take(1):
print(f)
```
### Model
```
model = tf.keras.Sequential([
tf.keras.layers.Dense(12, input_shape=(3,),activation='softplus'),
tf.keras.layers.Dense(48, activation='relu'),
tf.keras.layers.Dropout(DROPOUT_RATE),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(DROPOUT_RATE),
tf.keras.layers.Dense(48, activation='relu'),
tf.keras.layers.Dense(9, activation='softmax')
])
#optimizer=tf.keras.optimizers.SGD(learning_rate=0.25)
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.summary()
model.fit(train_ds,
validation_data=val_ds,
epochs=EPOCHS,
callbacks=[tensorboard_callback])
%tensorboard --logdir logs/fit
test_loss, test_acc = model.evaluate(test_ds, verbose=2)
print('\nTest accuracy:', test_acc)
```
### Prediction
```
probability_model = tf.keras.Sequential([model,
tf.keras.layers.Softmax()])
import numpy as np
predictions = probability_model.predict(np.array([[110,2,25]]))
predictions[0]
get_label_name(np.argmax(predictions[0]))
```
# RGB to HSV conversion
### rgb -> hsv
```
import cv2
# opencv
import numpy as np
from matplotlib import pyplot as plt
#Create a black image
img = np.zeros((28,32), np.float32)
print(img.shape)
img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
print(img.shape)
plt.imshow(img)
plt.show()
rgb_to_hsv([110,2,25])
'''
HSV color space convention: H in [0, 360], S in [0, 1], V in [0, 1]
In Photoshop, H is 0-360, S is 0-1, V (B) is 0-1
In OpenCV, H is 0-180, S is 0-255, V is 0-255
'''
# h:0-360 , s:0-255, v:0-255
# r:0-255, g:0-255, b:0-255
def rgb_to_hsv(rgb=[]):
img=np.array([[rgb]],np.uint8)
#print(img)
#print(img)
plt.imshow(img)
plt.show()
img_hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
img_hsv[0][0]*=2
#print(img_hsv)
return img_hsv
def hsv_to_rgb(hsv=[]):
hsv[0]/=2
img_hsv=np.array([[hsv]],np.uint8)
img_rgb = cv2.cvtColor(img_hsv, cv2.COLOR_HSV2RGB)
plt.imshow(img_rgb)
plt.show()
return img_rgb
#rgb_to_hsv([250,0,0])
#hsv_to_rgb([360,255,255])
# Starting from h=0, walk around the hue wheel and take a pair of colors every 18 degrees as a color scheme
for i in range(0,360,18):
#print(i)
a=hsv_to_rgb([i,255,255])
b=hsv_to_rgb([i+18,255,255])
print(a,b)
#OpenCV reads images as BGR by default
img=cv2.imread('img/test.jpg',cv2.IMREAD_COLOR)
img=cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.show()
```
|
github_jupyter
|
import tensorflow_hub as hub
import numpy as np
# load a pre-trained embedding
# Token based text embedding trained on Chinese Google News 100B corpus.
# https://tfhub.dev/google/nnlm-zh-dim50/2
embed = hub.load("https://tfhub.dev/google/nnlm-zh-dim50/2")
embed(["ๅญฆ็"]) # turn a string into a tensor
def cos_sim(vector_a, vector_b):
"""
่ฎก็ฎไธคไธชๅ้ไน้ด็ไฝๅผฆ็ธไผผๅบฆ
:param vector_a: ๅ้ a
:param vector_b: ๅ้ b
:return: sim
"""
vector_a = np.mat(vector_a)
vector_b = np.mat(vector_b)
num = float(vector_a * vector_b.T)
denom = np.linalg.norm(vector_a) * np.linalg.norm(vector_b)
cos = num / denom
sim = 0.5 + 0.5 * cos
return sim
def embeddings_cos_sim(ab):
embeddings = embed(ab)
B=embeddings.numpy()[1]
A=embeddings.numpy()[0]
print(ab, cos_sim(A, B))
embeddings_cos_sim(["็ซ","็"])
embeddings_cos_sim(["ๅปบ็ญ่ฎพ่ฎก","็ฉบ้ด่ฎพ่ฎก"])
from sklearn import datasets
import matplotlib.pyplot as plt
from sklearn import tree
# Irisๆฐๆฎ้ๆฏๅธธ็จ็ๅ็ฑปๅฎ้ชๆฐๆฎ้๏ผ
# ็ฑFisher, 1936ๆถ้ๆด็ใIrisไน็งฐ้ธขๅฐพ่ฑๅๆฐๆฎ้๏ผ
# ๆฏไธ็ฑปๅค้ๅ้ๅๆ็ๆฐๆฎ้ใๆฐๆฎ้ๅ
ๅซ150ไธชๆฐๆฎ้๏ผ
# ๅไธบ3็ฑป๏ผๆฏ็ฑป50ไธชๆฐๆฎ๏ผๆฏไธชๆฐๆฎๅ
ๅซ4ไธชๅฑๆงใ
# ๅฏ้่ฟ่ฑ่ผ้ฟๅบฆ๏ผ่ฑ่ผๅฎฝๅบฆ๏ผ่ฑ็ฃ้ฟๅบฆ๏ผ่ฑ็ฃๅฎฝๅบฆ4ไธชๅฑๆง้ขๆต้ธขๅฐพ่ฑๅๅฑไบ๏ผSetosa๏ผVersicolour๏ผVirginica๏ผไธไธช็ง็ฑปไธญ็ๅชไธ็ฑปใ
#่ฝฝๅ
ฅๆฐๆฎ้
iris = datasets.load_iris()
#print(iris)
iris_data=iris['data']
#print(iris_data[0])
iris_label=iris['target']
#print(iris_label[0])
iris_target_name=iris['target_names']
print(iris_target_name)
X=np.array(iris_data)
Y=np.array(iris_label)
# print(X[0],iris_target_name[0])
# # #่ฎญ็ป,ๅณ็ญๆ
model=tree.DecisionTreeClassifier(max_depth=3)
# # ๅผๅง่ฎญ็ป
model.fit(X,Y)
# #่ฟ้้ขๆตๅฝๅ่พๅ
ฅ็ๅผ็ๆๅฑๅ็ฑป
# print('้ขๆต็ฑปๅซๆฏ',iris_target_name[clf.predict([[5,3,1,0.1]])[0]])
k=[1,2,3,4]
k[0]
model.predict([ [1,3,0.5,6] ] )[0]
print('้ขๆต็ฑปๅซๆฏ',iris_target_name[0])
#ๆฌงๆฐ่ท็ฆปๅไฝๅผฆ็ธไผผๅบฆ
def dist_sim(vector_a, vector_b):
vector_a = np.mat(vector_a)
vector_b = np.mat(vector_b)
dist = np.linalg.norm(vector_a - vector_b)
sim = 1.0 / (1.0 + dist) #ๅฝไธๅ
return sim
dist_sim(A,B)
#ๅ ่ฝฝๆฐๆฎ
import pandas as pd
df = pd.read_csv("data/students.csv")
#ๆฅ็ไธๆฐๆฎ็ๅๅ ๆก
df.head()
#ๅ้่ฆ็ๅ
student=df.loc[1,['Name','Email','School','Major','grade','Interest','Interest level','Code']].values.tolist()
#ๅ ่ฝฝ ้ข่ฎญ็ปๆจกๅ
import tensorflow_hub as hub
embed = hub.load("model/nnlm-zh-dim50")
# ๆต่ฏไธ
embeddings = embed(["".join(student)])[0]
embeddings.numpy()
#ๆ็นๅพ่ฝฌๆ ็จ ๅฏๅ้
students=[]
for i in range(len(df)):
#print(i)
student=df.loc[i,['Email','School','Major','grade','Interest','Interest level','Code']].values.tolist()
students.append(embed(["".join(student)])[0].numpy())
students=np.array(students)
#ๆฏไฝๅๅญฆ็็จ ๅฏๅ้
students
#ไฝฟ็จscikit learn็ไฝๅผฆ็ธไผผๅบฆ่ฎก็ฎๆนๆณ
from sklearn.metrics.pairwise import cosine_similarity
sim=cosine_similarity(students)
#ๆฅ็ไธ็ฌฌไธไฝไธ็ฌฌไบไฝๅๅญฆ็็ธไผผๅบฆ
sim[0][1]
#ไธบๆฏไฝๅๅญฆ่ฎก็ฎไธไปๆ็ธไผผ็ไธไฝๅๅญฆ๏ผๅชๅ็ธไผผๅบฆๅคงไบ0.6็็ฌฌไธไฝๅๅญฆ
count_students={}
for i in range(len(students)):
others=[]
for j in range(len(students)):
if i!=j:
others.append({
"index":j,
"score":sim[i][j]
})
others=sorted(others, key=lambda x:x["score"],reverse=True)
if others[0]['score']>0.6:
print(df.loc[i,'Name'],df.loc[others[0]['index'],'Name'],others[0]['score'])
if not df.loc[others[0]['index'],'Name'] in count_students:
count_students[df.loc[others[0]['index'],'Name']]=0
count_students[df.loc[others[0]['index'],'Name']]+=others[0]['score']
# ไธๅคชๅฏ่ฝๆฏkol็ๅๅญฆ๏ผไธๅ
ถไปๅๅญฆ็ธไผผๆง่พไฝ)
for i in range(len(students)):
if not df.loc[i,'Name'] in count_students:
print(df.loc[i,'Name'])
# ๆๆๅฏ่ฝๆฏkol็ๅๅญฆ
sorted(count_students.items(), key=lambda x:x[1],reverse=True)
#ๅผๅ
ฅๅบ๏ผๅนถๆต่ฏ
from sklearn.cluster import KMeans,DBSCAN,Birch
import numpy as np
X = np.array([[1,2, 2], [1,2, 4], [1,2, 0],[4, 2,2], [4,2, 4], [4,1, 0]])
kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
kmeans.labels_ #่พๅบๅๅงๆฐๆฎ็่็ฑปๅ็ๆ ็ญพๅผ
#่ฎพ็ฝฎๆฐ้
model = KMeans(n_clusters =2)
#่ฎญ็ป
model.fit(students)
for i in range(len(model.labels_)):
if model.labels_[i]==0:
print(df.loc[i,'Name'])
#ๆขไธ็ง
model=DBSCAN(eps=0.11, min_samples=2).fit(students)
print(model.labels_)
#ๆขไธ็ง
model = Birch(n_clusters=2)
model.fit(students)
print(model.labels_)
# ๅฏ่งๅๆฅ็ๆจกๅๅญฆไน ๅฐ็ๅ็ฑป
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
tsne = TSNE(n_components=2)
decomposition_data = tsne.fit_transform(students)
x = []
y = []
for i in decomposition_data:
x.append(i[0])
y.append(i[1])
plt.figure(figsize=(10, 10))
ax = plt.axes()
plt.scatter(x, y, c=model.labels_, marker="x")
plt.xticks(())
plt.yticks(())
plt.show()
%load_ext tensorboard
#ๅผๅ
ฅ็ธๅ
ณๅบ
import cv2
from matplotlib import pyplot as plt
import tensorflow as tf
import datetime
# ไธ่ฝฝmnistๆฐๆฎ้
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# ๆฐๆฎ้็ปดๅบฆ
print(x_test.shape,y_test.shape)
#ๆฅ็ไธๆฐๆฎ้
plt.imshow(x_test[1882])
plt.show()
#ๆจกๅๅฎไน
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(10, activation='relu'),
#tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
#่ฎญ็ปๅผๅง
model.fit(x_train, y_train, epochs=5)
#่ฏไผฐๆจกๅ
model.evaluate(x_test, y_test, verbose=2)
#ไฝฟ็จ Pandas ๅๅปบไธไธช dataframe
import pandas as pd
dataframe=pd.read_json("data/colorData.json", orient="records")
#้ข่งไธๅ้ขๅ ๆกๆฐๆฎ
dataframe.head()
#ๆฅ็ไธๆฐๆฎ็ฑปๅ
dataframe.dtypes
#label้่ฆ่ฝฌๆint
dataframe['label'] = pd.Categorical(dataframe['label'])
dataframe['label'] = dataframe.label.cat.codes
# ่ทๅๆ ็ญพๅ็งฐ
#code--label
def get_label_name(label=0):
labels=pd.Categorical(['brown-ish','blue-ish', 'green-ish', 'grey-ish', 'orange-ish',
'pink-ish', 'purple-ish', 'red-ish', 'yellow-ish'])
index=labels.codes.tolist().index(label)
return labels.categories.tolist()[index]
# ๆต่ฏไธ
get_label_name(3)
#ๆฅ็ๆ ็ญพๅๅธ
dataframe.loc[:, 'label'].value_counts()
#ๅๅค่ฎญ็ปๆฐๆฎ
dataframe = dataframe[['r','g','b','label']]
#ๅๅฒๆ่ฎญ็ป้ใ้ช่ฏ้ใๆต่ฏ้
from sklearn.model_selection import train_test_split
df=dataframe.copy()
train, test = train_test_split(df, test_size=0.1)
train, val = train_test_split(train, test_size=0.1)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
# ไธ็งไป Pandas Dataframe ๅๅปบ tf.data ๆฐๆฎ้็ๅฎ็จ็จๅบๆนๆณ๏ผutility method๏ผ
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
x = dataframe.copy()
x=x.astype('float64')
y = x.pop('label')
ds = tf.data.Dataset.from_tensor_slices((x.values, y.values))
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
return ds
import datetime
import tensorflow as tf
#ๅๆฐ
#Batch Size๏ผไธๆฌก่ฎญ็ปๆ้ๅ็ๆ ทๆฌๆฐใ
BATCH_SIZE=64
#ๅคฑๆดป็(Dropout Rate) ๆฏๅฑไธญไธขๅผ็็ฅ็ปๅ
ๅ ๆดๅฑ็ฅ็ปๅ
็ๆฏ็
DROPOUT_RATE=0.1045
#่ฝฎๆฌก๏ผๆดไธช่พๅ
ฅๆฐๆฎ็ๅๆฌกๅๅๅๅๅไผ ้
EPOCHS=100
log_dir = "logs/fit/DROPOUT_RATE_" + str(DROPOUT_RATE)+"_"+datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
train_ds=df_to_dataset(train,batch_size=BATCH_SIZE)
val_ds=df_to_dataset(val,batch_size=BATCH_SIZE)
test_ds=df_to_dataset(test,batch_size=BATCH_SIZE)
for f in train_ds.take(1):
print(f)
model = tf.keras.Sequential([
tf.keras.layers.Dense(12, input_shape=(3,),activation='softplus'),
tf.keras.layers.Dense(48, activation='relu'),
tf.keras.layers.Dropout(DROPOUT_RATE),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(DROPOUT_RATE),
tf.keras.layers.Dense(48, activation='relu'),
tf.keras.layers.Dense(9, activation='softmax')
])
#optimizer=tf.keras.optimizers.SGD(learning_rate=0.25)
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.summary()
model.fit(train_ds,
validation_data=val_ds,
epochs=EPOCHS,
callbacks=[tensorboard_callback])
%tensorboard --logdir logs/fit
test_loss, test_acc = model.evaluate(test_ds, verbose=2)
print('\nTest accuracy:', test_acc)
probability_model = tf.keras.Sequential([model,
tf.keras.layers.Softmax()])
import numpy as np
predictions = probability_model.predict(np.array([[110,2,25]]))
predictions[0]
get_label_name(np.argmax(predictions[0]))
import cv2
# opencv
import numpy as np
from matplotlib import pyplot as plt
#ๅๅปบไธๅผ ้ป่ฒ็ๅพ็
img = np.zeros((28,32), np.float32)
print(img.shape)
img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
print(img.shape)
plt.imshow(img)
plt.show()
rgb_to_hsv([110,2,25])
'''
HSV้ข่ฒ็ฉบ้ด่งๅฎ:H่ๅด0~360,S่ๅด0~1,V่ๅด0~1
PSไธญ็HSV่ๅด๏ผHๆฏ0-360๏ผSๆฏ0-1๏ผV๏ผB๏ผๆฏ0-1
opencvไธญ็HSV่ๅด๏ผHๆฏ0-180๏ผSๆฏ0-255๏ผVๆฏ0-255
'''
# h:0-360 , s:0-255, v:0-255
# r:0-255, g:0-255, b:0-255
def rgb_to_hsv(rgb=[]):
img=np.array([[rgb]],np.uint8)
#print(img)
#print(img)
plt.imshow(img)
plt.show()
img_hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
img_hsv[0][0]*=2
#print(img_hsv)
return img_hsv
def hsv_to_rgb(hsv=[]):
hsv[0]/=2
img_hsv=np.array([[hsv]],np.uint8)
img_rgb = cv2.cvtColor(img_hsv, cv2.COLOR_HSV2RGB)
plt.imshow(img_rgb)
plt.show()
return img_rgb
#rgb_to_hsv([250,0,0])
#hsv_to_rgb([360,255,255])
# ไปh=0 ๅผๅงๆ่ฝฌ๏ผๆฏ18ยฐๅไธ็ป้ข่ฒ๏ผไฝไธบ้
่ฒๆนๆก
for i in range(0,360,18):
#print(i)
a=hsv_to_rgb([i,255,255])
b=hsv_to_rgb([i+18,255,255])
print(a,b)
#opencv่ฏปๅๅพ็๏ผ้ป่ฎคๆฏBGR
img=cv2.imread('img/test.jpg',cv2.IMREAD_COLOR)
img=cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.show()
| 0.323487 | 0.899343 |
# Regression with Amazon SageMaker XGBoost (Parquet input)
This notebook demonstrates how to use a Parquet dataset with the SageMaker XGBoost algorithm. The example here is almost the same as [Regression with Amazon SageMaker XGBoost algorithm](xgboost_abalone.ipynb).
This notebook tackles the exact same problem with the same solution, but has been modified for a Parquet input.
The original notebook provides details of the dataset and the machine-learning use case.
This notebook was tested in Amazon SageMaker Studio on a ml.t3.medium instance with Python 3 (Data Science) kernel.
```
!pip3 install -U sagemaker
import os
import boto3
import re
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
region = boto3.Session().region_name
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket here if you wish.
bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/DEMO-xgboost-parquet"
bucket_path = "https://s3-{}.amazonaws.com/{}".format(region, bucket)
```
We will use [PyArrow](https://arrow.apache.org/docs/python/) library to store the Abalone dataset in the Parquet format.
```
import pyarrow
%%time
import numpy as np
import pandas as pd
from sklearn.datasets import load_svmlight_file
s3 = boto3.client("s3")
# Download the dataset and load into a pandas dataframe
FILE_NAME = "abalone.csv"
s3.download_file("sagemaker-sample-files", f"datasets/tabular/uci_abalone/abalone.csv", FILE_NAME)
feature_names = [
"Sex",
"Length",
"Diameter",
"Height",
"Whole weight",
"Shucked weight",
"Viscera weight",
"Shell weight",
"Rings",
]
data = pd.read_csv(FILE_NAME, header=None, names=feature_names)
# SageMaker XGBoost has the convention of label in the first column
data = data[feature_names[-1:] + feature_names[:-1]]
data["Sex"] = data["Sex"].astype("category").cat.codes
# Split the downloaded data into train/test dataframes
train, test = np.split(data.sample(frac=1), [int(0.8 * len(data))])
# requires PyArrow installed
train.to_parquet("abalone_train.parquet")
test.to_parquet("abalone_test.parquet")
%%time
sagemaker.Session().upload_data(
"abalone_train.parquet", bucket=bucket, key_prefix=prefix + "/" + "train"
)
sagemaker.Session().upload_data(
"abalone_test.parquet", bucket=bucket, key_prefix=prefix + "/" + "test"
)
```
We obtain the new container by specifying the framework version (1.5-1). This version specifies the upstream XGBoost framework version (1.3) and an additional SageMaker version (1). If you have an existing XGBoost workflow based on the previous (0.72, 0.90-1, 0.90-2, or 1.0-1) container, this would be the only change necessary to get the same workflow working with the new container.
```
container = sagemaker.image_uris.retrieve("xgboost", region, "1.5-1")
```
After setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 5 and 6 minutes.
```
%%time
import time
from time import gmtime, strftime
job_name = "xgboost-parquet-example-training-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
# Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = {
"AlgorithmSpecification": {"TrainingImage": container, "TrainingInputMode": "Pipe"},
"RoleArn": role,
"OutputDataConfig": {"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"},
"ResourceConfig": {"InstanceCount": 1, "InstanceType": "ml.m5.2xlarge", "VolumeSizeInGB": 20},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.7",
"objective": "reg:linear",
"num_round": "10",
"verbosity": "2",
},
"StoppingCondition": {"MaxRuntimeInSeconds": 3600},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + "/train",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "application/x-parquet",
"CompressionType": "None",
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + "/test",
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "application/x-parquet",
"CompressionType": "None",
},
],
}
client = boto3.client("sagemaker", region_name=region)
client.create_training_job(**create_training_params)
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
while status != "Completed" and status != "Failed":
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print(status)
%matplotlib inline
from sagemaker.analytics import TrainingJobAnalytics
metric_name = "validation:rmse"
metrics_dataframe = TrainingJobAnalytics(
training_job_name=job_name, metric_names=[metric_name]
).dataframe()
plt = metrics_dataframe.plot(
kind="line", figsize=(12, 5), x="timestamp", y="value", style="b.", legend=False
)
plt.set_ylabel(metric_name);
```
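If you prefer raw numbers to the plot, the final metric values recorded for the job can also be read directly from the `DescribeTrainingJob` response (a short sketch; the metric names reported depend on the algorithm's metric definitions):
```
desc = client.describe_training_job(TrainingJobName=job_name)
# FinalMetricDataList holds the last reported value of each metric (e.g. validation:rmse).
for metric in desc.get("FinalMetricDataList", []):
    print(metric["MetricName"], metric["Value"])
```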
|
github_jupyter
|
| 0.457379 | 0.934035 |
<a href="https://colab.research.google.com/github/xuezzou/Rebuild-My-Professor/blob/main/Tags-Prediction/Word_Embeddings.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#Word Embedding
This notebook is part of the source code of our project to predict professor tags based on user comments. It uses Word2Vec (provided by gensim) to train a custom embedding of the words that appear in our comments database collected from ratemyprofessors.com.
You do not need to rerun this notebook to run our model notebooks.
To run this notebook, first add [our drive](https://drive.google.com/drive/folders/15wGLUvjiGtFXMZ0XzpDYeSH1XEqWjUBB) as a shortcut to your google drive or change the path below.
For more information, please visit our [github repository](https://github.com/xuezzou/Rebuild-My-Professor/tree/main/Tags-Prediction).
```
import pandas as pd
import numpy as np
from nltk.tokenize import sent_tokenize
from google.colab import drive
import nltk
import os
from gensim.models import Word2Vec
from tqdm import tqdm
import json
from gensim.models.callbacks import CallbackAny2Vec
from gensim.models import Word2Vec, KeyedVectors
drive.mount('/content/drive')
input_path = "/content/drive/My Drive/Rebuild my Professor/data/Ratings/"
files = [file for file in os.listdir(input_path) if file.endswith("csv")]
data = []
for file in tqdm(files):
df = pd.read_csv(input_path + file, engine="python", index_col = 0)
data.append(df)
data = pd.concat(data)
nltk.download('punkt')
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
comments = data["rComments"].tolist()
sentences = []
for eachComment in tqdm(comments):
if isinstance(eachComment, str):
tmp = tokenizer.tokenize(eachComment)
sentences.extend(tmp)
print(len(sentences), " sentences are collected.")
all_words = []
for i, sent in tqdm(enumerate(sentences)):
all_words.append(nltk.word_tokenize(sent))
if (i % 1000000 == 0) & (i > 0):
k = i / 1000000
with open('/content/drive/My Drive/Rebuild my Professor/Word_Embeddings/words_{}.json'.format(int(k)), 'w') as f:
json.dump(all_words, f)
all_words = []
with open('/content/drive/My Drive/Rebuild my Professor/Word_Embeddings/words_{}.json'.format(i // 1000000 + 1), 'w') as f:
json.dump(all_words, f)
from IPython import get_ipython
get_ipython().run_line_magic('reset', '-sf')
# The reset above clears the whole namespace, so re-import everything needed below.
import os
import json
import numpy as np
import pandas as pd
from tqdm import tqdm
from gensim.models import Word2Vec, KeyedVectors
from gensim.models.callbacks import CallbackAny2Vec
input_path = "/content/drive/My Drive/Rebuild my Professor/Word_Embeddings/"
word_files = os.listdir(input_path)
all_words = []
for file_name in tqdm(word_files[:6]):
with open(input_path + file_name, 'r') as j:
words = json.loads(j.read())
all_words.extend(words)
class callback(CallbackAny2Vec):
'''Callback to print loss after each epoch.'''
def __init__(self):
self.epoch = 0
def on_epoch_end(self, model):
loss = model.get_latest_training_loss()
print('Loss after epoch {}: {}'.format(self.epoch, loss))
self.epoch += 1
word2vec = Word2Vec(sentences=all_words, size=32, min_count=2, window=5, workers=4, iter=5, compute_loss=True, callbacks=[callback()])
word2vec.wv.save_word2vec_format('/content/drive/My Drive/Rebuild my Professor/word2vec_model.bin', binary=True)
```
# Understanding the Word2Vec embedding space
Below, we show the most similar words for several domain-specific words to get a better understanding of the word2vec embedding we trained.
```
word2vec.wv.most_similar('office', topn=20)
word2vec.wv.most_similar('paper', topn=20)
word2vec.wv.most_similar('exam', topn=20)
word2vec.wv.most_similar('professor', topn=20)
word2vec.wv.most_similar('curve', topn=20)
word2vec.wv.most_similar('grade', topn=20)
word2vec.wv.most_similar('A', topn=20)
word2vec.wv.most_similar('C', topn=20)
word2vec.wv.most_similar('C++', topn=20)
word2vec.wv.most_similar('timed', topn=20)
word2vec.wv.most_similar('math', topn=20)
word2vec.wv.most_similar('extra', topn=20)
word2vec.wv.most_similar('Vanderbilt', topn=20)
keys = ['paper', 'exam', 'math', 'homework', 'curve', 'extra', 'professor', 'great', 'helpful', 'Vanderbilt']
embedding_clusters = []
word_clusters = []
for word in keys:
embeddings = []
words = []
    for similar_word, _ in word2vec.wv.most_similar(word, topn=30):
        words.append(similar_word)
        embeddings.append(word2vec.wv[similar_word])
embedding_clusters.append(embeddings)
word_clusters.append(words)
from sklearn.manifold import TSNE
import numpy as np
embedding_clusters = np.array(embedding_clusters)
n, m, k = embedding_clusters.shape
tsne_model_en_2d = TSNE(perplexity=15, n_components=2, init='pca', n_iter=3500, random_state=32)
embeddings_en_2d = np.array(tsne_model_en_2d.fit_transform(embedding_clusters.reshape(n * m, k))).reshape(n, m, 2)
```
We can also visualize the embedding by reducing its dimensionality and plotting several words and their most similar words in one graph.
```
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
def tsne_plot_similar_words(title, labels, embedding_clusters, word_clusters, a, filename=None):
plt.figure(figsize=(25, 16))
colors = cm.rainbow(np.linspace(0, 1, len(labels)))
for label, embeddings, words, color in zip(labels, embedding_clusters, word_clusters, colors):
x = embeddings[:, 0]
y = embeddings[:, 1]
plt.scatter(x, y, c=color, alpha=a, label=label)
for i, word in enumerate(words):
plt.annotate(word, alpha=0.8, xy=(x[i], y[i]), xytext=(5, 2),
textcoords='offset points', ha='right', va='bottom', size=8)
plt.legend(loc=4)
plt.title(title)
plt.grid(True)
if filename:
plt.savefig(filename, format='png', dpi=150, bbox_inches='tight')
plt.show()
tsne_plot_similar_words('Similar words in the RateMyProfessors comment embedding', keys, embeddings_en_2d, word_clusters, 0.7,
                        'similar_words.png')
```
# Text Classification Model
```
from gensim.test.utils import datapath
word2vec = KeyedVectors.load_word2vec_format(datapath("/content/drive/My Drive/Rebuild my Professor/Radar Chart notebooks/word2vec_model.bin"), binary=True)
import torch
import torch.nn as nn
weights = torch.FloatTensor(word2vec.vectors)
embedding = nn.Embedding.from_pretrained(weights)
embedded_words = word2vec.vocab
import os
input_path = "/content/drive/My Drive/Rebuild my Professor/data/Ratings/"
df = pd.DataFrame()
for file in tqdm(os.listdir(input_path)):
if file.endswith("csv"):
df_temp = pd.read_csv(input_path + file, lineterminator='\n')
df_temp = df_temp.sample(frac=0.01)
df = df.append(df_temp)
df=df.loc[:,['rComments','teacherRatingTags']]
df = df[df['teacherRatingTags'] != '[]']
import ast
df['teacherRatingTags'] = df['teacherRatingTags'].apply(ast.literal_eval)
# One-hot encoding of multilabel
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
df_tags = pd.DataFrame(mlb.fit_transform(df['teacherRatingTags']),columns=mlb.classes_)
import nltk
nltk.download('stopwords')
nltk.download('punkt')
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
import re
stop_words = set(stopwords.words('english'))
def preprocess_text(sen):
# Remove punctuations and numbers
sentence = re.sub('[^a-zA-Z]', ' ', sen)
# Removing multiple spaces
sentence = re.sub(r'\s+', ' ', sentence)
# Removing stopwords
tokenized_words = word_tokenize(sentence)
sentence = [word for word in tokenized_words if (not word in stop_words) and (word in embedded_words)]
return sentence
sentences = [preprocess_text(sentence) for sentence in tqdm(df["rComments"])]
def encode_sentence(sentence, N=150):
encoded = np.zeros(N, dtype=int)
enc1 = np.array([embedded_words[word].index for word in sentence])
encoded[:len(enc1)] = enc1
return encoded
MAX_LENGTH = 150
encoded_sentences = [encode_sentence(sentence, MAX_LENGTH) for sentence in tqdm(sentences)]
X = np.array(encoded_sentences)
y = df_tags
from sklearn.model_selection import train_test_split
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2)
from numpy import array
from keras.preprocessing.text import one_hot
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers.core import Activation, Dropout, Dense
from keras.layers import Flatten, LSTM
from keras.layers import GlobalMaxPooling1D
from keras.models import Model
from keras.layers.embeddings import Embedding
from sklearn.model_selection import train_test_split
from keras.preprocessing.text import Tokenizer
from keras.layers import Input
from keras.layers.merge import Concatenate
from sklearn.feature_extraction.text import CountVectorizer
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, SpatialDropout1D
from sklearn.model_selection import train_test_split
from keras.utils.np_utils import to_categorical
from sklearn.utils import resample
from sklearn.utils import shuffle
from sklearn.metrics import confusion_matrix,classification_report
import re
lstm_out = 196
MAX_LENGTH = 150
MAX_FEATURE = len(word2vec.vocab)
WEIGHTS = word2vec.vectors
EMBEDDING_DIM = WEIGHTS.shape[1]
model = Sequential()
model.add(Embedding(MAX_FEATURE,
EMBEDDING_DIM,
weights=[WEIGHTS],
input_length=MAX_LENGTH))
model.add(SpatialDropout1D(0.4))
model.add(LSTM(lstm_out, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(20,activation='softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer='adam',metrics = ['accuracy'])
print(model.summary())
import tensorflow as tf
print(tf.__version__)
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
batch_size = 128
model.fit(X_train, y_train, epochs = 10, batch_size=batch_size, verbose = 1)
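# Hold-out check on the 20% validation split created earlier (a minimal sketch;
# X_valid / y_valid come from the train_test_split above).
score, acc = model.evaluate(X_valid, y_valid, batch_size=batch_size, verbose=1)
print('Validation loss: {:.4f}, accuracy: {:.4f}'.format(score, acc))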
max_features = 2000
tokenizer = Tokenizer(num_words=max_features, split=' ')
tokenizer.fit_on_texts(df['rComments'].astype(str).values)
X = tokenizer.texts_to_sequences(df['rComments'].astype(str).values)
X = pad_sequences(X)
# print(X[:2])
Y = df_tags.values
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.20, random_state=42)
print(X_train.shape, Y_train.shape)
```
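As a quick sanity check of the trained LSTM, we can also score a single made-up comment using the preprocessing and encoding helpers defined above (a minimal sketch; the example comment is invented and the tag names come from `df_tags`):
```
new_comment = "Great lectures and very helpful in office hours, but the exams are tough."
encoded = encode_sentence(preprocess_text(new_comment), MAX_LENGTH).reshape(1, -1)
probs = model.predict(encoded)[0]
# Print the three highest-scoring tags.
for idx in np.argsort(probs)[::-1][:3]:
    print(df_tags.columns[idx], round(float(probs[idx]), 3))
```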
|
github_jupyter
|
| 0.414425 | 0.827619 |
```
from estruturas.pilha import *
from estruturas.fila import *
from estruturas.deque import *
from estruturas.pilha_dinamica import *
from estruturas.fila_dinamica import *
from estruturas.lista import *
from estruturas.arvore import *
from estruturas.arvore_binaria import *
from cyjupyter import Cytoscape
import json
```
# Red-Black Tree
A red-black tree is a type of balanced binary tree created by Rudolf Bayer in 1972, with refinements by L. J. Guibas and R. Sedgewick in 1978.
It uses a node-coloring scheme to keep the tree balanced; recall that AVL trees use the height of the subtrees to structure their balancing.
Thus, in a red-black tree, every node has a color attribute, which can be <b>red</b> or <b>black</b>.
<img src="./img/arvore_rb.png" width="300">
### Properties
* Every node in the tree is <b>red</b> or <b>black</b>
* The root is always <b>black</b>
* Every leaf node is <b>black</b>
* If a node is <b>red</b>, then its children are <b>black</b>
* There are no consecutive <b>red</b> nodes
* For each node, every path from that node down to a descendant leaf contains the same number of <b>black</b> nodes
Since every leaf node ends in two null pointers, these can be omitted from the tree's representation for didactic purposes.
<img src="./img/arvore_rb_sem_folhas.png" width="650">
The height h of a red-black tree with n keys (internal nodes) is at most 2 log(n+1).
* This result shows the importance and usefulness of a red-black tree, since search, insertion, and removal have time complexity O(h) = O(log n).
* Insertions and removals in a red-black tree can modify its structure. We need to guarantee that none of the red-black tree properties is violated.
* To do that we may have to change the structure of the tree and the colors of some of its nodes. The structural change is performed by two types of rotations on branches of the tree, shown below:
    * left-rotate and
    * right-rotate
## Balancing
Balancing is done through rotations and color adjustments on every insertion or removal:
* they maintain the balance of the tree,
* they correct possible violations of its properties,
* and the maximum cost of any of these algorithms is O(log N).
### The left-rotate rotation
Let T be a pointer to a binary tree.
| Step 1 | Step 2 |
|---------|---------|
| <img src="./img/arvore_rb_left_rot1.png" width="350"> | <img src="./img/arvore_rb_left_rot2.png" width="350"> |
```
left-rotate(T,x):
    y ← right[x]
    right[x] ← left[y]
    if left[y] <> nil[T] then
        pai[left[y]] ← x
    end if
    pai[y] ← pai[x]
    if pai[x] = nil[T] then
        T ← y
    else
        if x = left[pai[x]] then
            left[pai[x]] ← y
        else
            right[pai[x]] ← y
        end if
    end if
    left[y] ← x
    pai[x] ← y
```
The right-rotate(T, y) algorithm is analogous.
1. Implement the left-rotate(T, x) algorithm for our RB tree structure.
```
def leftRotate(self, x):
y = x.right
x.right = y.left
if y.left != self.TNULL:
y.left.parent = x
y.parent = x.parent
if x.parent is None:
self.root = y
elif x == x.parent.left:
x.parent.left = y
else:
x.parent.right = y
y.left = x
x.parent = y
```
2. Implement the right-rotate(T, x) algorithm for our RB tree structure.
```
def rightRotate(self, x):
y = x.left
x.left = y.right
if y.right != self.TNULL:
y.right.parent = x
y.parent = x.parent
if x.parent is None:
self.root = y
elif x == x.parent.right:
x.parent.right = y
else:
x.parent.left = y
y.right = x
x.parent = y
```
3. Complete the implementation of an RB tree, based on the Java class available at https://www.ime.usp.br/~pf/estruturas-de-dados/aulas/st-redblack.html. RedBlackBST class: https://www.ime.usp.br/~pf/sedgewick-wayne/algs4/RedBlackBST.java
```
https://github.com/VanessaSilva99/EstruturaDeDados2/tree/main/AVL/Rubro_Negra_Tree
```
4. Test insertion into the tree by inserting the elements E, A, R, C, H, X, M, P, L (a sketch is given below).
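A minimal sketch for exercise 4, assuming the red-black tree implementation from the repository linked in exercise 3 is available as a `RedBlackTree` class with an `insert` method (both names are assumptions; adapt them to the actual implementation):
```
tree = RedBlackTree()
# Insert the keys one by one; each insertion triggers the recoloring/rotation fix-up.
for key in ['E', 'A', 'R', 'C', 'H', 'X', 'M', 'P', 'L']:
    tree.insert(key)
# After all insertions, an in-order traversal should return the keys in sorted order
# and the tree should still satisfy the red-black properties listed above.
```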
|
github_jupyter
|
| 0.324771 | 0.883286 |
```
%load_ext autoreload
%autoreload 2
import json
from dac_costing.model import DacModel, DacSection, BatterySection, EnergySection
```
## Model Parameters
Here we open the default model parameters. We can modify these as needed...
```
with open('../dac_costing/data/parameters.json', 'r') as f:
params = json.load(f)
from uncertainties import ufloat
stdev = 0.1
def cast_to_ufloat(d):
u = {}
for p, val in d.items():
if isinstance(val, dict):
u[p] = cast_to_ufloat(val)
        elif isinstance(val, float):
u[p] = ufloat(val, val*stdev, tag=p)
else:
u[p] = val
return u
params = cast_to_ufloat(params)
```
## C1 - Natural Gas
```
from dac_costing.model import NgThermalSection
params['Base Energy Requirement [MW]'] = 46.6 # ='Report Data'!C58
params['Required Thermal Energy [GJ/tCO2]'] = 6.64
params['Total Capex [$]'] = 1029 # =+'Report Data'!C21
electric = EnergySection(source='NGCC w/ CCS', battery=None, params=params)
electric.compute().series
thermal = NgThermalSection(source='Advanced NGCC', battery=None, params=params)
dac = DacSection(params=params)
dac_all = DacModel(electric=electric, thermal=thermal, dac=dac, params=params)
dac_all.compute().series
```
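Because the parameters were cast to `ufloat` values above, the entries of the returned series carry propagated uncertainties. Below is a small sketch of splitting them into nominal values and standard deviations; it assumes `.series` is a pandas Series of `uncertainties` objects, as set up in this notebook:
```
import pandas as pd
c1 = dac_all.compute().series
pd.DataFrame({
    'nominal': c1.apply(lambda v: getattr(v, 'n', v)),
    'std_dev': c1.apply(lambda v: getattr(v, 's', 0.0)),
})
```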
## C2 - Electric Kiln (Solar)
```
params['Base Energy Requirement [MW]'] = 38 # ='Report Data'!C64
params['Total Capex [$]'] = 936.01 # ='Report Data'!H27
ebattery = BatterySection(params)
electric = EnergySection(source='Solar', battery=ebattery, params=params)
params['Base Energy Requirement [MW]'] = 234 # =F18
tbattery = BatterySection(params=params)
thermal = EnergySection(source='Solar', battery=tbattery, params=params)
dac = DacSection(params=params)
dac_all = DacModel(electric=electric, thermal=thermal, dac=dac, params=params)
dac_all.compute().series
```
## C3 - Electric Kiln (Nuclear)
```
params['Base Energy Requirement [MW]'] = 38     # ='Report Data'!C64
params['Total Capex [$]'] = 936.01              # ='Report Data'!H27
electric = EnergySection(source='Advanced Nuclear', battery=None, params=params)
params['Base Energy Requirement [MW]'] = 234    # =F18
thermal = EnergySection(source='Advanced Nuclear', battery=None, params=params)
dac = DacSection(params=params)
dac_all = DacModel(electric=electric, thermal=thermal, dac=dac, params=params)
dac_all.compute().series
```
## C4 - Electric Kiln (Wind)
```
params['Base Energy Requirement [MW]'] = 38     # ='Report Data'!C64
params['Total Capex [$]'] = 936.01 # ='Report Data'!H27
ebattery = BatterySection(params=params)
electric = EnergySection(source='Wind', battery=ebattery, params=params)
params['Base Energy Requirement [MW]'] = 234    # =F18
tbattery = BatterySection(params=params)
thermal = EnergySection(source='Wind', battery=tbattery, params=params)
dac = DacSection(params=params)
dac_all = DacModel(electric=electric, thermal=thermal, dac=dac, params=params)
dac_all.compute().series
```
|
github_jupyter
|
| 0.286369 | 0.571826 |
# Oceanmapper tutorial #
This is a working example of how to use oceanmapper to generate a 3D map of ocean bathymetry and data using mayavi.
The tutorial will work through the following steps:
- loading python modules
- reading in data/model output, formatted as a numpy array
- set parameters for 3D projection
- plotting a 2D vertical data slice on a 3D map (in this case a vertical section of oxygen data)
- changing the map projection and vertical scaling
- changing colormaps
- adding vectors and lines to the map
## Step 1: import modules ##
Here we need to import the Python modules (groups of functions) that we need to run the script. We just need mayavi, numpy, and the oceanmapper module.
```
from mayavi import mlab
import numpy as np
import oceanmapper as omap
```
Don't worry about the warning! Keep going!
Now let's do a quick test to make sure mayavi is working in the notebook. The following code allows mayavi to show 3D plots in the notebook, then runs a test, which should show a 3D plot that is interactive (you can click and drag to view from different angles).
```
mlab.init_notebook('x3d',600, 600)
mlab.figure()
mlab.test_plot3d()
```
## Step 2: load data or model output ##
This is where you read in the data to be plotted to python. In this case we are using a netCDF file of GO-SHIP I09S repeat hydrographic section dissolved oxygen __[downloaded from the CCHDO](https://cchdo.ucsd.edu/cruise/09AR20041223)__, which you could replace with any data of your own in any format that is readable by Python.
```
fn = 'i09s_20041223_oxygen.npz' #specify your filename here
d=np.load(fn) #load it
xdata=d['lon']
ydata=d['lat']
zdata = d['depth']
scalardata = d['oxygen'] #this is your data on the surface, could be anything
scalardata #check the data looks ok
```
## Step 3: specify parameters for map projection ##
This is where you decide on the properties of your 3D map. In this example I've set it up to map a rectangular sector of bathymetry including the Southern part of Australia, with the dissolved oxygen I09S section data shown on top.
```
mode='rectangle'
lat_min = -55
lat_max = -25
lon_min = 100
lon_max = 150
zscale = 500
data_cmap='YlGnBu'
vmin=150
vmax=210
data_alpha = 1
topo_cmap = 'bone'
```
## Step 4: make 3D map ##
Now you are all set up to generate a 3D map, using a single function that will plot ETOPO topography and the data together. The mayavi scene should appear below the cell (or in a separate window if you are not using the notebook backend).
```
mlab.init_notebook('x3d',600, 600)
mlab.figure()
omap.topo_surface3d(mode,xdata,ydata,zdata,scalardata,zscale=zscale,vmin=vmin,vmax=vmax,topo_limits=[lon_min, lon_max, lat_min, lat_max],data_alpha=data_alpha,data_cmap=data_cmap,topo_cmap=topo_cmap,topo_cmap_reverse=True)
```
Yay, you've made your first 3D ocean map! The map is now a mayavi scene object whose properties you can query and modify using any of the built-in mayavi functions. With these you can do many things you'd do with a regular 2D plot, like add axis labels, add a colorbar, etc., as in the short example below.
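For instance, a colorbar and a title can be added to the current scene with standard mlab calls (a minimal sketch; the label text is just a suggestion):
```
mlab.colorbar(title='dissolved oxygen', orientation='vertical')
mlab.title('I09S section over ETOPO bathymetry', size=0.4)
```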
You should be able to click and drag the map to view from different angles, known as the mayavi 'view'.
You can get current information about the view using mlab.view()
```
mlab.view()
```
To save the current view of the map, use mlab.savefig
```
mlab.savefig('Imadea3Dmap.png')
```
## Step 5: modify map parameters ##
Now you can play around with the input parameters to change how your map looks.
What happens if you go back to the input parameters and change the latitude and longitude limits and rerun the script?
What if you change the depth scaling zscale? The default here is to divide the depth (in meters) by 500; a larger number leads to less exaggeration of the depth axis, and a smaller number leads to greater exaggeration.
What happens if you change the mode from 'rectangle' to 'sphere'? (Hint, you will likely also want to increase the zscale in this case)
You can also play with changing the colormaps for the data and topography. There are more parameters that can be changed, and you can input your own topography file instead of using the default ETOPO. To see all the input parameter options and defaults, we can run help() on the topo_surface3d function.
```
help(omap.topo_surface3d)
```
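As a concrete example, the call below redraws the same map with less vertical exaggeration by changing only `zscale`; every other argument is identical to the map above:
```
mlab.figure()
omap.topo_surface3d(mode,xdata,ydata,zdata,scalardata,zscale=1000,vmin=vmin,vmax=vmax,topo_limits=[lon_min, lon_max, lat_min, lat_max],data_alpha=data_alpha,data_cmap=data_cmap,topo_cmap=topo_cmap,topo_cmap_reverse=True)
```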
## Step 6: add more to the map ##
There are similar functions in oceanmapper that can add additional 2D surfaces, 3D arrows, or trajectories onto the same bathymetry map.
Try adding something else to the map
## Step 7: try it with your own data/model output ##
Now that you know how to make a 3D map, you can go back to the beginning of the tutorial, change the input data to your own file(s), change the parameters, and make your own 3D figures!
|
github_jupyter
|
| 0.252292 | 0.980394 |
```
%run common.ipynb
SPP = ['gambiae', 'coluzzii']
METRIC = 'hamming'
```
# Single amplicon processing
```
# sample names per species
sps = dict()
for sp in SPP:
sps[sp] = samples.loc[samples.species==sp, 'ox_code']
display([x.shape for x in sps.values()])
# read amplicon data
ampl = '60'
callset = zarr.open_group(AMPL_HAP_ZARR, 'r')
gt = allel.GenotypeArray(callset[ampl]['calldata/genotype'])
# extract samples - those differ between autosome and X
gs = pd.Series(callset[ampl]['samples'][:])
# nuber of variants
nvar = gt.shape[0]
# extract genotypes per species
gts = dict()
for sp in SPP:
gts[sp] = gt[:, gs.isin(sps[sp]), :]
display([x.shape for x in gts.values()])
# convert to haplotypes
hts = dict()
for sp in SPP:
hts[sp] = gts[sp].reshape(gts[sp].shape[0], 2 * gts[sp].shape[1]).transpose()
display([x.shape for x in hts.values()])
# pairwise distances
pds = dict()
# within sp
for sp in SPP:
pds[sp] = pdist(hts[sp], metric=METRIC)
# between spp for pairwise combinations of species
pds['between_spp'] = np.array([])
for (sp1, sp2) in itertools.combinations(SPP, 2):
pds['between_spp'] = np.append(pds['between_spp'],
cdist(hts[sp1],hts[sp2], metric=METRIC))
# plot the distance distributions for each comparison
for (sp, dist) in pds.items():
plt.hist(dist * nvar, bins=range(8), alpha=0.5, label=sp);
plt.legend();
```
# Batch processing
```
callset = zarr.open_group(AMPL_HAP_ZARR, 'r')
# accumulate pairwise distances into dict
pds = dict()
for ampl in callset:
sys.stdout.write('\r' + ampl)
pds[ampl] = dict()
# read genotypes
gt = allel.GenotypeArray(callset[ampl]['calldata/genotype'])
# extract samples - those differ between autosome and X amplicons
gs = pd.Series(callset[ampl]['samples'][:])
    # number of variants
pds[ampl]['nvar'] = gt.shape[0]
# extract genotypes per species
gts = dict()
for sp in SPP:
gts[sp] = gt[:, gs.isin(sps[sp]), :]
# convert to haplotypes
hts = dict()
for sp in SPP:
hts[sp] = gts[sp].reshape(gts[sp].shape[0], 2 * gts[sp].shape[1]).transpose()
# pairwise distances
# within sp
for sp in SPP:
pds[ampl][sp] = pdist(hts[sp], metric=METRIC)
# between spp for pairwise combinations of species
pds[ampl]['between_spp'] = np.array([])
for (sp1, sp2) in itertools.combinations(SPP, 2):
pds[ampl]['between_spp'] = np.append(pds[ampl]['between_spp'],
cdist(hts[sp1],hts[sp2], metric=METRIC))
fig, axs = plt.subplots(8,8, figsize=(15,15))
for i, ampl in enumerate(AMPLS):
ax = axs.flatten()[i]
n = pds[ampl]['nvar']
for sp in SPP + ['between_spp']:
pd_var = pds[ampl][sp] * n
bins = np.arange(0, int(pd_var.max()) + 1.5) - 0.5
ax.hist(pd_var,
bins=bins,
alpha=0.4,
label=sp);
ax.set_xticks(bins + 0.5)
if i == 0:
ax.legend()
ax.set_title(ampl)
# ax.set_xscale('log')
ax.set_yscale('log')
ax.set_ylim(0.9)
fig.tight_layout()
```
## Within-species distance thresholds
The debug histogram above shows that in most cases the distributions of pairwise distances between haplotypes are contiguous, so taking the maximum observed within-species distance will work as a threshold. The only doubtful amplicon is 33, where gambiae is far more variable than both coluzzii and the between-species comparison.
```
var_data = panel_mosquito.loc[:, ['start_insert','end_insert']].reset_index()
# sequence length in agam genome, both start and end positions taken into insert by subsetting code
# (see previous notebook)
var_data['len_insert'] = var_data.end_insert - var_data.start_insert + 1
# find maximum within-species distances across both species
def max_sp_pd_var(ampl):
max_sp_dist = [max(pds[ampl][sp]) for sp in SPP]
return max(max_sp_dist) * pds[ampl]['nvar']
var_data['max_withinspecies_var'] = var_data.Primary_ID.apply(max_sp_pd_var)
# normalise by sequence length
var_data['max_withinspecies_dist'] = var_data.max_withinspecies_var / var_data.len_insert
var_data.head()
# write
var_data.to_csv(WSP_VAR_FILE, index=False)
! head {WSP_VAR_FILE}
```
## Compare to previous estimates
```
# estimate based on extended sample set
seq_wsp_file = '../../../data/phylo_ampl_dada2/comb1_5/3_thresholds.tsv'
seq_wsp = pd.read_csv(seq_wsp_file, sep='\t')
seq_wsp.head()
# convert to dict
seq_t = seq_wsp.set_index('target')['dist_threshold'].to_dict()
var_data['seq_threshold'] = var_data.Primary_ID.astype(int).replace(seq_t)
fig, ax = plt.subplots(1,1, figsize=(10,4))
ax.scatter(var_data.Primary_ID.astype(int), var_data.seq_threshold, c='b', alpha=0.5, label='sequencing')
ax.scatter(var_data.Primary_ID.astype(int), var_data.max_withinspecies_dist, c='r', alpha=0.5, label='Ag1000g')
ticks = range(62)
ax.set_xticks(ticks)
plt.xticks(rotation=45)
for t in ticks:
ax.axvline(t - 0.5, c='grey')
ax.legend();
fig, ax = plt.subplots(1,1, figsize=(4,4))
ax.scatter(var_data.max_withinspecies_dist,
var_data.seq_threshold)
ax.set_xlabel('Ag1000g max dist')
ax.set_ylabel('Sequencing data threshold')
ax.plot([0, 0.1], [0, 0.1], c='grey', ls='--');
```
Thresholds for the most variable amplicons are underestimated compared to the sequencing data - potentially because of the lack of indel data?
|
github_jupyter
|
| 0.437583 | 0.81648 |
```
import multiprocessing as mp
# https://github.com/matplotlib/matplotlib/issues/15410#issuecomment-625027757
mp.set_start_method('forkserver')
%matplotlib inline
import warnings
import pandas as pd
import matplotlib.pyplot as plt
from datetime import datetime
from functools import reduce
from vortexasdk import CargoTimeSeries, Products, Geographies
from vortexasdk.utils import convert_to_list
warnings.filterwarnings("ignore")
plt.rcParams['figure.figsize'] = (15, 10)
plt.rcParams.update({'font.size': 14})
START_DATE = datetime(2019, 6, 10)
END_DATE = datetime(2020, 6, 10)
UNIT = 'b'
```
# Define helper functions
```
def get_product_id_exact(product_name):
if product_name is None:
return None
products = [p.id for p in Products().search(product_name).to_list() if p.name==product_name]
assert len(products) == 1
return products[0]
def get_geography_id_exact(geog_name):
if geog_name is None:
return None
geogs = [g.id for g in Geographies().search(geog_name).to_list() if g.name==geog_name]
assert len(geogs) == 1
return geogs[0]
def merge(data_frames):
return reduce(
lambda left, right: pd.merge(
left, right, left_index=True, right_index=True, how="outer"
),
data_frames,
)
def plot_df(df, title=None, unit=UNIT):
df.plot(title=title, grid=True)
plt.xlabel('date')
plt.ylabel('k' + unit);
def prepare_dataset(df_fs, product_names, destination_names, storage_names, filter_activity):
# just keep key and value
df_fs = df_fs[['key', 'value']]
# use kilo unit not unit
df_fs['value'] = df_fs['value'] / 1000
# rename columns
col_name = str((destination_names or " ")) + \
" " + str((storage_names) or " ") + \
" " + str((product_names) or " ") + \
": " + filter_activity
df_fs = df_fs.rename(columns={'key': 'date', 'value': col_name})
# remove time zone from timestamp
df_fs['date'] = pd.to_datetime(df_fs['date']).dt.tz_localize(None)
return df_fs.set_index('date')
def fetch_timeseries(filter_activity, product_names=None, destination_names=None, storage_names=None,
unit=UNIT, frequency='day', start_date=START_DATE, end_date=END_DATE):
# Generate IDs
product_ids = [get_product_id_exact(name) for name in convert_to_list(product_names)]
destination_ids = [get_geography_id_exact(name) for name in convert_to_list(destination_names)]
storage_ids = [get_geography_id_exact(name) for name in convert_to_list(storage_names)]
# Load Data
df = CargoTimeSeries().search(timeseries_frequency=frequency,
timeseries_unit=unit,
disable_geographic_exclusion_rules=True,
filter_products=product_ids,
filter_destinations=destination_ids,
filter_storage_locations=storage_ids,
filter_activity=filter_activity,
filter_time_min=start_date,
filter_time_max=end_date).to_df()
# Rename columns, set index etc
return prepare_dataset(df, product_names, destination_names, storage_names, filter_activity)
```
# Define our commonly used constants
```
clean = "Clean Petroleum Products"
naphtha = "Naphtha"
diesel_gasoil = "Diesel/Gasoil"
gasoline_blending_components = "Gasoline/Blending Components"
jet_kero = "Jet/Kero"
```
# Analysis Start
### Load all global clean floating storage cargos
```
clean_fs = fetch_timeseries("storing_state", clean)
clean_fs.head()
plot_df(clean_fs, "Global Clean Floating Storage")
```
### Let's look at the product split of these global floating storage cargos
```
data_frames = [
fetch_timeseries("storing_state", gasoline_blending_components),
fetch_timeseries("storing_state", diesel_gasoil),
fetch_timeseries("storing_state", naphtha),
fetch_timeseries("storing_state", jet_kero)
]
df_merged = merge(data_frames)
df_merged.head()
plot_df(df_merged)
```
### Asia-only floating storage
```
dfs_asia = [
fetch_timeseries("storing_state", storage_names="Asia", product_names=gasoline_blending_components),
fetch_timeseries("storing_state", storage_names="Asia", product_names=diesel_gasoil),
fetch_timeseries("storing_state", storage_names="Asia", product_names=naphtha),
fetch_timeseries("storing_state", storage_names="Asia", product_names=jet_kero)
]
df_asia = merge(dfs_asia)
df_asia.head()
plot_df(df_asia)
```
### See how Diesel/Gasoil storage levels are split across Asian geographies
```
dfs_diesel_gasoil_countries = [
fetch_timeseries("storing_state", product_names=diesel_gasoil, storage_names="South Korea"),
fetch_timeseries("storing_state", product_names=diesel_gasoil, storage_names="India"),
fetch_timeseries("storing_state", product_names=diesel_gasoil, storage_names="China"),
fetch_timeseries("storing_state", product_names=diesel_gasoil, storage_names=["Singapore", "Malaysia", "Indonesia"])
]
df_diesel_gasoil_countries = merge(dfs_diesel_gasoil_countries)
df_diesel_gasoil_countries.head()
plot_df(df_diesel_gasoil_countries)
```
### Diesel/Gasoil Asian imports
```
end_date = datetime(2020, 5, 31)
dfs_imports = [
fetch_timeseries("unloading_state", diesel_gasoil, unit='bpd', frequency='month', end_date=end_date, destination_names="Australia"),
fetch_timeseries("unloading_state", diesel_gasoil, unit='bpd', frequency='month', end_date=end_date, destination_names="Indonesia"),
fetch_timeseries("unloading_state", diesel_gasoil, unit='bpd', frequency='month', end_date=end_date, destination_names="Philippines"),
fetch_timeseries("unloading_state", diesel_gasoil, unit='bpd', frequency='month', end_date=end_date, destination_names="Vietnam")
]
df_imports = merge(dfs_imports)
plot_df(df_imports)
```
|
github_jupyter
|
| 0.499268 | 0.694141 |
# "An Essentials Guide to PyTorch Dataset and DataLoader Usage"
> "A brief guide for basic usage of PyTorch's Dataset and DataLoader classes."
- toc: true
- branch: master
- badges: true
- comments: true
- categories: [pytorch]
## Overview
In this short guide, we show a small representative example using the `Dataset` and `DataLoader` classes available in PyTorch for easy batching of training examples. This is more meant to be an onboarding for me with `fastpages`, but hopefully this example will be useful to those beginning to use PyTorch for their own applications.
## Setup
The first thing we need is the essential import: `torch`, i.e. PyTorch. Make sure that when you're running a notebook with code similar to this that you've imported `torch`, i.e. `import torch`, as shown below.
```
#collapse_hide
import torch
```
We'll then need a dataset to work with. For this small example, we'll use `numpy` to generate a random dataset for us. Specifically, we'll be working with a batch size of 32 later, so we'll create a dataset with exactly 50 batches, where each example has 5 features and a corresponding label between 0-9, inclusive. To do so, we use
* `np.random.randn` for generating the input examples
* `np.random.randint` for generating the labels
The exact code is shown below.
```
#collapse_show
import numpy as np
training_examples = np.random.randn(32 * 50, 5)
training_labels = np.random.randint(0, 10, size=(32*50,))
```
As a sanity check, let's look at the shapes. We'll want the size of the *whole* dataset to be (1600, 5), as we have $32*50$ examples, each with 5 features. Similarly, we'll want the size of the labels for the whole dataset to be (1600,), as we're essentially working with a list of 1600 labels.
```
#collapse_show
training_examples.shape, training_labels.shape
```
We can look at some of the labels, just for a sanity check that they look reasonable.
```
#collapse_show
training_labels[:10]
```
## Dataset Class and Instantiation
Now, we'll create a simple PyTorch dataset class. All you need to implement within this class is the `__getitem__` function and the `__len__` function.
* `__getitem__` is a function that takes in an index, and returns `dataset[index]`
* `__len__` returns the size of your dataset (in this case, that's 32*50).
When writing this class, you MUST subclass `torch.utils.data.Dataset`, as this is a requirement for using the DataLoader class (see below).
```
class ExampleDataset(torch.utils.data.Dataset):
""" You can define the __init__ function any way you like"""
def __init__(self, examples, labels):
self.examples = examples
self.labels = labels
""" This function signature always should take in 1 argument, corresponding to the index you're going to access.
In this case, we're returning a tuple, corresponding to the training example and the corresponding label.
It will also be useful to convert the returned values to torch.Tensors, so we can push the data onto the
        GPU later on. Note how the label is put into a list, but the example isn't. This matters because of how
        `torch.Tensor` handles integers: `torch.Tensor(self.labels[index])` would interpret the integer as a size
        and return an uninitialized tensor with that many elements, rather than a 1-element tensor holding the label.
"""
def __getitem__(self, index):
return (torch.Tensor(self.examples[index]), torch.Tensor([self.labels[index]]))
""" This function signature always should take in 0 arguments, and the output should be an `int`. """
def __len__(self):
return len(self.examples)
```
Now, we can instantiate an instance of our `ExampleDataset` class, which subclasses `torch.utils.data.Dataset`. Note that we can specify how to initialize this via the `__init__` function, which takes in a list of examples, and a list of labels (i.e. what we've instantiated in our own setup).
```
training_dataset = ExampleDataset(training_examples, training_labels)
```
Sanity check - see the correspondence between accessing the dataset instance of the class above and the examples/labels we passed in.
```
training_dataset[0]
training_examples[0], training_labels[0]
```
We can iterate over this dataset using standard for loop syntax as well. The way you write the for loop depends on how `__getitem__` is set up. In our case, we return a tuple (example and label), so the for loop should also have a tuple.
```
example, label = training_dataset[0]
print(type(example), example.shape, type(label), label.shape)
from tqdm import tqdm
for example, label in tqdm(training_dataset):
continue
```
## Batching via the DataLoader class
To set up batching, we'll use the `torch.utils.data.DataLoader` class. All we have to do to create this DataLoader is to instantiate it with our dataset we created above (`training_dataset`). The arguments for `torch.utils.data.DataLoader` are worth looking at, but (generally) most important are:
* `dataset`: the PyTorch dataset class instance we'll pass in (e.g. `training_dataset`, this is why we had to do subclassing above)
* `batch_size` (optional, default is 1): the batch size we want when iterating (we'll pass in 32)
* `shuffle` (optional, default is `False`): whether we want to shuffle the dataset each time we iterate over the dataloader (note that if this is set to true, it'll reshuffle every epoch; note also that we usually only want this set to true for training, not for validation)
* `drop_last` (optional, default is `False`): whether to drop the last incomplete batch (we don't have to worry about this because the number of training examples is exactly divisible by 32)
```
training_dataloader = torch.utils.data.DataLoader(training_dataset, batch_size=32, shuffle=True)
```
Again, we can iterate, just like we did for `training_dataset`, but now, we get batches back, as we can see by printing the shapes. The magic happens in the `collate_fn` optional argument of the DataLoader class, but the default behavior is sufficient here for batching the examples and labels separately.
We'll first ensure that there are exactly 50 batches in our dataloader to work with.
```
assert len(training_dataloader) == 50
```
Now, mimicking the iteration from above, with the `ExampleDataset` instance:
```
for example, label in tqdm(training_dataloader):
continue
```
At some point, you may want to know information about a specific batch - accessing specific batches from the DataLoader is not as easy - I don't know of a way to grab a specific batch, other than doing something like the following.
```
training_dataloader_batches = [(example, label) for example, label in training_dataloader]
some_example, some_label = training_dataloader_batches[15]
some_example.shape, some_label.shape
```
However, you can always access the underlying dataset by literally doing `.dataset`, as shown below.
```
training_dataloader.dataset
training_dataloader.dataset[15]
```
## GPU Usage
Using the GPU is also trivial, even with the batches from the dataloader. Ensure that you have the GPU runtime set first, then run the following. You can verify that GPU is available with the condition shown below before the loop.
```
if torch.cuda.is_available():
print("Using GPU.")
for example, label in tqdm(training_dataloader):
if torch.cuda.is_available():
example, label = example.cuda(), label.cuda()
```
## Afterword and Resources
As mentioned above, it's useful to look at the [documentation](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) for `torch.utils.data.DataLoader`. Another way to do so within the notebook itself is to run the following within a cell of the notebook:
```
torch.utils.data.DataLoader?
```
There are many interesting things that you can do here, with the arguments allowed in the DataLoader. For example:
* You may be working with an image dataset large enough that you don't want to open all the images (e.g. using `PIL`) before feeding them through your model. In that case, you can lazily open them by passing in a `collate_fn` that opens the images before collating the examples of a batch, since `collate_fn` is only called for each iteration when iterating over the DataLoader, and not when the DataLoader is instantiated (a minimal sketch of this pattern is shown after this list).
* You may not want to `shuffle` the dataset, as it might incur unnecessary computation. This is especially true if you have a separate DataLoader for your validation dataset, in which case there's no need to shuffle, as it won't affect the predictions.
* If you have access to multiple CPUs on whatever machine you're working on, you can use `num_workers` to load batches ahead of time on the other CPUs, i.e. the other workers.
* If you're working with a GPU, one of the more expensive steps in the pipeline is moving data from CPU to GPU - this can be sped up by using `pin_memory`, which ensures that the same space in the GPU RAM is used for the data being transferred from the CPU.
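To make the first point above concrete, here is a small sketch of a lazily-opening `collate_fn`. This is not from the guide itself: the `lazy_image_collate` name, the assumption that each dataset item is an `(image_path, integer_label)` pair, and the assumption that all images share the same size are purely illustrative.
```
from PIL import Image
import torch
import torchvision.transforms.functional as TF

# Hypothetical dataset whose __getitem__ returns (image_path, integer_label);
# the image file is only opened here, when a batch is actually formed.
def lazy_image_collate(batch):
    images, labels = [], []
    for path, label in batch:
        with Image.open(path) as img:
            images.append(TF.to_tensor(img.convert("RGB")))
        labels.append(label)
    # torch.stack assumes every image has the same height and width
    return torch.stack(images), torch.tensor(labels)

# loader = torch.utils.data.DataLoader(path_label_dataset, batch_size=32,
#                                      collate_fn=lazy_image_collate)
```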
# Introduction to Coba
Coba is a Python package built to study the performance of contextual bandit (CB) learners via empirical experimentation. The CB problem considers how a computational agent can learn to make better choices over time. This means that in the field of CB research, scientists study long sequences of situations in which a decision must be made. Each time a decision is made the effect of that decision is observed as some form of feedback. It is the job of CB algorithms, then, to integrate this feedback, along with knowledge of the situation in which the decision was made, in order to improve future decisions.
In order to create a research tool for the above problem that is both general and accessible Coba has been organized around four core concepts:
1. **Environments**: These are the sequences of situations in which decisions must be made. Individual environments should be semantically consistent.
2. **Learners**: These are the contextual bandit algorithms that Coba was built to study. Learners are able to make decisions and learn from feedback.
3. **Experiments**: This is the empirical process where we determine how well selected *learners* perform in various *environments* of interest.
4. **Results**: This is the outcome of an experiment. It contains all the data necessary to analyze what happened and how well the learners performed.
The general Coba workflow is: 1) create or select the environments to use for evaluation, 2) select the CB learners to evaluate, 3) create an experiment using the selected learners and environments, and 4) run the experiment and analyze the results. In what follows we will walk you through this workflow below and show you how to use the many learners and environments provided by Coba out of the box.
## Your First Coba Experiment
### Selecting Environments
Every experiment in Coba begins by selecting which environments to use for learner evaluation. Coba provides a high-level interface called **Environments** to easily create and modify environments. Using this interface we are going to create a basic linear environment built primarily to debug CB learners. We're also going to apply a small transform to it called **Binary**. This will transform our environment so that the action with the highest reward has a feedback value of 1 while all other actions have a feedback of 0 in each interaction. This will be useful for interpretation later on.
```
from coba.environments import Environments
environments = Environments.from_linear_synthetic(500)
```
### Selecting Learners
Next an experiment needs some learners to evaluate. Coba comes with a number of existing learners. We're going to pick three:
1. **RandomLearner**: This learner randomly selects actions. It is useful as a comparison to make sure learners are actually "learning".
2. **EpsilonBanditLearner**: This learner follows an epsilon greedy policy when selecting actions and ignores context features.
3. **VowpalLearner**: This learner is a wrapper around the Vowpal Wabbit ML package that implements several contextual bandit algorithms.
```
from coba.learners import RandomLearner, EpsilonBanditLearner, VowpalEpsilonLearner
```
### Running the Experiment
Now that we've selected our environments and learners we are ready to run our experiment. Experiments in Coba are orchestrated by the Experiment class. This class takes care of all the hard work of actually evaluating our learners against our environments. Creating and running our first experiment looks like this:
```
from coba.environments import Environments
from coba.learners import RandomLearner, EpsilonBanditLearner, VowpalEpsilonLearner
from coba.experiments import Experiment
environments = Environments.from_linear_synthetic(2000)
learners = [ RandomLearner(), EpsilonBanditLearner(0.1), VowpalEpsilonLearner() ]
Experiment(environments, learners).evaluate().plot_learners()
```
And just like that we've run our first experiment in Coba.
Taking a quick moment to explain the plot, the X-axis indicates how many times our learners have interacted with the simulation while the Y-axis indicates the average reward received since the beginning of the Experiment. The legend below the plot shows each learner and their hyperparameters.
Of course, running a single experiment often leads to more questions. Where is the **EpsilonBanditLearner**? In this case it is immediately beneath **RandomLearner**. There are a few reasons why this happened, so let's see if we can figure it out by running a second experiment.
## Your Second Coba Experiment
As we mentioned above bandit learners don't consider context or action features when choosing actions. For our second experiment then we're going to modify the **LinearSyntheticSimulation** to see what parameters are causing this strange behavior. Look at the new code and notice the new parameter `n_action_features=0`.
```
from coba.environments import Environments
from coba.learners import RandomLearner, EpsilonBanditLearner, VowpalEpsilonLearner
from coba.experiments import Experiment
environments = Environments.from_linear_synthetic(2000,n_action_features=0).binary()
learners = [ RandomLearner(), EpsilonBanditLearner(0.1), VowpalEpsilonLearner() ]
Experiment(environments, learners).evaluate().plot_learners()
```
Look at that! **EpsilonBanditLearner** and **RandomLearner** are no longer identical. So what happened?
When we set `n_action_features=0` this changed the **LinearSyntheticSimulation** so that its actions no longer had "features". To understand what this means it helps to consider an example where we want to recommend a movie. Our actions in this case would be movies and each movie may have certain features that help us decide what to recommend, like its genre or critic rating. When our actions don't have features the environment is more like recommending an activity, such as reading, watching a movie, or going for a walk, than recommending a specific movie. That is, each action is distinguishable without needing features to describe it. Why do you think this changed the performance of **EpsilonBanditLearner**?
Ok, time for one final experiment. Let's see if we can get **EpsilonBanditLearner** performing competitively.
## One Final Experiment
For this final experiment we're going to make several changes to the **LinearSyntheticSimulation**.
```
from coba.environments import Environments
from coba.learners import RandomLearner, EpsilonBanditLearner, VowpalArgsLearner
from coba.experiments import Experiment
base_environment = Environments.from_linear_synthetic(
n_interactions=5000,
n_actions=10,
n_context_features=0,
n_action_features=0,
)
environments = base_environment.shuffle([1,2,3,4,5]).binary()
learners = [ RandomLearner(), EpsilonBanditLearner(0.1), VowpalArgsLearner() ]
result = Experiment(environments, learners).evaluate()
result.plot_learners(err='sd')
```
Wow, there was a lot more output that time. We'll come back to that in a second but first, look at that plot. Our underdog, **EpsilonBanditLearner** is finally hanging with the big boys. With context_features and action_features turned off we're now evaluating against a multi-armed bandit simulation. This is actually what bandit algorithms are designed for and sure enough we can see **EpsilonBanditLearner** performs well in this simulation.
What are those new lines, though, on the plot? Those weren't there before. Those are there because we specified `shuffle`. In fact, the reason there was so much more output with this experiment is because of `shuffle`. Applying shuffle to **Environments** makes `n` environments (where `n` is the length of the `shuffle` list). This means there is also `n` times more output. This can be seen if we print out environments.
```
Environments.from_linear_synthetic(
n_interactions=200,
n_actions=10,
n_context_features=0,
n_action_features=0,
).shuffle([1,2,3,4,5])
```
For each of these 5 environments the order of the interactions is shuffled according to the random seeds we provided (in this case 1, 2, 3, 4, and 5). The bars drawn on the plot are the standard deviation of our learners' performance across these shuffles, while the solid line is the average performance across the five shuffles. To get a sense of this visually we can create a new plot that will show the shuffles along with the average.
```
result.plot_learners(err='se', each=True)
```
This plot shows our learner's average performance (the solid blue line in the middle) superimposed on top of the learner's performance on each of our five shuffles. We can see now that performance can actually vary quite a bit depending on the order of the interactions. What about the **VowpalWabbit** learner when it is using epsilon greedy? (We should keep in mind that VW is a contextual bandit learner that we've already seen is a much more general learner than **EpsilonBanditLearner**, so it's not really a fair comparison.)
## Conclusion
We hope this brief introduction has gotten you excited about the possibilities of COBA. What would you like to do first with Coba? Do you have an algorithm that you're hoping to publish and would like to easily test against other learners? Do you have a data set that you'd like to build a simulation from to see which contextual bandit algorithm performs best on it? Or are you just trying to learn more about machine learning and are just looking for an easy way to test introductory algorithms while you learn? All the above are easy to do with Coba.
We also welcome code contributions for new features. Feel free to reach out to mr2an@virginia.edu for ideas about features that Coba could benefit from. Coba is able to do a lot more than what we've shown here. It has its own environment syntax to allow you to define environments in a separate file for easy sharing. It is also able to import environments from a number of data formats. It is able to manage resource constraints to maximize throughput of long-running environments. And it can download remote data sets. All of this functionality has grown out of our own experimental needs and so we hope most of what we've built will be useful to others.
# Chapter 5 - Ensemble Methods
```
import sys
sys.path.append("../")
from utils import *
np.random.seed(7)
```
## Bias-Variance Trade-off
$\newcommand{\coloneqq}{\mathrel{\vcenter{:}}=}$
$\newcommand{\E}{\mathbb{E}}$
$\newcommand{\y}{\mathbf{y}}$
Let us compute the bias-variance trade-off graph for a problem of polynomial fitting. Recall, that the error decomposition for the MSE loss function is: $$ MSE_{\y}\left(\widehat{\y}\right)=\E\left[\left(\widehat{\y}-\y^*\right)^2\right] = Var\left(\widehat{\y}\right) + Bias^2\left(\widehat{\y}\right) $$
Where the bias and variances of estimators are defined as: $$ Bias\left(\widehat{\y}\right) \coloneqq \E\left[\widehat{\y}\right] - \y, \quad Var\left(\widehat{\y}\right)\coloneqq \E\left[\left(\widehat{\y}-\E\left[\widehat{\y}\right]\right)^2\right]$$
As the expectation $\E\left[\widehat{\y}\right]$ is taken over the selection of the training sets, we will first define the "ground truth" model and retrieve a set $\mathbf{X},\y$ from it. Then, we will repeatedly sample Gaussian noise $\varepsilon$ and fit a polynomial model over $\mathbf{X},\y+\varepsilon$. In the code below `y_` denotes the true $\y$ values and `y` the responses after adding the noise.
```
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
# Generate data according to a polynomial model of degree 4
model = lambda x: x**4 - 2*x**3 - .5*x**2 + 1
X = np.linspace(-1.6, 2, 60)
y = model(X).astype(np.float64)
X_train, X_test, y_train_, y_test_ = train_test_split(X, y, test_size=.5, random_state=13)
# The following functions recieve two matrices of the true values and the predictions
# where rows represent different runs and columns the different responses in the run
def variance(y_pred):
return np.mean(np.var(y_pred - np.mean(y_pred, axis=0), axis=0, ddof=1))
def bias(y_pred, y_true):
mean_y = y_pred.mean(axis=0)
return np.mean((mean_y - y_true)**2)
def error(y_pred, y):
return np.mean((y_pred - y)**2)
ks, repetitions = list(range(11)), 100
biases, variances, errors = np.zeros(len(ks)), np.zeros(len(ks)), np.zeros(len(ks))
for i, k in enumerate(ks):
# Add noise to train and test samples
y_train = y_train_[np.newaxis, :] + np.random.normal(0, 3, size=(repetitions, len(y_train_)))
y_test = y_test_ + np.random.normal(size=len(y_test_))
# Fit model multiple times (each time over a slightly different training sample) and predict over test set
y_preds = np.array([make_pipeline(PolynomialFeatures(k), LinearRegression())\
.fit(X_train.reshape(-1,1), y_train[j,:])\
.predict(X_test.reshape(-1,1))
for j in range(repetitions)])
biases[i], variances[i], errors[i] = bias(y_preds, y_test_), variance(y_preds), error(y_preds, y_test_)
fig = go.Figure([
go.Scatter(x=ks, y=biases, name=r"$Bias^2$"),
go.Scatter(x=ks, y=variances, name=r"$Variance$"),
go.Scatter(x=ks, y=biases+variances, name=r"$Bias^2+Variance$"),
go.Scatter(x=ks, y=errors, name=r"$Generalization\,\,Error$")],
layout=go.Layout(title=r"$\text{Generalization Error Decomposition - Bias-Variance of Polynomial Fitting}$",
xaxis=dict(title=r"$\text{Degree of Fitted Polymonial}$"),
width=800, height=500))
fig.write_image(f"../figures/bias_variance_poly.png")
fig.show()
```
## Committee Decisions
Let $X_1,\ldots,X_T\overset{iid}{\sim}Ber\left(p\right)$ taking values in $\left\{\pm1\right\}$, with the probability of each being correct being $p>0.5$. We can bound the probability of the committee being correct by: $$\mathbb{P}\left(\sum X_i > 0\right) \geq 1-\exp\left(-\frac{T}{2p}\left(p-\frac{1}{2}\right)^2\right)$$
Let us show this bounding below empirically by sampling increasing amount of such Bernoulli random variables, and to do so for different values of $p$.
```
bound = np.vectorize(lambda p, T: 1-np.exp(-(T/(2*p))*(p-.5)**2))
ps = np.concatenate([[.5001], np.linspace(.55, 1, 14)])
Ts = [1,5,10,15,20,25,50,75,100,125,150,175,200,250,300,400,500,600]
frames = []
for p in ps:
theoretical = bound(p,Ts)
empirical = np.array([[np.sum(np.random.choice([1, -1], T, p=[p, 1-p])) > 0 for _ in range(100)] for T in Ts])
frames.append(go.Frame(data=[go.Scatter(x=Ts, y=theoretical, mode="markers+lines", name="Theoretical Bound",
line=dict(color="grey", dash='dash')),
go.Scatter(x=Ts, y=empirical.mean(axis=1),
error_y = dict(type="data", array=empirical.var(axis=1)),
mode="markers+lines", marker_color="black", name="Empirical Probability")],
layout=go.Layout(
title_text=r"$\text{{Committee Correctness Probability As Function of }}\
T\text{{: }}p={0}$".format(round(p,3)),
xaxis=dict(title=r"$T \text{ - Committee Size}$"),
yaxis=dict(title=r"$\text{Probability of Being Correct}$", range=[0.0001,1.01]))))
fig = go.Figure(data=frames[0]["data"],
frames=frames[1:],
layout=go.Layout(
title=frames[0]["layout"]["title"],
xaxis=frames[0]["layout"]["xaxis"],
yaxis=frames[0]["layout"]["yaxis"],
updatemenus=[dict(type="buttons", buttons=[AnimationButtons.play(frame_duration=1000),
AnimationButtons.pause()])] ))
animation_to_gif(fig, "../figures/committee_decision_correctness.gif", 700, width=600, height=450)
fig.show()
```
In this case of uncorrelated committee members, we have shown that the variance of the committee decision (the average vote $\frac{1}{T}\sum X_i$) is: $$ Var\left(\frac{1}{T}\sum X_i\right) = \frac{4}{T}p\left(1-p\right)$$
Let us simulate such a scenario and see what is the empirical variance we achieve
```
ps = np.concatenate([[.5001], np.linspace(.55, 1, 10)])
Ts = [1,5,10,15,20,25,50,75,100,125,150,175,200,250,300,400,500,600]
results = np.array([np.var(np.random.binomial(Ts, p, (10000, len(Ts))) >= (np.array(Ts)/2), axis=0, ddof=1) for p in ps])
df = pd.DataFrame(results, columns=Ts, index=ps)
fig = go.Figure(go.Heatmap(x=df.columns.tolist(), y=df.index.tolist(), z=df.values.tolist(), colorscale="amp"),
layout=go.Layout(title=r"$\text{Variance of Committee Decision - Independent Members}$",
xaxis=dict(title=r"$T\text{ - Committee Size}$", type="category"),
yaxis=dict(title=r"$p\text{ - Member Correctness Probability}$"),
width=800, height=500))
fig.write_image("../figures/uncorrelated_committee_decision.png")
fig.show()
```
For a set of correlated random variables, with correlation coefficient $\rho$ and variance $\sigma^2$, the variance of the committee's decision (again the average vote) is: $$ Var\left(\frac{1}{T}\sum X_i\right) = \rho \sigma^2 + \frac{1}{T}\left(1-\rho\right)\sigma^2 $$
Let us set $\sigma^2$ and investigate the relation between $\rho$ and $T$.
```
sigma = round((lambda p: p*(1-p))(.6), 3)
repeats = 10000
rho = np.linspace(0,1, 10)
Ts = np.array([1,5,10,15,20,25,50,75,100,125,150,175,200,250,300,400,500,600])
variances = np.zeros((len(rho), len(Ts)))
for i, r in enumerate(rho):
# Perform `repetitions` times T Bernoulli experiments
decisions = np.random.binomial(1, sigma, size=(repeats, max(Ts)))
change = np.c_[np.zeros(decisions.shape[0]), np.random.uniform(size=(repeats, max(Ts)-1)) <= r]
correlated_decisions = np.ma.array(decisions, mask=change).filled(fill_value=decisions[:,0][:, None])
correlated_decisions[correlated_decisions == 0] = -1
variances[i,:] = np.var(np.cumsum(correlated_decisions, axis=1) >= 0, axis=0)[Ts-1]
df = pd.DataFrame(variances, columns=Ts, index=rho)
fig = go.Figure(go.Heatmap(x=df.columns.tolist(), y=df.index.tolist(), z=df.values.tolist(), colorscale="amp"),
layout=go.Layout(title=rf"$\text{{Variance of Committee Decision - Correlated Committee Members - Member Decision Variance }}\sigma^2 = {sigma}$",
xaxis=dict(title=r"$T\text{ - Committee Size}$", type="category"),
yaxis=dict(title=r"$\rho\text{ - Correlation Between Members}$"),
width=500, height=300))
fig.write_image("../figures/correlated_committee_decision.png")
fig.show()
```
## Bootstrapping
### Empirical CDF
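As a brief reminder (our own note rather than part of the original chapter), the empirical CDF of samples $X_1,\ldots,X_m$ is $$\hat{F}_m\left(x\right) = \frac{1}{m}\sum_{i=1}^{m}\mathbb{1}\left[X_i \leq x\right]$$ and this step function is exactly what `statsmodels`' `ECDF` computes in the code below.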
```
from statsmodels.distributions.empirical_distribution import ECDF
from scipy.stats import norm
data = np.random.normal(size=10000)
frames = []
for m in [5,10, 15, 20, 25, 50, 75, 100, 150, 200, 250, 500, 750, 1000,1500, 2000, 2500, 5000, 7500, 10000]:
ecdf = ECDF(data[:m])
frames.append(go.Frame(
data = [
go.Scatter(x=data[:m], y=[-.1]*m, mode="markers", marker=dict(size=5, color=norm.pdf(data[:m])), name="Samples"),
go.Scatter(x=ecdf.x, y=ecdf.y, marker_color="black", name="Empirical CDF"),
go.Scatter(x=np.linspace(-3,3,100), y=norm.cdf(np.linspace(-3,3,100), 0, 1), mode="lines",
line=dict(color="grey", dash='dash'), name="Theoretical CDF")],
layout = go.Layout(title=rf"$\text{{Empirical CDF of }}m={m}\text{{ Samples Drawn From }}\mathcal{{N}}\left(0,1\right)$")
))
fig = go.Figure(data = frames[0].data, frames=frames[1:],
layout=go.Layout(title=frames[0].layout.title,
updatemenus=[dict(type="buttons", buttons=[AnimationButtons.play(frame_duration=1000),
AnimationButtons.pause()])]))
animation_to_gif(fig, "../figures/empirical_cdf.gif", 700, width=600, height=450)
fig.show()
```
## AdaBoost
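As a quick orientation for the animation below (our own summary using the textbook form of the update, rather than the exact SAMME/SAMME.R variant that `AdaBoostClassifier` implements): at iteration $t$ AdaBoost fits a weak learner $h_t$ to the weighted sample, computes its weighted error $\epsilon_t$, and updates $$\alpha_t=\frac{1}{2}\ln\frac{1-\epsilon_t}{\epsilon_t}, \qquad w_i \leftarrow \frac{w_i\exp\left(-\alpha_t y_i h_t\left(x_i\right)\right)}{Z_t}$$ so misclassified samples receive larger weights. This growth of the weights of hard samples is what the sample-weight scatter in the animation visualizes.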
```
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
class StagedAdaBoostClassifier(AdaBoostClassifier):
def __init__(self, **kwargs):
        super().__init__(**kwargs)
self.sample_weights = []
def _boost(self, iboost, X, y, sample_weight, random_state):
self.sample_weights.append(sample_weight.copy())
# self.res_list.append(super()._boost(iboost, X, y, sample_weight, random_state))
# return self.res_list[-1]
return super()._boost(iboost, X, y, sample_weight, random_state)
def _iteration_callback(self, iboost, X, y, sample_weight,
estimator_weight = None, estimator_error = None):
self.sample_weights.append(sample_weight.copy())
from sklearn.datasets import make_gaussian_quantiles
# Construct dataset of two sets of Gaussian quantiles
X1, y1 = make_gaussian_quantiles(cov=2., n_samples=50, n_features=2, n_classes=2, random_state=1)
X2, y2 = make_gaussian_quantiles(mean=(3, 3), cov=1.5, n_samples=50, n_features=2, n_classes=2, random_state=1)
X, y = np.concatenate((X1, X2)), np.concatenate((y1, - y2 + 1))
# Form grid of points to use for plotting decision boundaries
lims = np.array([X.min(axis=0), X.max(axis=0)]).T + np.array([-.2, .2])
xx, yy = list(map(np.ravel, np.meshgrid(np.arange(*lims[0], .2), np.arange(*lims[1], .2))))
# Fit AdaBoost classifier over training set
model = StagedAdaBoostClassifier().fit(X, y)
# Retrieve model train error at each iteration of fitting
staged_scores = list(model.staged_score(X, y))
# Predict labels of grid points at each iteration of fitting
staged_predictions = np.array(list(model.staged_predict(np.vstack([xx, yy]).T)))
# Create animation frames
frames = []
for i in range(len(staged_predictions)):
frames.append(go.Frame(
data=[
# Scatter of sample weights
go.Scatter(x=X[:,0], y= X[:,1], mode='markers', showlegend=False, marker=dict(color=y, colorscale=class_colors(2),
size=np.maximum(230*model.sample_weights[i]+1, np.ones(len(model.sample_weights[i]))*5)),
xaxis="x", yaxis="y"),
# Staged decision surface
go.Scatter(x=xx, y=yy, marker=dict(symbol = "square", colorscale=custom, color=staged_predictions[i,:]),
mode='markers', opacity = 0.4, showlegend=False, xaxis="x2", yaxis="y2"),
# Scatter of train samples with true class
go.Scatter(x=X[:,0], y=X[:,1], mode='markers', showlegend=False, xaxis="x2", yaxis="y2",
marker=dict(color=y, colorscale=class_colors(2), symbol=class_symbols[y])),
# Scatter of staged score
go.Scatter(x=list(range(i)), y=staged_scores[:i], mode='lines+markers', showlegend=False, marker_color="black",
xaxis="x3", yaxis="y3")
],
layout = go.Layout(title = rf"$\text{{AdaBoost Training - Iteration }}{i+1}/{len(staged_predictions)}$)"),
traces=[0, 1, 2, 3]))
fig = make_subplots(rows=2, cols=2, row_heights=[350, 200],
subplot_titles=(r"$\text{Sample Weights}$", r"$\text{Decisions Boundaries}$",
r"$\text{Ensemble Train Accuracy}$"),
specs=[[{}, {}], [{"colspan": 2}, None]])\
.add_traces(data=frames[0].data, rows=[1,1,1,2], cols=[1,2,2,1])\
.update(frames = frames)\
.update_layout(title=frames[0].layout.title,
updatemenus = [dict(type="buttons", buttons=[AnimationButtons.play(), AnimationButtons.pause()])],
width=600, height=550, margin=dict(t=100))\
.update_yaxes(range=[min(staged_scores)-.1, 1.1], autorange=False, row=2, col=1)\
.update_xaxes(range=[0, len(frames)], autorange=False, row=2, col=1)
animation_to_gif(fig, "../figures/adaboost.gif", 1000, width=600, height=550)
fig.show()
```
```
import requests
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import datetime as dt
import bar_chart_race as bcr
import warnings
import numpy as np
%matplotlib inline
scotland_by_health_board_url = "https://www.gov.scot/binaries/content/documents/govscot/publications/statistics/2020/04/coronavirus-covid-19-trends-in-daily-data/documents/covid-19-data-by-nhs-board/covid-19-data-by-nhs-board/govscot%3Adocument/COVID-19%2Bdaily%2Bdata%2B-%2Bby%2BNHS%2BBoard%2B-%2B6%2BSeptember%2B2020.xlsx"
#publicly available url of daily updated covid numbers by health board area.
def get_xlsx_dict(url, xlsx_name):
resp = requests.get(url)
output = open(xlsx_name, 'wb')
output.write(resp.content)
output.close()
dict = pd.read_excel(xlsx_name, sheet_name=None)
return dict
#takes the url and extracts the xlsx from it and returns the pandas dictionary of sheets
dict_bhb = get_xlsx_dict(scotland_by_health_board_url, '/Users/Ben/Documents/Coding/GitHub/ScotlandBarRace/covid_bhb.xlsx')
#Dictionary of covid numbers by health board
def covid_fmt(df):
df.drop([0], axis=0).reset_index(drop = True)
row_start = 0
for x in range(3):
if df.iloc[x,1] == 'NHS Ayrshire & Arran':
df.columns = df.loc[x]
row_start = x+1
df = df.iloc[row_start:,:16]
#Columns Headers are moved from a row to actual columns.
#the columns are then limited to exclude both the problem row and also 2 columns full of nans
df = df.replace('*', '0')
df.head()
#'*' denotes near zero so reasonable replacement of 0 is used.
df = df.astype({x:float for x in df.columns[1:]})
# converts all numbers except date column into float
df = df.set_index(df.iloc[:,0])
# sets date columns as index as required for bcr format
df = df.drop(df.columns[0], axis = 1)
# drops now reduntant date column
return df
#Function that formats the selected sheet and returns a Bar Chart Race GIF
df_tot_cases = dict_bhb['Table 1 - Cumulative cases']
df_icu_cases = dict_bhb['Table 2 - ICU patients']
df_hosp_cases = dict_bhb['Table 3 - Hospital patients']
#dataframe for first three sheets of covid numbers spreadsheet
df = covid_fmt(df_hosp_cases)
df_current_month = df.iloc[:30,:].copy()
df.describe()
fig, axes = plt.subplots(8,2, figsize=(10,15), sharey=True, sharex=True)
axe = axes.ravel()
for i, c in enumerate(df.columns):
df_current_month[c].plot(ax=axe[i], legend = c)
#bcr.bar_chart_race(df = df,filename ='')
warnings.filterwarnings('ignore')
bcr.bar_chart_race(df = df)
```
```
#hide
from utils import *
```
# A language model from scratch
## The data
```
from fastai2.text.all import *
path = untar_data(URLs.HUMAN_NUMBERS)
#hide
Path.BASE_PATH = path
path.ls()
lines = L()
with open(path/'train.txt') as f: lines += L(*f.readlines())
with open(path/'valid.txt') as f: lines += L(*f.readlines())
lines
text = ' . '.join([l.strip() for l in lines])
text[:100]
tokens = text.split(' ')
tokens[:10]
vocab = L(*tokens).unique()
vocab
word2idx = {w:i for i,w in enumerate(vocab)}
nums = L(word2idx[i] for i in tokens)
nums
```
## Our first language model from scratch
```
L((tokens[i:i+3], tokens[i+3]) for i in range(0,len(tokens)-4,3))
seqs = L((tensor(nums[i:i+3]), nums[i+3]) for i in range(0,len(nums)-4,3))
seqs
bs = 64
cut = int(len(seqs) * 0.8)
dls = DataLoaders.from_dsets(seqs[:cut], seqs[cut:], bs=64, shuffle=False)
```
### Our language model in PyTorch
```
class LMModel1(Module):
def __init__(self, vocab_sz, n_hidden):
self.i_h = nn.Embedding(vocab_sz, n_hidden)
self.h_h = nn.Linear(n_hidden, n_hidden)
self.h_o = nn.Linear(n_hidden,vocab_sz)
def forward(self, x):
h = F.relu(self.h_h(self.i_h(x[:,0])))
h = h + self.i_h(x[:,1])
h = F.relu(self.h_h(h))
h = h + self.i_h(x[:,2])
h = F.relu(self.h_h(h))
return self.h_o(h)
learn = Learner(dls, LMModel1(len(vocab), 64), loss_func=F.cross_entropy,
metrics=accuracy)
learn.fit_one_cycle(4, 1e-3)
n,counts = 0,torch.zeros(len(vocab))
for x,y in dls.valid:
n += y.shape[0]
for i in range_of(vocab): counts[i] += (y==i).long().sum()
idx = torch.argmax(counts)
idx, vocab[idx.item()], counts[idx].item()/n
```
### Our first recurrent neural network
```
class LMModel2(Module):
def __init__(self, vocab_sz, n_hidden):
self.i_h = nn.Embedding(vocab_sz, n_hidden)
self.h_h = nn.Linear(n_hidden, n_hidden)
self.h_o = nn.Linear(n_hidden,vocab_sz)
def forward(self, x):
h = 0
for i in range(3):
h = h + self.i_h(x[:,i])
h = F.relu(self.h_h(h))
return self.h_o(h)
learn = Learner(dls, LMModel2(len(vocab), 64), loss_func=F.cross_entropy,
metrics=accuracy)
learn.fit_one_cycle(4, 1e-3)
```
## Improving the RNN
### Maintaining the state of an RNN
```
class LMModel3(Module):
def __init__(self, vocab_sz, n_hidden):
self.i_h = nn.Embedding(vocab_sz, n_hidden)
self.h_h = nn.Linear(n_hidden, n_hidden)
self.h_o = nn.Linear(n_hidden,vocab_sz)
self.h = 0
def forward(self, x):
for i in range(3):
self.h = self.h + self.i_h(x[:,i])
self.h = F.relu(self.h_h(self.h))
out = self.h_o(self.h)
self.h = self.h.detach()
return out
def reset(self): self.h = 0
m = len(seqs)//bs
m,bs,len(seqs)
def group_chunks(ds, bs):
m = len(ds) // bs
new_ds = L()
for i in range(m): new_ds += L(ds[i + m*j] for j in range(bs))
return new_ds
cut = int(len(seqs) * 0.8)
dls = DataLoaders.from_dsets(
group_chunks(seqs[:cut], bs),
group_chunks(seqs[cut:], bs),
bs=bs, drop_last=True, shuffle=False)
learn = Learner(dls, LMModel3(len(vocab), 64), loss_func=F.cross_entropy,
metrics=accuracy, cbs=ModelReseter)
learn.fit_one_cycle(10, 3e-3)
```
### Creating more signal
```
sl = 16
seqs = L((tensor(nums[i:i+sl]), tensor(nums[i+1:i+sl+1]))
for i in range(0,len(nums)-sl-1,sl))
cut = int(len(seqs) * 0.8)
dls = DataLoaders.from_dsets(group_chunks(seqs[:cut], bs),
group_chunks(seqs[cut:], bs),
bs=bs, drop_last=True, shuffle=False)
[L(vocab[o] for o in s) for s in seqs[0]]
class LMModel4(Module):
def __init__(self, vocab_sz, n_hidden):
self.i_h = nn.Embedding(vocab_sz, n_hidden)
self.h_h = nn.Linear(n_hidden, n_hidden)
self.h_o = nn.Linear(n_hidden,vocab_sz)
self.h = 0
def forward(self, x):
outs = []
for i in range(sl):
self.h = self.h + self.i_h(x[:,i])
self.h = F.relu(self.h_h(self.h))
outs.append(self.h_o(self.h))
self.h = self.h.detach()
return torch.stack(outs, dim=1)
def reset(self): self.h = 0
def loss_func(inp, targ):
return F.cross_entropy(inp.view(-1, len(vocab)), targ.view(-1))
learn = Learner(dls, LMModel4(len(vocab), 64), loss_func=loss_func,
metrics=accuracy, cbs=ModelReseter)
learn.fit_one_cycle(15, 3e-3)
```
## Multilayer RNNs
## The model
```
class LMModel5(Module):
def __init__(self, vocab_sz, n_hidden, n_layers):
self.i_h = nn.Embedding(vocab_sz, n_hidden)
self.rnn = nn.RNN(n_hidden, n_hidden, n_layers, batch_first=True)
self.h_o = nn.Linear(n_hidden, vocab_sz)
self.h = torch.zeros(n_layers, bs, n_hidden)
def forward(self, x):
res,h = self.rnn(self.i_h(x), self.h)
self.h = h.detach()
return self.h_o(res)
def reset(self): self.h.zero_()
learn = Learner(dls, LMModel5(len(vocab), 64, 2),
loss_func=CrossEntropyLossFlat(),
metrics=accuracy, cbs=ModelReseter)
learn.fit_one_cycle(15, 3e-3)
```
### Exploding or disappearing activations
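This subsection has no code cell of its own. As a quick numeric illustration of the idea (the factors 1.01 and 0.99 below are arbitrary), repeatedly scaling by a number even slightly above or below 1 makes the result blow up or shrink towards zero:
```
# Repeatedly scaling by a factor just above or just below 1, 1000 times over:
print(1.01 ** 1000)   # roughly 2.1e4  : activations explode
print(0.99 ** 1000)   # roughly 4.3e-5 : activations vanish
```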
## LSTM
### Building an LSTM from scratch
```
class LSTMCell(Module):
def __init__(self, ni, nh):
self.forget_gate = nn.Linear(ni + nh, nh)
self.input_gate = nn.Linear(ni + nh, nh)
self.cell_gate = nn.Linear(ni + nh, nh)
self.output_gate = nn.Linear(ni + nh, nh)
def forward(self, input, state):
h,c = state
h = torch.stack([h, input], dim=1)
forget = torch.sigmoid(self.forget_gate(h))
c = c * forget
inp = torch.sigmoid(self.input_gate(h))
cell = torch.tanh(self.cell_gate(h))
c = c + inp * cell
out = torch.sigmoid(self.output_gate(h))
        h = out * torch.tanh(c)
return h, (h,c)
class LSTMCell(Module):
def __init__(self, ni, nh):
self.ih = nn.Linear(ni,4*nh)
self.hh = nn.Linear(nh,4*nh)
def forward(self, input, state):
h,c = state
#One big multiplication for all the gates is better than 4 smaller ones
gates = (self.ih(input) + self.hh(h)).chunk(4, 1)
ingate,forgetgate,outgate = map(torch.sigmoid, gates[:3])
cellgate = gates[3].tanh()
c = (forgetgate*c) + (ingate*cellgate)
h = outgate * c.tanh()
return h, (h,c)
t = torch.arange(0,10); t
t.chunk(2)
```
### Training a language model using LSTMs
```
class LMModel6(Module):
def __init__(self, vocab_sz, n_hidden, n_layers):
self.i_h = nn.Embedding(vocab_sz, n_hidden)
self.rnn = nn.LSTM(n_hidden, n_hidden, n_layers, batch_first=True)
self.h_o = nn.Linear(n_hidden, vocab_sz)
self.h = [torch.zeros(2, bs, n_hidden) for _ in range(n_layers)]
def forward(self, x):
res,h = self.rnn(self.i_h(x), self.h)
self.h = [h_.detach() for h_ in h]
return self.h_o(res)
def reset(self):
for h in self.h: h.zero_()
learn = Learner(dls, LMModel6(len(vocab), 64, 2),
loss_func=CrossEntropyLossFlat(),
metrics=accuracy, cbs=ModelReseter)
learn.fit_one_cycle(15, 1e-2)
```
## Regularizing an LSTM
### Dropout
```
class Dropout(Module):
def __init__(self, p): self.p = p
def forward(self, x):
if not self.training: return x
        mask = x.new(*x.shape).bernoulli_(1-self.p)
        return x * mask.div_(1-self.p)
```
### AR and TAR regularization
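There is no code cell under this heading, so here is a minimal sketch of what the two penalties compute, mirroring the `alpha` and `beta` arguments passed to `RNNRegularizer` below; it is an illustration rather than fastai's exact implementation. `out` is the dropped-out activations and `raw` the raw LSTM activations returned by `LMModel7`:
```
# Illustrative AR and TAR penalties added to the loss (sketch, not fastai's exact code).
def ar_tar(raw, out, alpha=2., beta=1.):
    ar  = alpha * out.float().pow(2).mean()                         # AR: penalize large activations
    tar = beta * (raw[:, 1:] - raw[:, :-1]).float().pow(2).mean()   # TAR: penalize big jumps between timesteps
    return ar + tar
```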
### Training a weight-tied regularized LSTM
```
class LMModel7(Module):
def __init__(self, vocab_sz, n_hidden, n_layers, p):
self.i_h = nn.Embedding(vocab_sz, n_hidden)
self.rnn = nn.LSTM(n_hidden, n_hidden, n_layers, batch_first=True)
self.drop = nn.Dropout(p)
self.h_o = nn.Linear(n_hidden, vocab_sz)
self.h_o.weight = self.i_h.weight
self.h = [torch.zeros(2, bs, n_hidden) for _ in range(n_layers)]
def forward(self, x):
raw,h = self.rnn(self.i_h(x), self.h)
out = self.drop(raw)
self.h = [h_.detach() for h_ in h]
return self.h_o(out),raw,out
def reset(self):
for h in self.h: h.zero_()
learn = Learner(dls, LMModel7(len(vocab), 64, 2, 0.5),
loss_func=CrossEntropyLossFlat(), metrics=accuracy,
cbs=[ModelReseter, RNNRegularizer(alpha=2, beta=1)])
learn = TextLearner(dls, LMModel7(len(vocab), 64, 2, 0.4),
loss_func=CrossEntropyLossFlat(), metrics=accuracy)
learn.fit_one_cycle(15, 1e-2, wd=0.1)
```
## Conclusion
## Questionnaire
1. If the dataset for your project is so big and complicated that working with it takes a significant amount of time, what should you do?
1. Why do we concatenate the documents in our dataset before creating a language model?
1. To use a standard fully connected network to predict the fourth word given the previous three words, what two tweaks do we need to make?
1. How can we share a weight matrix across multiple layers in PyTorch?
1. Write a module which predicts the third word given the previous two words of a sentence, without peeking.
1. What is a recurrent neural network?
1. What is hidden state?
1. What is the equivalent of hidden state in `LMModel1`?
1. To maintain the state in an RNN, why is it important to pass the text to the model in order?
1. What is an unrolled representation of an RNN?
1. Why can maintaining the hidden state in an RNN lead to memory and performance problems? How do we fix this problem?
1. What is BPTT?
1. Write code to print out the first few batches of the validation set, including converting the token IDs back into English strings, as we showed for batches of IMDb data in <<chapter_nlp>>.
1. What does the `ModelReseter` callback do? Why do we need it?
1. What are the downsides of predicting just one output word for each three input words?
1. Why do we need a custom loss function for `LMModel4`?
1. Why is the training of `LMModel4` unstable?
1. In the unrolled representation, we can see that a recurrent neural network actually has many layers. So why do we need to stack RNNs to get better results?
1. Draw a representation of a stacked (multilayer) RNN.
1. Why should we get better results in an RNN if we call `detach` less often? Why might this not happen in practice with a simple RNN?
1. Why can a deep network result in very large or very small activations? Why does this matter?
1. In a computer's floating point representation of numbers, which numbers are the most precise?
1. Why do vanishing gradients prevent training?
1. Why does it help to have two hidden states in the LSTM architecture? What is the purpose of each one?
1. What are these two states called in an LSTM?
1. What is tanh, and how is it related to sigmoid?
1. What is the purpose of this code in `LSTMCell`?: `h = torch.stack([h, input], dim=1)`
1. What does `chunk` do in PyTorch?
1. Study the refactored version of `LSTMCell` carefully to ensure you understand how and why it does the same thing as the non-refactored version.
1. Why can we use a higher learning rate for `LMModel6`?
1. What are the three regularisation techniques used in an AWD-LSTM model?
1. What is dropout?
1. Why do we scale the weights with dropout? Is this applied during training, inference, or both?
1. What is the purpose of this line from `Dropout`?: `if not self.training: return x`
1. Experiment with `bernoulli_` to understand how it works.
1. How do you set your model in training mode in PyTorch? In evaluation mode?
1. Write the equation for activation regularization (in maths or code, as you prefer). How is it different to weight decay?
1. Write the equation for temporal activation regularization (in maths or code, as you prefer). Why wouldn't we use this for computer vision problems?
1. What is "weight tying" in a language model?
### Further research
1. In `LMModel2`, why can `forward` start with `h=0`? Why don't we need to say `h=torch.zeros(...)`?
1. Write the code for an LSTM from scratch (but you may refer to <<lstm>>).
1. Search on the Internet for the GRU architecture and implement it from scratch, and try training a model. See if you can get similar results to those we saw in this chapter. Compare it to the results of PyTorch's built-in GRU module.
1. Have a look at the source code for AWD-LSTM in fastai, and try to map each of the lines of code to the concepts shown in this chapter.
|
github_jupyter
|
#hide
from utils import *
from fastai2.text.all import *
path = untar_data(URLs.HUMAN_NUMBERS)
#hide
Path.BASE_PATH = path
path.ls()
lines = L()
with open(path/'train.txt') as f: lines += L(*f.readlines())
with open(path/'valid.txt') as f: lines += L(*f.readlines())
lines
text = ' . '.join([l.strip() for l in lines])
text[:100]
tokens = text.split(' ')
tokens[:10]
vocab = L(*tokens).unique()
vocab
word2idx = {w:i for i,w in enumerate(vocab)}
nums = L(word2idx[i] for i in tokens)
nums
L((tokens[i:i+3], tokens[i+3]) for i in range(0,len(tokens)-4,3))
seqs = L((tensor(nums[i:i+3]), nums[i+3]) for i in range(0,len(nums)-4,3))
seqs
bs = 64
cut = int(len(seqs) * 0.8)
dls = DataLoaders.from_dsets(seqs[:cut], seqs[cut:], bs=64, shuffle=False)
class LMModel1(Module):
def __init__(self, vocab_sz, n_hidden):
self.i_h = nn.Embedding(vocab_sz, n_hidden)
self.h_h = nn.Linear(n_hidden, n_hidden)
self.h_o = nn.Linear(n_hidden,vocab_sz)
def forward(self, x):
h = F.relu(self.h_h(self.i_h(x[:,0])))
h = h + self.i_h(x[:,1])
h = F.relu(self.h_h(h))
h = h + self.i_h(x[:,2])
h = F.relu(self.h_h(h))
return self.h_o(h)
learn = Learner(dls, LMModel1(len(vocab), 64), loss_func=F.cross_entropy,
metrics=accuracy)
learn.fit_one_cycle(4, 1e-3)
n,counts = 0,torch.zeros(len(vocab))
for x,y in dls.valid:
n += y.shape[0]
for i in range_of(vocab): counts[i] += (y==i).long().sum()
idx = torch.argmax(counts)
idx, vocab[idx.item()], counts[idx].item()/n
class LMModel2(Module):
def __init__(self, vocab_sz, n_hidden):
self.i_h = nn.Embedding(vocab_sz, n_hidden)
self.h_h = nn.Linear(n_hidden, n_hidden)
self.h_o = nn.Linear(n_hidden,vocab_sz)
def forward(self, x):
h = 0
for i in range(3):
h = h + self.i_h(x[:,i])
h = F.relu(self.h_h(h))
return self.h_o(h)
learn = Learner(dls, LMModel2(len(vocab), 64), loss_func=F.cross_entropy,
metrics=accuracy)
learn.fit_one_cycle(4, 1e-3)
class LMModel3(Module):
def __init__(self, vocab_sz, n_hidden):
self.i_h = nn.Embedding(vocab_sz, n_hidden)
self.h_h = nn.Linear(n_hidden, n_hidden)
self.h_o = nn.Linear(n_hidden,vocab_sz)
self.h = 0
def forward(self, x):
for i in range(3):
self.h = self.h + self.i_h(x[:,i])
self.h = F.relu(self.h_h(self.h))
out = self.h_o(self.h)
self.h = self.h.detach()
return out
def reset(self): self.h = 0
m = len(seqs)//bs
m,bs,len(seqs)
def group_chunks(ds, bs):
m = len(ds) // bs
new_ds = L()
for i in range(m): new_ds += L(ds[i + m*j] for j in range(bs))
return new_ds
cut = int(len(seqs) * 0.8)
dls = DataLoaders.from_dsets(
group_chunks(seqs[:cut], bs),
group_chunks(seqs[cut:], bs),
bs=bs, drop_last=True, shuffle=False)
learn = Learner(dls, LMModel3(len(vocab), 64), loss_func=F.cross_entropy,
metrics=accuracy, cbs=ModelReseter)
learn.fit_one_cycle(10, 3e-3)
sl = 16
seqs = L((tensor(nums[i:i+sl]), tensor(nums[i+1:i+sl+1]))
for i in range(0,len(nums)-sl-1,sl))
cut = int(len(seqs) * 0.8)
dls = DataLoaders.from_dsets(group_chunks(seqs[:cut], bs),
group_chunks(seqs[cut:], bs),
bs=bs, drop_last=True, shuffle=False)
[L(vocab[o] for o in s) for s in seqs[0]]
class LMModel4(Module):
def __init__(self, vocab_sz, n_hidden):
self.i_h = nn.Embedding(vocab_sz, n_hidden)
self.h_h = nn.Linear(n_hidden, n_hidden)
self.h_o = nn.Linear(n_hidden,vocab_sz)
self.h = 0
def forward(self, x):
outs = []
for i in range(sl):
self.h = self.h + self.i_h(x[:,i])
self.h = F.relu(self.h_h(self.h))
outs.append(self.h_o(self.h))
self.h = self.h.detach()
return torch.stack(outs, dim=1)
def reset(self): self.h = 0
def loss_func(inp, targ):
return F.cross_entropy(inp.view(-1, len(vocab)), targ.view(-1))
learn = Learner(dls, LMModel4(len(vocab), 64), loss_func=loss_func,
metrics=accuracy, cbs=ModelReseter)
learn.fit_one_cycle(15, 3e-3)
class LMModel5(Module):
def __init__(self, vocab_sz, n_hidden, n_layers):
self.i_h = nn.Embedding(vocab_sz, n_hidden)
self.rnn = nn.RNN(n_hidden, n_hidden, n_layers, batch_first=True)
self.h_o = nn.Linear(n_hidden, vocab_sz)
self.h = torch.zeros(n_layers, bs, n_hidden)
def forward(self, x):
res,h = self.rnn(self.i_h(x), self.h)
self.h = h.detach()
return self.h_o(res)
def reset(self): self.h.zero_()
learn = Learner(dls, LMModel5(len(vocab), 64, 2),
loss_func=CrossEntropyLossFlat(),
metrics=accuracy, cbs=ModelReseter)
learn.fit_one_cycle(15, 3e-3)
class LSTMCell(Module):
def __init__(self, ni, nh):
self.forget_gate = nn.Linear(ni + nh, nh)
self.input_gate = nn.Linear(ni + nh, nh)
self.cell_gate = nn.Linear(ni + nh, nh)
self.output_gate = nn.Linear(ni + nh, nh)
def forward(self, input, state):
h,c = state
h = torch.stack([h, input], dim=1)
forget = torch.sigmoid(self.forget_gate(h))
c = c * forget
inp = torch.sigmoid(self.input_gate(h))
cell = torch.tanh(self.cell_gate(h))
c = c + inp * cell
out = torch.sigmoid(self.output_gate(h))
        h = out * torch.tanh(c)
return h, (h,c)
class LSTMCell(Module):
def __init__(self, ni, nh):
self.ih = nn.Linear(ni,4*nh)
self.hh = nn.Linear(nh,4*nh)
def forward(self, input, state):
h,c = state
#One big multiplication for all the gates is better than 4 smaller ones
gates = (self.ih(input) + self.hh(h)).chunk(4, 1)
ingate,forgetgate,outgate = map(torch.sigmoid, gates[:3])
cellgate = gates[3].tanh()
c = (forgetgate*c) + (ingate*cellgate)
h = outgate * c.tanh()
return h, (h,c)
t = torch.arange(0,10); t
t.chunk(2)
class LMModel6(Module):
def __init__(self, vocab_sz, n_hidden, n_layers):
self.i_h = nn.Embedding(vocab_sz, n_hidden)
self.rnn = nn.LSTM(n_hidden, n_hidden, n_layers, batch_first=True)
self.h_o = nn.Linear(n_hidden, vocab_sz)
self.h = [torch.zeros(2, bs, n_hidden) for _ in range(n_layers)]
def forward(self, x):
res,h = self.rnn(self.i_h(x), self.h)
self.h = [h_.detach() for h_ in h]
return self.h_o(res)
def reset(self):
for h in self.h: h.zero_()
learn = Learner(dls, LMModel6(len(vocab), 64, 2),
loss_func=CrossEntropyLossFlat(),
metrics=accuracy, cbs=ModelReseter)
learn.fit_one_cycle(15, 1e-2)
class Dropout(Module):
def __init__(self, p): self.p = p
def forward(self, x):
if not self.training: return x
        mask = x.new(*x.shape).bernoulli_(1-self.p)
        return x * mask.div_(1-self.p)
class LMModel7(Module):
def __init__(self, vocab_sz, n_hidden, n_layers, p):
self.i_h = nn.Embedding(vocab_sz, n_hidden)
self.rnn = nn.LSTM(n_hidden, n_hidden, n_layers, batch_first=True)
self.drop = nn.Dropout(p)
self.h_o = nn.Linear(n_hidden, vocab_sz)
self.h_o.weight = self.i_h.weight
self.h = [torch.zeros(2, bs, n_hidden) for _ in range(n_layers)]
def forward(self, x):
raw,h = self.rnn(self.i_h(x), self.h)
out = self.drop(raw)
self.h = [h_.detach() for h_ in h]
return self.h_o(out),raw,out
def reset(self):
for h in self.h: h.zero_()
learn = Learner(dls, LMModel7(len(vocab), 64, 2, 0.5),
loss_func=CrossEntropyLossFlat(), metrics=accuracy,
cbs=[ModelReseter, RNNRegularizer(alpha=2, beta=1)])
learn = TextLearner(dls, LMModel7(len(vocab), 64, 2, 0.4),
loss_func=CrossEntropyLossFlat(), metrics=accuracy)
learn.fit_one_cycle(15, 1e-2, wd=0.1)
| 0.688468 | 0.730326 |
<img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/Digital Technical Engagement.png">
# IBM Cloud Pak for Data - Data Virtualization REST Services Class
### Where to find this notebook online
You can find a copy of this notebook at https://github.com/Db2-DTE-POC/CPDDVLAB.
### What this notebook does
This notebook is a reusable class library for interacting with the core RESTful services for Data Virtualization.
```
# Import the class libraries
import requests
import ssl
import json
from pprint import pprint
from requests import Response
import pandas as pd
import time
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
from IPython.display import IFrame
from IPython.display import display, HTML
from pandas import json_normalize
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
# Run the DVRESTAPI class
# Used to construct and reuse an Authentication Key
# Used to construct RESTAPI URLs and JSON payloads
class DVRESTAPI():
def __init__(self, url, verify = False, proxies=None, ):
self.url = url
self.proxies = proxies
self.verify = verify
def authenticate(self, api, userid, password):
credentials = {'username':userid, 'password':password}
r = requests.post(self.url+api+'/preauth/signin', verify=self.verify, json=credentials, proxies=self.proxies)
if (r.status_code == 200):
bearerToken = "Bearer " + r.cookies["ibm-private-cloud-session"]
print('Token Retrieved')
self.headers = {'Content-Type':"application/json", 'Accept':"application/json", 'Authorization': bearerToken, 'Cache-Control': "no-cache"}
else:
print ('Unable to authenticate, no bearer token obtained')
def printResponse(self, r, code):
if (r.status_code == code):
pprint(r.json())
else:
print (r.status_code)
print (r.content)
def getRequest(self, api, json=None):
return requests.get(self.url+api, verify = self.verify, headers=self.headers, proxies = self.proxies, json=json)
def postRequest(self, api, json=None):
return requests.post(self.url+api, verify = self.verify, headers=self.headers, proxies = self.proxies, json=json)
def deleteRequest(self, api, json=None):
return requests.delete(self.url+api, verify = self.verify, headers=self.headers, proxies = self.proxies, json=json)
def getStatusCode(self, response):
return (response.status_code)
def getJSON(self, response):
return (response.json())
def getVirtualizedTables(self):
return self.getRequest('/icp4data-databases/dv/cpd-instance/dvapiserver/v1/mydata/tables')
def getVirtualizedTablesDF(self):
r = self.getVirtualizedTables()
if (self.getStatusCode(r)==200):
json = self.getJSON(r)
df = pd.DataFrame(json_normalize(json['tables']))
return df
else:
print(self.getStatusCode(r))
def getVirtualizedViews(self):
return self.getRequest('/icp4data-databases/dv/cpd-instance/dvapiserver/v1/mydata/views')
def getVirtualizedViewsDF(self):
r = self.getVirtualizedViews()
if (self.getStatusCode(r)==200):
json = self.getJSON(r)
df = pd.DataFrame(json_normalize(json['views']))
return df
else:
print(self.getStatusCode(r))
def grantPrivledgeToRole(self, objectName, objectSchema, roleToGrant):
json = {"objectName":objectName,"objectSchema":objectSchema,"roleToGrant":roleToGrant}
return self.postRequest('/icp4data-databases/dv/cpd-instance/dvapiserver/v1/privileges/roles',json);
def getRole(self, role):
return self.getRequest('/icp4data-databases/dv/cpd-instance/dvapiserver/v1/privileges/objects/role/'+str(role));
def foldData(self, sourceName, sourceTableDef, sources ):
json = {"sourceName":sourceName,"sourceTableDef":sourceTableDef,"sources":sources}
return self.postRequest('/icp4data-databases/dv/cpd-instance/dvapiserver/v1/virtualize/tables', json);
def addUser(self, username, displayName, email, user_roles, password):
json = {"username":username,"displayName":displayName,"email":email,"user_roles":user_roles,"password":password}
return self.postRequest('/api/v1/usermgmt/v1/user', json);
def dropUser(self, username):
return self.deleteRequest('/api/v1/usermgmt/v1/user/'+str(username));
def getUsers(self):
return self.getRequest('/api/v1/usermgmt/v1/usermgmt/users');
def getUsersDF(self):
r = self.getUsers()
if (self.getStatusCode(r)==200):
json = self.getJSON(r)
df = pd.DataFrame(json_normalize(json))
return df
else:
print(self.getStatusCode(r));
def addUserToDV(self, display_name, role, usersDF):
userrow = (usersDF.loc[usersDF['displayName'] == display_name])
uid = userrow['uid'].values[0]
username = userrow['username'].values[0]
json = {"users":[{"uid":uid,"username":username,"display_name":display_name,"role":role}],"serviceInstanceID":"1635944153872816"}
return self.postRequest('/zen-data/v2/serviceInstance/users', json);
def dropUserFromDV(self, display_name, usersDF):
userrow = (usersDF.loc[usersDF['displayName'] == display_name])
uid = userrow['uid'].values[0]
json = {"users":[uid],"serviceInstanceID":"1635944153872816"}
return self.deleteRequest('/zen-data/v2/serviceInstance/users', json);
def deleteVirtualizedTable(self, table_schema, table_name, data_source_table_name):
payload = {"table_schema":table_schema,"table_name":table_name,"data_source_table_name":data_source_table_name}
        return self.deleteRequest('/icp4data-databases/dv/cpd-instance/dbapi/v4/federation', payload);
def deleteView(self, schema, view):
return self.deleteRequest('/icp4data-databases/dv/cpd-instance/dbapi/v4/federation/views/'+str(schema)+'/'+str(view))
def getDataSourcesAPI(self):
return self.getRequest('/icp4data-databases/dv/cpd-instance/dvapiserver/v1/datasource_nodes')
def getDataSources(self):
columns = ['cid','connection_id', 'dbname', 'srchostname', 'srcport','srctype','status','usr','uri']
dfTotal = pd.DataFrame(columns=columns)
r = self.getDataSourcesAPI()
if (self.getStatusCode(r)==200):
json = self.getJSON(r)
df = pd.DataFrame(json_normalize(json))
for index, row in df.iterrows():
if row['dscount']>'0':
dfTotal = pd.concat([dfTotal, pd.DataFrame(json_normalize(row['dataSources']))],ignore_index=True)
return(dfTotal[['srctype','srchostname', 'srcport', 'dbname', 'usr', 'status']])
else:
print(self.getStatusCode(r))
def getCacheDetails(self, cache):
r = self.getRequest('/icp4data-databases/dv/cpd-instance/dv-caching/api/v1/caches/'+str(cache))
if (self.getStatusCode(r)==200):
return databaseAPI.getJSON(r)
else:
print(self.getStatusCode(r))
print(json['message'])
def getCaches(self, type='Available'):
# type = 'Enabled', 'Disabled', 'Deleted', 'All'
r = self.getRequest('/icp4data-databases/dv/cpd-instance/dv-caching/api/v1/caches')
json = databaseAPI.getJSON(r)
if (self.getStatusCode(r)==200):
df = pd.DataFrame(json_normalize(json['caches']))
if (type == 'Available'):
return df[df["state"].isin(['Enabled','Disabled','Refreshing'])]
elif (type == 'Enabled'):
return df[df["state"] == 'Enabled']
elif (type == 'Disabled'):
return df[df["state"] == 'Disabled']
elif (type == 'Deleted'):
return df[df["state"] == 'Deleted']
elif (type == 'Refreshing'):
return df[df["state"] == 'Refreshing']
else:
print(self.getStatusCode(r))
print(json['message'])
def enableCache(self, cache):
r = self.postRequest('/icp4data-databases/dv/cpd-instance/dv-caching/api/v1/enable/'+str(cache));
json = databaseAPI.getJSON(r)
if (self.getStatusCode(r)==202):
print('Cache: ' + cache + " enabled.")
else:
print(self.getStatusCode(r))
print(json['message'])
def disableCache(self, cache):
r = self.postRequest('/icp4data-databases/dv/cpd-instance/dv-caching/api/v1/disable/'+str(cache));
json = databaseAPI.getJSON(r)
if (self.getStatusCode(r)==202):
print('Cache: ' + cache + " disabled.")
else:
print(self.getStatusCode(r))
print(json['message'])
def refreshCache(self, cache):
r = self.postRequest('/icp4data-databases/dv/cpd-instance/dv-caching/api/v1/refresh/'+str(cache));
json = databaseAPI.getJSON(r)
if (self.getStatusCode(r)==202):
print('Cache: ' + cache + " being refreshed. Check cache status.")
else:
print(self.getStatusCode(r))
print(json['message'])
from decimal import Decimal
class Timer():
def __init__(self):
self.totalTime = 0
self.time = 0
self.list = []
def wallTime(self, timing):
start = timing.find('Wall time: ') + 11
end = timing.find(' ms', start)
if end == -1:
endsec = timing.find(' s', start)
if endsec == -1:
endmin = timing.find('min', start)
minutes = Decimal(timing[start:endmin])
endsec = timing.find('s', start)
startsec = endmin+4
seconds = Decimal(timing[startsec:endsec])
return (minutes*60+seconds)*1000
else:
return Decimal(timing[start:endsec])*1000
else:
return Decimal(timing[start:end])
def timeTotal(self):
standardOutput = result.stdout
print(standardOutput)
self.time = self.wallTime(standardOutput)
self.list.append(self.time/1000)
self.totalTime = self.totalTime + self.time
print("Time: " + str(self.time/1000) + " s")
print("Total Time: " + str(self.totalTime/1000) + " s")
def getTotalTime(self):
return self.totalTime
def getLastTime(self):
return self.time
def getList(self):
return self.list
```
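A minimal usage sketch for the two classes above (the cluster URL, API prefix, and credentials below are placeholders; substitute your own values). Several cache helper methods refer to a global instance named `databaseAPI`, so that name is reused here:
```
# Placeholder URL, API prefix, and credentials: replace with values for your own cluster.
databaseAPI = DVRESTAPI('https://your-cpd-cluster.example.com', verify=False)
databaseAPI.authenticate('', 'admin', 'password')

# List virtualized tables, views, and registered data sources.
display(databaseAPI.getVirtualizedTablesDF())
display(databaseAPI.getVirtualizedViewsDF())
display(databaseAPI.getDataSources())

# The Timer class parses the 'Wall time: ...' text printed by %%time and returns milliseconds.
t = Timer()
print(t.wallTime('Wall time: 1.5 s'))  # prints 1500.0 (milliseconds)
```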
#### Credits: IBM 2021, Peter Kohlmann [kohlmann@ca.ibm.com]
|
github_jupyter
|
# Import the class libraries
import requests
import ssl
import json
from pprint import pprint
from requests import Response
import pandas as pd
import time
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
from IPython.display import IFrame
from IPython.display import display, HTML
from pandas import json_normalize
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
# Run the DVRESTAPI class
# Used to construct and reuse an Authentication Key
# Used to construct RESTAPI URLs and JSON payloads
class DVRESTAPI():
def __init__(self, url, verify = False, proxies=None, ):
self.url = url
self.proxies = proxies
self.verify = verify
def authenticate(self, api, userid, password):
credentials = {'username':userid, 'password':password}
r = requests.post(self.url+api+'/preauth/signin', verify=self.verify, json=credentials, proxies=self.proxies)
if (r.status_code == 200):
bearerToken = "Bearer " + r.cookies["ibm-private-cloud-session"]
print('Token Retrieved')
self.headers = {'Content-Type':"application/json", 'Accept':"application/json", 'Authorization': bearerToken, 'Cache-Control': "no-cache"}
else:
print ('Unable to authenticate, no bearer token obtained')
def printResponse(self, r, code):
if (r.status_code == code):
pprint(r.json())
else:
print (r.status_code)
print (r.content)
def getRequest(self, api, json=None):
return requests.get(self.url+api, verify = self.verify, headers=self.headers, proxies = self.proxies, json=json)
def postRequest(self, api, json=None):
return requests.post(self.url+api, verify = self.verify, headers=self.headers, proxies = self.proxies, json=json)
def deleteRequest(self, api, json=None):
return requests.delete(self.url+api, verify = self.verify, headers=self.headers, proxies = self.proxies, json=json)
def getStatusCode(self, response):
return (response.status_code)
def getJSON(self, response):
return (response.json())
def getVirtualizedTables(self):
return self.getRequest('/icp4data-databases/dv/cpd-instance/dvapiserver/v1/mydata/tables')
def getVirtualizedTablesDF(self):
r = self.getVirtualizedTables()
if (self.getStatusCode(r)==200):
json = self.getJSON(r)
df = pd.DataFrame(json_normalize(json['tables']))
return df
else:
print(self.getStatusCode(r))
def getVirtualizedViews(self):
return self.getRequest('/icp4data-databases/dv/cpd-instance/dvapiserver/v1/mydata/views')
def getVirtualizedViewsDF(self):
r = self.getVirtualizedViews()
if (self.getStatusCode(r)==200):
json = self.getJSON(r)
df = pd.DataFrame(json_normalize(json['views']))
return df
else:
print(self.getStatusCode(r))
def grantPrivledgeToRole(self, objectName, objectSchema, roleToGrant):
json = {"objectName":objectName,"objectSchema":objectSchema,"roleToGrant":roleToGrant}
return self.postRequest('/icp4data-databases/dv/cpd-instance/dvapiserver/v1/privileges/roles',json);
def getRole(self, role):
return self.getRequest('/icp4data-databases/dv/cpd-instance/dvapiserver/v1/privileges/objects/role/'+str(role));
def foldData(self, sourceName, sourceTableDef, sources ):
json = {"sourceName":sourceName,"sourceTableDef":sourceTableDef,"sources":sources}
return self.postRequest('/icp4data-databases/dv/cpd-instance/dvapiserver/v1/virtualize/tables', json);
def addUser(self, username, displayName, email, user_roles, password):
json = {"username":username,"displayName":displayName,"email":email,"user_roles":user_roles,"password":password}
return self.postRequest('/api/v1/usermgmt/v1/user', json);
def dropUser(self, username):
return self.deleteRequest('/api/v1/usermgmt/v1/user/'+str(username));
def getUsers(self):
return self.getRequest('/api/v1/usermgmt/v1/usermgmt/users');
def getUsersDF(self):
r = self.getUsers()
if (self.getStatusCode(r)==200):
json = self.getJSON(r)
df = pd.DataFrame(json_normalize(json))
return df
else:
print(self.getStatusCode(r));
def addUserToDV(self, display_name, role, usersDF):
userrow = (usersDF.loc[usersDF['displayName'] == display_name])
uid = userrow['uid'].values[0]
username = userrow['username'].values[0]
json = {"users":[{"uid":uid,"username":username,"display_name":display_name,"role":role}],"serviceInstanceID":"1635944153872816"}
return self.postRequest('/zen-data/v2/serviceInstance/users', json);
def dropUserFromDV(self, display_name, usersDF):
userrow = (usersDF.loc[usersDF['displayName'] == display_name])
uid = userrow['uid'].values[0]
json = {"users":[uid],"serviceInstanceID":"1635944153872816"}
return self.deleteRequest('/zen-data/v2/serviceInstance/users', json);
def deleteVirtualizedTable(self, table_schema, table_name, data_source_table_name):
payload = {"table_schema":table_schema,"table_name":table_name,"data_source_table_name":data_source_table_name}
        return self.deleteRequest('/icp4data-databases/dv/cpd-instance/dbapi/v4/federation', payload);
def deleteView(self, schema, view):
return self.deleteRequest('/icp4data-databases/dv/cpd-instance/dbapi/v4/federation/views/'+str(schema)+'/'+str(view))
def getDataSourcesAPI(self):
return self.getRequest('/icp4data-databases/dv/cpd-instance/dvapiserver/v1/datasource_nodes')
def getDataSources(self):
columns = ['cid','connection_id', 'dbname', 'srchostname', 'srcport','srctype','status','usr','uri']
dfTotal = pd.DataFrame(columns=columns)
r = self.getDataSourcesAPI()
if (self.getStatusCode(r)==200):
json = self.getJSON(r)
df = pd.DataFrame(json_normalize(json))
for index, row in df.iterrows():
if row['dscount']>'0':
dfTotal = pd.concat([dfTotal, pd.DataFrame(json_normalize(row['dataSources']))],ignore_index=True)
return(dfTotal[['srctype','srchostname', 'srcport', 'dbname', 'usr', 'status']])
else:
print(self.getStatusCode(r))
def getCacheDetails(self, cache):
r = self.getRequest('/icp4data-databases/dv/cpd-instance/dv-caching/api/v1/caches/'+str(cache))
if (self.getStatusCode(r)==200):
return databaseAPI.getJSON(r)
else:
print(self.getStatusCode(r))
print(json['message'])
def getCaches(self, type='Available'):
# type = 'Enabled', 'Disabled', 'Deleted', 'All'
r = self.getRequest('/icp4data-databases/dv/cpd-instance/dv-caching/api/v1/caches')
json = databaseAPI.getJSON(r)
if (self.getStatusCode(r)==200):
df = pd.DataFrame(json_normalize(json['caches']))
if (type == 'Available'):
return df[df["state"].isin(['Enabled','Disabled','Refreshing'])]
elif (type == 'Enabled'):
return df[df["state"] == 'Enabled']
elif (type == 'Disabled'):
return df[df["state"] == 'Disabled']
elif (type == 'Deleted'):
return df[df["state"] == 'Deleted']
elif (type == 'Refreshing'):
return df[df["state"] == 'Refreshing']
else:
print(self.getStatusCode(r))
print(json['message'])
def enableCache(self, cache):
r = self.postRequest('/icp4data-databases/dv/cpd-instance/dv-caching/api/v1/enable/'+str(cache));
json = databaseAPI.getJSON(r)
if (self.getStatusCode(r)==202):
print('Cache: ' + cache + " enabled.")
else:
print(self.getStatusCode(r))
print(json['message'])
def disableCache(self, cache):
r = self.postRequest('/icp4data-databases/dv/cpd-instance/dv-caching/api/v1/disable/'+str(cache));
json = databaseAPI.getJSON(r)
if (self.getStatusCode(r)==202):
print('Cache: ' + cache + " disabled.")
else:
print(self.getStatusCode(r))
print(json['message'])
def refreshCache(self, cache):
r = self.postRequest('/icp4data-databases/dv/cpd-instance/dv-caching/api/v1/refresh/'+str(cache));
json = databaseAPI.getJSON(r)
if (self.getStatusCode(r)==202):
print('Cache: ' + cache + " being refreshed. Check cache status.")
else:
print(self.getStatusCode(r))
print(json['message'])
from decimal import Decimal
class Timer():
def __init__(self):
self.totalTime = 0
self.time = 0
self.list = []
def wallTime(self, timing):
start = timing.find('Wall time: ') + 11
end = timing.find(' ms', start)
if end == -1:
endsec = timing.find(' s', start)
if endsec == -1:
endmin = timing.find('min', start)
minutes = Decimal(timing[start:endmin])
endsec = timing.find('s', start)
startsec = endmin+4
seconds = Decimal(timing[startsec:endsec])
return (minutes*60+seconds)*1000
else:
return Decimal(timing[start:endsec])*1000
else:
return Decimal(timing[start:end])
def timeTotal(self):
standardOutput = result.stdout
print(standardOutput)
self.time = self.wallTime(standardOutput)
self.list.append(self.time/1000)
self.totalTime = self.totalTime + self.time
print("Time: " + str(self.time/1000) + " s")
print("Total Time: " + str(self.totalTime/1000) + " s")
def getTotalTime(self):
return self.totalTime
def getLastTime(self):
return self.time
def getList(self):
return self.list
| 0.148819 | 0.510192 |

[](https://colab.research.google.com/github/JohnSnowLabs/nlu/blob/master/examples/webinars_conferences_etc/multi_lingual_webinar/0_liners_intro.ipynb)
# Setup Dependencies
You need **Java 8**, Spark NLP, and PySpark installed in your environment.
```
import os
! apt-get update -qq > /dev/null
# Install java
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
! pip install nlu pyspark==2.4.7
import nlu
```
# Quick overview of easy 1-liners with NLU
## Spellchecking, Sentiment Classification, Part of Speech, Named Entity Recognition, and other classifiers

# Spell Checking in 1 line

```
nlu.load('spell').predict('I also liek to life dangertus')
```
# Binary Sentiment classification in 1 Line

```
nlu.load('sentiment').predict('I love NLU and rainy days!')
```
# Part of Speech (POS) in 1 line

```
nlu.load('pos').predict('POS assigns each token in a sentence a grammatical label')
```
# Named Entity Recognition (NER) in 1 line

```
nlu.load('ner').predict("John Snow Labs congratulates the Amarican John Biden to winning the American election!", output_level='chunk')
nlu.load('ner').predict("John Snow Labs congratiulates John Biden to winning the American election!", output_level = 'document')
```
## Checkout other NER models
```
nlu.print_components(action='ner')
```

# Bertology Embeddings for Sentences and Tokens
```
nlu.load('bert').predict("Albert and Elmo are pretty good friends")
nlu.load('elmo').predict("Albert and Elmo are pretty good friends")
nlu.load('embed_sentence.bert').predict("Get me one embedding for this whole sentence")
```
# Checkout other Embedding Models
```
nlu.print_components(action='embed')
```
# There are 1000+ models in 200+ languages waiting to be discovered and put to good use!
## Checkout [the Modelshub](https://nlp.johnsnowlabs.com/models) and the [NLU Namespace](https://nlu.johnsnowlabs.com/docs/en/spellbook) for more models
|
github_jupyter
|
import os
! apt-get update -qq > /dev/null
# Install java
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
! pip install nlu pyspark==2.4.7
import nlu
nlu.load('spell').predict('I also liek to life dangertus')
nlu.load('sentiment').predict('I love NLU and rainy days!')
nlu.load('pos').predict('POS assigns each token in a sentence a grammatical label')
nlu.load('ner').predict("John Snow Labs congratulates the Amarican John Biden to winning the American election!", output_level='chunk')
nlu.load('ner').predict("John Snow Labs congratiulates John Biden to winning the American election!", output_level = 'document')
nlu.print_components(action='ner')
nlu.load('bert').predict("Albert and Elmo are pretty good friends")
nlu.load('elmo').predict("Albert and Elmo are pretty good friends")
nlu.load('embed_sentence.bert').predict("Get me one embedding for this whole sentence")
nlu.print_components(action='embed')
| 0.241489 | 0.864253 |
### Solution-1
In this problem we use the ColumnarStructure and boolean indexing to create a distance map of the HIV protease dimer. We will use C-beta atoms instead of C-alpha atoms.
```
from pyspark.sql import SparkSession
from mmtfPyspark.io import mmtfReader
from mmtfPyspark.utils import traverseStructureHierarchy, ColumnarStructure
from mmtfPyspark import structureViewer
import numpy as np
from scipy.spatial.distance import pdist, squareform
import matplotlib.pyplot as plt
```
#### Configure Spark
```
spark = SparkSession.builder.appName("Solution-1").getOrCreate()
```
### Download an example structure
Here we download an HIV protease structure with a bound ligand (Nelfinavir).
```
pdb = mmtfReader.download_full_mmtf_files(["1OHR"])
```
Structures are represented as keyword-value pairs (tuples):
* key: structure identifier (e.g., PDB ID)
* value: MmtfStructure (structure data)
In this case, we only have one structure, so we can use the first() method to extract the data.
```
structure = pdb.values().first()
```
## Create a columnar structure from an MMTF structure
Here we convert an MMTF structure to a columnar structure. By specifying the firstModel flag, we
only retrieve data for the first model (this structure has only one model, anyways).
### TODO-1: create a ColumnarStructure
```
arrays = ColumnarStructure(structure, firstModelOnly=True)
```
### Get atom coordinates as numpy arrays
### TODO-2: get coordinates
```
x = arrays.get_x_coords()
y = arrays.get_y_coords()
z = arrays.get_z_coords()
```
### Get entity types
Entity types can be used to distinguish polymer from non-polymer groups and select specific components, e.g., all protein groups. The following entity types are available:
* **Polymer groups**
* PRO: protein
* DNA: DNA
* RNA: RNA
* PSR: saccharide
* **Non-polymer groups**
* LGO: ligand organic
* LGI: ligand inorganic
    * SAC: saccharide
* WAT: water
```
entity_types = arrays.get_entity_types()
entity_types
```
### Get atom, group, and chain name arrays
```
atom_names = arrays.get_atom_names()
atom_names
group_names = arrays.get_group_names()
group_names
```
### Boolean array indexing
Boolean indexing is an efficient way to access selected elements from numpy arrays.
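As a quick standalone illustration (the values below are made up, not taken from the structure):
```
# Tiny illustration of boolean indexing with made-up values.
names = np.array(['CA', 'CB', 'O', 'CB'])
kinds = np.array(['PRO', 'PRO', 'WAT', 'PRO'])
idx = (kinds == 'PRO') & (names == 'CB')  # elementwise AND of two boolean arrays
print(idx)         # [False  True False  True]
print(names[idx])  # ['CB' 'CB']
```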
### TODO-3: create a boolean index to select:
* C-alpha atoms for glycine
* C-beta atoms for all other amino acids
This time, do the selection for the entire structure.
```
cb_idx = (entity_types == 'PRO') & \
( ((atom_names == 'CB') & (group_names != 'GLY')) | \
((atom_names == 'CA') & (group_names == 'GLY')) )
```
### TODO-4: Print the atom names for the selected atoms
```
atom_names[cb_idx]
```
Then, we apply this index to get the coordinates for the selected atoms
```
xc = x[cb_idx]
yc = y[cb_idx]
zc = z[cb_idx]
```
#### Combine separate x, y, and z arrays and swap axes
`[x0, x1, ..., xn],[y0, y1,...,yn],[z0, z1, ...,zn]`
to
`[x0, y0, z0],[x1, y1, z1], ..., [xn, yn, zn]`
```
coords = np.swapaxes(np.array([xc,yc,zc]), 0, 1)
```
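As an aside, the same (n_atoms, 3) layout can also be built with `np.stack`; the two approaches give identical arrays:
```
# Aside: an equivalent way to build the (n_atoms, 3) coordinate array.
coords_alt = np.stack([xc, yc, zc], axis=1)
print(np.array_equal(coords, coords_alt))  # True
```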
#### Calculate distance map for the protein dimer
```
dist_matrix = squareform(pdist(coords), 'euclidean')
plt.pcolor(dist_matrix, cmap='RdBu')
plt.title('C-beta distance map')
plt.gca().set_aspect('equal')
plt.colorbar();
```
#### Calculate distance map for the protein dimer
Only consider distance <= 9. We use boolean indexing to set all distance > 9 to zero.
```
dist_matrix[dist_matrix > 9] = 0
plt.pcolor(dist_matrix, cmap='Greys')
plt.title('C-beta distance map')
plt.gca().set_aspect('equal')
plt.colorbar();
spark.stop()
```
|
github_jupyter
|
from pyspark.sql import SparkSession
from mmtfPyspark.io import mmtfReader
from mmtfPyspark.utils import traverseStructureHierarchy, ColumnarStructure
from mmtfPyspark import structureViewer
import numpy as np
from scipy.spatial.distance import pdist, squareform
import matplotlib.pyplot as plt
spark = SparkSession.builder.appName("Solution-1").getOrCreate()
pdb = mmtfReader.download_full_mmtf_files(["1OHR"])
structure = pdb.values().first()
arrays = ColumnarStructure(structure, firstModelOnly=True)
x = arrays.get_x_coords()
y = arrays.get_y_coords()
z = arrays.get_z_coords()
entity_types = arrays.get_entity_types()
entity_types
atom_names = arrays.get_atom_names()
atom_names
group_names = arrays.get_group_names()
group_names
cb_idx = (entity_types == 'PRO') & \
( ((atom_names == 'CB') & (group_names != 'GLY')) | \
((atom_names == 'CA') & (group_names == 'GLY')) )
atom_names[cb_idx]
xc = x[cb_idx]
yc = y[cb_idx]
zc = z[cb_idx]
coords = np.swapaxes(np.array([xc,yc,zc]), 0, 1)
dist_matrix = squareform(pdist(coords), 'euclidean')
plt.pcolor(dist_matrix, cmap='RdBu')
plt.title('C-beta distance map')
plt.gca().set_aspect('equal')
plt.colorbar();
dist_matrix[dist_matrix > 9] = 0
plt.pcolor(dist_matrix, cmap='Greys')
plt.title('C-beta distance map')
plt.gca().set_aspect('equal')
plt.colorbar();
spark.stop()
| 0.704872 | 0.988403 |
<div class="alert alert-block alert-info" style="margin-top: 20px">
<a href="https://cocl.us/topNotebooksPython101Coursera">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/TopAd.png" width="750" align="center">
</a>
</div>
<a href="https://cognitiveclass.ai/">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png" width="200" align="center">
</a>
<h1>Reading Files Python</h1>
<p><strong>Welcome!</strong> This notebook will teach you about reading the text file in the Python Programming Language. By the end of this lab, you'll know how to read text files.</p>
<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
<li><a href="download">Download Data</a></li>
<li><a href="read">Reading Text Files</a></li>
<li><a href="better">A Better Way to Open a File</a></li>
</ul>
<p>
Estimated time needed: <strong>40 min</strong>
</p>
</div>
<hr>
<h2 id="download">Download Data</h2>
```
# Download Example file
!wget -O /resources/data/Example1.txt https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/labs/example1.txt
```
<hr>
<h2 id="read">Reading Text Files</h2>
One way to read or write a file in Python is to use the built-in <code>open</code> function. The <code>open</code> function provides a <b>File object</b> that contains the methods and attributes you need in order to read, save, and manipulate the file. In this notebook, we will only cover <b>.txt</b> files. The first parameter you need is the file path and the file name. An example is shown as follows:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Images/ReadOpen.png" width="500" />
The mode argument is optional and the default value is <b>r</b>. In this notebook we only cover two modes:
<ul>
<li><b>r</b> Read mode for reading files </li>
<li><b>w</b> Write mode for writing files</li>
</ul>
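The rest of this notebook uses <b>r</b>. As a small illustration of <b>w</b> (the file name below is just an example), write mode creates the file if it does not exist and overwrites it if it does:
```
# Example only: open a scratch file in write mode and write one line
with open("/resources/data/Example_write.txt", "w") as writefile:
    writefile.write("This line was written in write mode\n")
```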
For the next example, we will use the text file <b>Example1.txt</b>. The file is shown as follows:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Images/ReadFile.png" width="200" />
We read the file:
```
# Read the Example1.txt
example1 = "/resources/data/Example1.txt"
file1 = open(example1, "r")
```
We can view the attributes of the file.
The name of the file:
```
# Print the path of file
file1.name
```
The mode the file object is in:
```
# Print the mode of file, either 'r' or 'w'
file1.mode
```
We can read the file and assign it to a variable :
```
# Read the file
FileContent = file1.read()
FileContent
```
The <b>\n</b> means that there is a new line.
We can print the file:
```
# Print the file with '\n' as a new line
print(FileContent)
```
The file is of type string:
```
# Type of file content
type(FileContent)
```
We must close the file object:
```
# Close file after finish
file1.close()
```
<hr>
<h2 id="better">A Better Way to Open a File</h2>
Using the <code>with</code> statement is better practice because it automatically closes the file even if the code encounters an exception. The code will run everything in the indent block then close the file object.
```
# Open file using with
with open(example1, "r") as file1:
FileContent = file1.read()
print(FileContent)
```
The file object is closed, you can verify it by running the following cell:
```
# Verify if the file is closed
file1.closed
```
We can see the info in the file:
```
# See the content of file
print(FileContent)
```
The syntax is a little confusing as the file object is after the <code>as</code> statement. We also don't explicitly close the file. Therefore we summarize the steps in a figure:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Images/ReadWith.png" width="500" />
We don't have to read the entire file. For example, we can read the first 4 characters by entering four as a parameter to the method **.read()**:
```
# Read first four characters
with open(example1, "r") as file1:
print(file1.read(4))
```
Once the method <code>.read(4)</code> is called the first 4 characters are read. If we call the method again, the next 4 characters are read. The output for the following cell will demonstrate the process for different inputs to the method <code>read()</code>:
```
# Read certain amount of characters
with open(example1, "r") as file1:
print(file1.read(4))
print(file1.read(4))
print(file1.read(7))
print(file1.read(15))
```
The process is illustrated in the below figure, and each color represents the part of the file read after the method <code>read()</code> is called:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Images/ReadChar.png" width="500" />
Here is an example using the same file, but instead we read 16, 5, and then 9 characters at a time:
```
# Read certain amount of characters
with open(example1, "r") as file1:
print(file1.read(16))
print(file1.read(5))
print(file1.read(9))
```
We can also read one line of the file at a time using the method <code>readline()</code>:
```
# Read one line
with open(example1, "r") as file1:
print("first line: " + file1.readline())
```
We can use a loop to iterate through each line:
```
# Iterate through the lines
with open(example1,"r") as file1:
i = 0;
for line in file1:
print("Iteration", str(i), ": ", line)
i = i + 1;
```
We can use the method <code>readlines()</code> to save the text file to a list:
```
# Read all lines and save as a list
with open(example1, "r") as file1:
FileasList = file1.readlines()
```
Each element of the list corresponds to a line of text:
```
# Print the first line
FileasList[0]
# Print the second line
FileasList[1]
# Print the third line
FileasList[2]
```
<hr>
<h2>The last exercise!</h2>
<p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/" target="_blank">this article</a> to learn how to share your work.
<hr>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<h2>Get IBM Watson Studio free of charge!</h2>
<p><a href="https://cocl.us/bottemNotebooksPython101Coursera"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/BottomAd.png" width="750" align="center"></a></p>
</div>
<h3>About the Authors:</h3>
<p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p>
Other contributors: <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a>
<hr>
<p>Copyright © 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
|
github_jupyter
|
# Download Example file
!wget -O /resources/data/Example1.txt https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/labs/example1.txt
# Read the Example1.txt
example1 = "/resources/data/Example1.txt"
file1 = open(example1, "r")
# Print the path of file
file1.name
# Print the mode of file, either 'r' or 'w'
file1.mode
# Read the file
FileContent = file1.read()
FileContent
# Print the file with '\n' as a new line
print(FileContent)
# Type of file content
type(FileContent)
# Close file after finish
file1.close()
# Open file using with
with open(example1, "r") as file1:
FileContent = file1.read()
print(FileContent)
# Verify if the file is closed
file1.closed
# See the content of file
print(FileContent)
# Read first four characters
with open(example1, "r") as file1:
print(file1.read(4))
# Read certain amount of characters
with open(example1, "r") as file1:
print(file1.read(4))
print(file1.read(4))
print(file1.read(7))
print(file1.read(15))
# Read certain amount of characters
with open(example1, "r") as file1:
print(file1.read(16))
print(file1.read(5))
print(file1.read(9))
# Read one line
with open(example1, "r") as file1:
print("first line: " + file1.readline())
# Iterate through the lines
with open(example1,"r") as file1:
i = 0;
for line in file1:
print("Iteration", str(i), ": ", line)
i = i + 1;
# Read all lines and save as a list
with open(example1, "r") as file1:
FileasList = file1.readlines()
# Print the first line
FileasList[0]
# Print the second line
FileasList[1]
# Print the third line
FileasList[2]
| 0.431824 | 0.920074 |
# Preparing the dataset for hippocampus segmentation
In this notebook you will use the skills and methods that we have talked about during our EDA Lesson to prepare the hippocampus dataset using Python. Follow the notebook, writing snippets of code where directed, using Task comments similar to the one below, which expects you to put the proper imports in place. Write your code directly in the cell with the TASK comment. Feel free to add cells as you see fit, but please make sure that code that performs the tasked activity sits in the same cell as the Task comment.
```
# TASK: Import the following libraries that we will use: nibabel, matplotlib, numpy
import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np
import os
from PIL import Image
import glob
import shutil
```
It will help your understanding of the data a lot if you were able to use a tool that allows you to view NIFTI volumes, like [3D Slicer](https://www.slicer.org/). I will refer to Slicer throughout this Notebook and will be pasting some images showing what your output might look like.
## Loading NIFTI images using NiBabel
NiBabel is a python library for working with neuro-imaging formats (including NIFTI) that we have used in some of the exercises throughout the course. Our volumes and labels are in NIFTI format, so we will use nibabel to load and inspect them.
NiBabel documentation could be found here: https://nipy.org/nibabel/
Our dataset sits in two directories - *images* and *labels*. Each image is represented by a single file (we are fortunate to have our data converted to NIFTI) and has a corresponding label file which is named the same as the image file.
Note that our dataset is "dirty". There are a few images and labels that are not quite right. They should be quite obvious to notice, though. The dataset contains an equal amount of "correct" volumes and corresponding labels, and you don't need to alter values of any samples in order to get the clean dataset.
```
ls /data/TrainingSet/labels
ls /data/TrainingSet/images
# TASK: Your data sits in directory /data/TrainingSet.
# Load an image and a segmentation mask into variables called image and label
img = nib.load('/data/TrainingSet/images/hippocampus_001.nii.gz')
label = nib.load('/data/TrainingSet/labels/hippocampus_001.nii.gz')
# Nibabel can present your image data as a Numpy array by calling the method get_fdata()
# The array will contain a multi-dimensional Numpy array with numerical values representing voxel intensities.
# In our case, images and labels are 3-dimensional, so get_fdata will return a 3-dimensional array. You can verify this
# by accessing the .shape attribute. What are the dimensions of the input arrays?
img_np = img.get_fdata()
label_np = label.get_fdata()
print(f'img shape is {img_np.shape}')
print(f'label shape is {label_np.shape}')
# TASK: using matplotlib, visualize a few slices from the dataset, along with their labels.
# You can adjust plot sizes like so if you find them too small:
plt.rcParams["figure.figsize"] = (10,10)
fig1, n_ax1 = plt.subplots(1,5,figsize=(15,15))
n_ax1 = n_ax1.flatten()
for i in range(5):
n_ax1[i].imshow(img_np[:,:,(i*7)])
n_ax1[i].set_title(f'img Axial slice {i*7}')
fig2, n_ax2 = plt.subplots(1,5,figsize=(15,15))
n_ax2 = n_ax2.flatten()
for i in range(5):
n_ax2[i].imshow(label_np[:,:,(i*7)])
n_ax2[i].set_title(f'label Axial slice {i*7}')
fig3, n_ax3 = plt.subplots(1,5,figsize=(15,15))
n_ax3 = n_ax3.flatten()
for i in range(5):
n_ax3[i].imshow(img_np[:,i*10,:])
n_ax3[i].set_title(f'img Coronal slice {i*10}')
fig4, n_ax4 = plt.subplots(1,5,figsize=(15,15))
n_ax4 = n_ax4.flatten()
for i in range(5):
n_ax4[i].imshow(label_np[:,(i*10),:])
n_ax4[i].set_title(f'label Coronal slice {i*10}')
fig5, n_ax5 = plt.subplots(1,5,figsize=(15,15))
n_ax5 = n_ax5.flatten()
for i in range(5):
n_ax5[i].imshow(img_np[(i*7),:,:])
n_ax5[i].set_title(f'img Sagital slice {i*7}')
fig6, n_ax6 = plt.subplots(1,5,figsize=(15,15))
n_ax6 = n_ax6.flatten()
for i in range(5):
n_ax6[i].imshow(label_np[(i*7),:,:])
n_ax6[i].set_title(f'label Sagital slice {i*7}')
plt.imshow(label_np[14,:,:])
plt.imshow((img_np[:,:,14]+label_np[:,:,14]))
```
Load the volume into 3D Slicer to validate that your visualization is correct and get a feel for the shape of structures. Try to get a visualization like the one below (hint: while Slicer documentation is not particularly great, there are plenty of YouTube videos available! Just look it up on YouTube if you are not sure how to do something).

```
# Stand out suggestion: use one of the simple Volume Rendering algorithms that we've
# implemented in one of our earlier lessons to visualize some of these volumes
print(nib.load('/data/TrainingSet/labels/hippocampus_010.nii.gz').header)
```
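One simple option for the stand-out suggestion above is a maximum intensity projection (MIP), which collapses the volume along one axis by keeping only the brightest voxel; here it is applied to the volume already loaded into `img_np`:
```
# Maximum intensity projection of the loaded volume along the axial (z) axis.
plt.imshow(np.max(img_np, axis=2), cmap='gray')
plt.title('Maximum intensity projection (axial)')
plt.show()
```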
## Looking at single image data
In this section we will look closer at the NIFTI representation of our volumes. In order to measure the physical volume of hippocampi, we need to understand the relationship between the sizes of our voxels and the physical world.
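nibabel can also report the voxel spacing directly via `get_zooms()` (the raw `pixdim` header field is inspected further below as well):
```
# Voxel spacing along each axis, in the units declared by the header (millimeters here).
print(img.header.get_zooms())
```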
```
# Nibabel supports many imaging formats, NIFTI being just one of them. I told you that our images
# are in NIFTI, but you should confirm if this is indeed the format that we are dealing with
# TASK: using .header_class attribute - what is the format of our images?
print(f'Img format is {img.header_class}')
print(f'Label format is {label.header_class}')
```
Further down we will be inspecting .header attribute that provides access to NIFTI metadata. You can use this resource as a reference for various fields: https://brainder.org/2012/09/23/the-nifti-file-format/
```
# TASK: How many bits per pixel are used?
print(f'Img: {img.header} \n')
print(f'Label: {label.header}')
```
#### Bits per voxel (pixel) is 8.
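This can be checked programmatically from the header printed above:
```
# bitpix reports the number of bits per voxel for the image and the label volumes.
print(img.header['bitpix'], label.header['bitpix'])
```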
```
# TASK: What are the units of measurement?
'''
xyzt_units indicate the unit of measurements for dim.
From the Header, xyzt_units in binary is 10, translating to 2.
2 translates to NIFTI_UNITS_MM - millimeter.
'''
# TASK: Do we have a regular grid? What are grid spacings?
'''
pixdim is grid spacings.
pixdim = [1. 1. 1. 1. 1. 0. 0. 0.]
pixdim[1], pixdim[2], pixdim[3] = 1.,1.,1.
'''
# TASK: What dimensions represent axial, sagittal, and coronal slices? How do you know?
'''
sform_code = scanner
srow_x, srow_y, srow_z are given.
srow_x = [1. 0 0 1.]
srow_y = [0 1. 0 1.]
srow_z = [0 0 1. 1.]
From the NIFTI documentation, 3D IMAGE (VOLUME) ORIENTATION AND LOCATION IN SPACE section:
In the sform_code method, the (x,y,z) axes refer to a subject-based coordinate system,
with +x = Right +y = Anterior +z = Superior.
The srow_x, _y, _z vectors show that they translate to orthogonal i, j, k vectors.
Hence, the x dimension is sagittal (medial and lateral / left and right, since this is the right side of the brain)
y dimension is coronal (anterior and posterior)
z dimension is axial (superior and inferior)
'''
label_np[label_np > 0]
# By now you should have enough information to decide what the dimensions of a single voxel are
# TASK: Compute the volume (in mm³) of a hippocampus using one of the labels you've loaded.
# You should get a number between ~2200 and ~4500
'''
One voxel = pixdim[1] * pixdim[2] * pixdim[3] = 1.0 mm^3
'''
print(f'Volume of hippocampus label is {np.count_nonzero(label_np > 0)} mm^3')
'''
Understand min and max value in image and label numpy array
'''
img = nib.load('/data/TrainingSet/images/hippocampus_003.nii.gz')
label = nib.load('/data/TrainingSet/labels/hippocampus_003.nii.gz')
img_np=img.get_fdata().astype(np.single)
label_np=label.get_fdata().astype(np.single)
print(f'img {np.amin(img_np), np.amax(img_np)}')
print(f'label {np.amin(label_np), np.amax(label_np)}')
print(np.argwhere(img_np >= np.amax(img_np)))
plt.subplots(1,2,figsize= (15,30))
plt.subplot(1,2,1)
plt.imshow(img_np[np.argwhere(img_np >= np.amax(img_np))[0][0]-10,:,:])
plt.subplot(1,2,2)
plt.imshow(label_np[np.argwhere(img_np >= np.amax(img_np))[0][0]-10,:,:])
img_np.mean()+img_np.std()
img_np= img_np/0xff
img_np= img_np/np.amax(img_np)
plt.subplot(1,2,1)
plt.imshow(img_np[29-10,:,:])
plt.subplot(1,2,2)
plt.imshow(label_np[29-10,:,:])
np.amax(img_np)
plt.hist(img_np.ravel(),bins=100)
plt.show()
```
## Plotting some charts
```
ls /data
# TASK: Plot a histogram of all volumes that we have in our dataset and see how
# our dataset measures against a slice of a normal population represented by the chart below.
train_set_files = glob.glob('/data/TrainingSet/labels/*')
train_set_volumes = []
train_set = []
count = 0
for i in train_set_files:
img = nib.load(i)
img_np = img.get_fdata()
i_vol = np.count_nonzero(img_np>0)
i_dim = img.header['dim']
train_set_volumes.append(i_vol)
train_set.append([i_vol, i, i_dim])
count+=1
plt.hist(train_set_volumes, bins = 1000)
plt.xlim(1000,25000)
print(count)
nib.load('/data/TrainingSet/images/hippocampus_083.nii.gz').header['dim']-nib.load('/data/TrainingSet/labels/hippocampus_083.nii.gz').header['dim']
train_set
```
<img src="img/nomogram_fem_right.svg" width=400 align=left>
Do you see any outliers? Why do you think that is? It might not be immediately obvious, but it's always a good idea to inspect outliers closer. If you haven't found the images that do not belong, the histogram may help you.
```
train_set=np.array(train_set)
hi_outlier = []
lo_outlier = []
no_outlier = []
for s in train_set:
if (int(s[0]) > 4600):
hi_outlier.append(s)
elif (int(s[0]) < 2800):
lo_outlier.append(s)
else:
no_outlier.append(s)
#outlier=np.array(outlier)
print(f'Number of hippocampus label volumes greater than 4600: {len(hi_outlier)}')
print(f'Number of hippocampus label volumes less than 2800: {len(lo_outlier)}')
print(f'Number of hippocampus label volumes between 2800 and 4600: {len(no_outlier)}')
hi_outlier
hi_outlier_label = nib.load(hi_outlier[0][1])
print(hi_outlier_label.header)
print(hi_outlier_label.header.get_data_shape())
print(hi_outlier_label.header['pixdim'])
print(hi_outlier_label.header['dim'])
print(hi_outlier_label.header.get_sform())
plt.imshow(nib.load('/data/TrainingSet/images/hippocampus_281.nii.gz').get_fdata()[:,260,:],aspect = 94/(512*2))
```
#### The high-volume label outlier has different dim, pixdim, and sform vectors.
#### hippocampus_281.nii.gz is not a hippocampus region.
## Labels with volume between 2800 mm^3 and 4600 mm^3:
```
no_outlier_shape = {}
no_outlier_pixdim = {}
no_outlier_dim = {}
no_outlier_sform = {}
no_outlier_bitpix = {}
count = 0
for label in no_outlier:
count+=1
fp = label[1]
keyshape = nib.load(fp).header.get_data_shape()
no_outlier_shape.setdefault(keyshape,[])
no_outlier_shape[keyshape].append(fp)
keypixdim = str(nib.load(fp).header['pixdim'])
no_outlier_pixdim.setdefault(keypixdim,[])
no_outlier_pixdim[keypixdim].append(fp)
keydim = tuple(nib.load(fp).header['dim'])
no_outlier_dim.setdefault(keydim,[])
no_outlier_dim[keydim].append(fp)
keysf = str(nib.load(fp).header.get_sform())
no_outlier_sform.setdefault(keysf,[])
no_outlier_sform[keysf].append(fp)
keybp = str(nib.load(fp).header['bitpix'])
no_outlier_bitpix.setdefault(keybp,[])
no_outlier_bitpix[keybp].append(fp)
print(count)
no_outlier_pixdim.keys()
dim_keys=[i for i in no_outlier_dim.keys()]
dim_keys = np.array(dim_keys)
dim_keys[:, 1]
shape_keys = [(i) for i in no_outlier_shape.keys()]
shape_keys = np.array(shape_keys)
plt.hist(shape_keys[:,0])
plt.hist(shape_keys[:,1])
shape_keys[shape_keys[:,1]<42.5]
plt.hist(shape_keys[:,2])
shape_keys[shape_keys[:,2]<25]
no_outlier_shape[(34,53,24)]
plt.imshow(nib.load('/data/TrainingSet/images/hippocampus_243.nii.gz').get_fdata()[15,:,:])
plt.imshow(nib.load('/data/TrainingSet/labels/hippocampus_243.nii.gz').get_fdata()[15,:,:])
no_outlier_sform
no_outlier_bitpix.keys()
no_outlier_bitpix['32']
print(nib.load(no_outlier_bitpix['32'][0]).header)
plt.imshow(nib.load(no_outlier_bitpix['32'][0]).get_fdata()[14,:,:])
plt.imshow(nib.load('/data/TrainingSet/images/hippocampus_003.nii.gz').get_fdata()[14,:,:])
plt.imshow(nib.load('/data/TrainingSet/images/hippocampus_003.nii.gz').get_fdata()[:,:,14])
print(nib.load(no_outlier_bitpix['32'][1]).header)
plt.imshow(nib.load('/data/TrainingSet/images/hippocampus_243.nii.gz').get_fdata()[16,:,:])
```
#### Two NIFTI files have bitpix of 32, while the rest of the label dataset within the acceptable hippocampus volume range has bitpix of 8. We may need to rescale these two files or remove them from the dataset before training.
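One possible (hedged) fix is to cast the 32-bit labels back to 8-bit after confirming they only contain the expected class values; a sketch using the paths collected in `no_outlier_bitpix['32']` (the output path below is only a placeholder):
```
# Hedged sketch: cast a 32-bit label volume to uint8 once its values are verified
lbl = nib.load(no_outlier_bitpix['32'][0])
data = lbl.get_fdata()
print(np.unique(data))                                 # expect a subset of {0, 1, 2}
fixed = nib.Nifti1Image(data.astype(np.uint8), lbl.affine, lbl.header)
fixed.header.set_data_dtype(np.uint8)                  # keep bitpix consistent with the data
# nib.save(fixed, 'out/labels/rescaled_label.nii.gz')  # placeholder path, not run here
```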
## Labels with hippocampus volume less than 2800 mm^3, which is below the 2.5th percentile for any age in the range 52-71
```
lo_outlier
lo_outlier_shape = {}
lo_outlier_pixdim = {}
lo_outlier_sform = {}
lo_outlier_bitpix = {}
for label in lo_outlier:
fp = label[1]
keyshape = nib.load(fp).header.get_data_shape()
lo_outlier_shape.setdefault(keyshape,[])
lo_outlier_shape[keyshape].append(fp)
keypixdim = str(nib.load(fp).header['pixdim'])
lo_outlier_pixdim.setdefault(keypixdim,[])
lo_outlier_pixdim[keypixdim].append(fp)
keysf = str(nib.load(fp).header.get_sform())
lo_outlier_sform.setdefault(keysf,[])
lo_outlier_sform[keysf].append(fp)
keybp = str(nib.load(fp).header['bitpix'])
lo_outlier_bitpix.setdefault(keybp,[])
lo_outlier_bitpix[keybp].append(fp)
lo_shape_keys = [(i) for i in lo_outlier_shape.keys()]
lo_shape_keys = np.array(lo_shape_keys)
plt.hist(lo_shape_keys[:,1])
len(lo_outlier_shape.keys())
lo_outlier_sform
plt.subplots(1,2)
plt.subplot(1,2,1)
plt.imshow(nib.load('/data/TrainingSet/images/hippocampus_279.nii.gz').get_fdata()[15,:,:])
plt.subplot(1,2,2)
plt.imshow(nib.load('/data/TrainingSet/labels/hippocampus_279.nii.gz').get_fdata()[15,:,:])
lo_outlier_bitpix
```
All data has bitpix of 8.
```
plt.figure(figsize=(15,15))
plt.xlim(0,5000)
plt.hist(train_set_volumes, bins = 1000)
plt.axvline(2200,color='r')
plt.show()
```
### Did not identify one discernible outlier in the low hippocampus volume set.
### Check that all remaining labels and images have the same dimensions
```
no_outlier2 = np.concatenate((no_outlier,lo_outlier))
difference = []
no_outlier2[0][1]
for i in no_outlier2:
label_p = '/data/TrainingSet/labels/'
images_p = '/data/TrainingSet/images/'
fN = i[1].split('/')[4]
delta = nib.load(label_p+fN).header['dim'] - nib.load(images_p + fN).header['dim']
difference.append([fN, delta])
difference
```
Found a second outlier!
The NIFTI file 'hippocampus_010.nii.gz' has a mismatch between the dimensions of its mask and its image.
```
plt.subplots(1,2)
plt.subplot(1,2,1)
plt.imshow(nib.load('/data/TrainingSet/images/hippocampus_010.nii.gz').get_fdata()[16,:,:])
plt.title('Image')
plt.subplot(1,2,2)
plt.title('Label')
plt.imshow(nib.load('/data/TrainingSet/labels/hippocampus_010.nii.gz').get_fdata()[16,:,:])
```
The image looks odd and doesn't match the label. The label may represent a hippocampus, but it has no corresponding image to train against. Hence the file 'hippocampus_010.nii.gz' must be dropped.
```
no_outlier2 = no_outlier2[no_outlier2[:,1]!='/data/TrainingSet/labels/hippocampus_010.nii.gz']
no_outlier2.shape
no_outlier2[:,1]=='/data/TrainingSet/labels/hippocampus_010.nii.gz'
```
## Identified outlier files: hippocampus_010.nii.gz and hippocampus_281.nii.gz
In the real world we would have precise information about the ages and conditions of our patients, and understanding how our dataset measures against the population norm would be an integral part of the clinical validation that we talked about in the last lesson. Unfortunately, we do not have this information about this dataset, so we can only guess why it measures the way it does. If you would like to explore further, you can use the [calculator from the HippoFit project](http://www.smanohar.com/biobank/calculator.html) to see how our dataset compares against different population slices.
Did you notice anything odd about the label files? We hope you did! The mask seems to have two classes, labeled with values `1` and `2` respectively. If you visualized sagittal or axial views, you might have gotten a good guess of what those are. Class 1 is the anterior segment of the hippocampus and class 2 is the posterior one.
For the purpose of volume calculation we do not care about the distinction; however, we will still train our network to differentiate between these two classes and the background.
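A quick way to see the two classes (and their separate volumes, since voxels here are 1 mm³) is to count voxels per label value; a small sketch using the `label_np` array loaded above:
```
# Hedged sketch: voxel counts per class in one label volume
values, counts = np.unique(label_np, return_counts=True)
for v, c in zip(values, counts):
    print(f'class {int(v)}: {c} voxels')   # class 1 = anterior, class 2 = posterior
```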
```
# TASK: Copy the clean dataset to the output folder inside section1/out. You will use it in the next Section
count=0
for f in no_outlier2:
print(f[1])
fn = f[1].split('/')[4]
shutil.copy(f[1], f'out/labels/{fn}')
shutil.copy(f'/data/TrainingSet/images/{fn}', f'out/images/{fn}')
```
## Final remarks
Congratulations! You have finished Section 1.
In this section you have inspected a dataset of MRI scans and related segmentations, represented as NIFTI files. We have visualized some slices and understood the layout of the data. We have inspected file headers to understand how the image dimensions relate to the physical world, and we have understood how to measure our volume. We have then inspected the dataset for outliers and created a clean set that is ready for consumption by our ML algorithm.
In the next section you will create training and testing pipelines for a UNet-based machine learning model, run and monitor the execution, and will produce test metrics. This will arm you with all you need to use the model in the clinical context and reason about its performance!
# Stats in Paper
```
# imports
import ipdb, os, re
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from collections import defaultdict
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, classification_report
```
## IBM
```
IBM_train_csv = "~/Research/task-dataset-metric-nli-extraction/data/ibm/exp/few-shot-setup/NLP-TDMS/paperVersion/train.tsv"
IBM_test_csv = "~/Research/task-dataset-metric-nli-extraction/data/ibm/exp/few-shot-setup/NLP-TDMS/paperVersion/test.tsv"
train_IBM = pd.read_csv(IBM_train_csv,
sep="\t", names=["label", "title", "TDM", "Context"])
# train_IBM['label'] = train_IBM.label.apply(lambda x: "true" if x else "false")
test_IBM = pd.read_csv(IBM_test_csv,
sep="\t", names=["label", "title", "TDM", "Context"])
def get_stats(path_to_df):
unique_labels = path_to_df[(path_to_df.label == True)].TDM.tolist()
TDM = set()
Uniq_task = set()
Uniq_dataset = set()
Uniq_metric = set()
unknown_count = 0
avg_tdm_per_paper = defaultdict(lambda : 0)
for contrib in unique_labels:
split = contrib.split(';')
if(len(split) == 1):
unknown_count += 1
else:
if len(split) !=3:
# ipdb.set_trace()
task, dataset, metric, _ = split
else:
task, dataset, metric = split
t, d, m = task.strip(), dataset.strip(), metric.strip()
TDM.add(f"{t}#{d}#{m}")
Uniq_task.add(t)
Uniq_dataset.add(d)
Uniq_metric.add(m)
for paper in path_to_df[(path_to_df.label == True) & (path_to_df.TDM != 'unknown') ].title.tolist():
avg_tdm_per_paper[paper] += 1
print(f"Number of papers: {len(set(path_to_df[(path_to_df.label == True)].title.tolist()))}")
print(f"Unknown count: {unknown_count}")
print(f"Total leaderboards: {len(path_to_df[(path_to_df.label == True) & (path_to_df.TDM != 'unknown')].title.tolist())}")
print(f"Avg leaderboard per paper: {round(np.mean(list(avg_tdm_per_paper.values())), 2)}")
print(f"Distinc leaderboard: {len(TDM)}")
print(f"Distinct taks: {len(Uniq_task)}")
print(f"Distinc datasets: {len(Uniq_dataset)}")
print(f"Distinc metrics: {len(Uniq_metric)}")
print(f"Max leaderboard per paper: {round(np.max(list(avg_tdm_per_paper.values())), 2)}")
print(f"Min leaderboard per paper: {round(np.min(list(avg_tdm_per_paper.values())), 2)}")
return avg_tdm_per_paper
```
### Train
```
train_IBM.head()
train_IBM.title.nunique()
avg_tdm_per_paper = get_stats(train_IBM)
```
### Test
```
test_IBM.head()
metric = get_stats(test_IBM)
# Make sure that all leaderboards in test are present in train
count = []
for paper in test_IBM.TDM.to_list():
if paper not in train_IBM.TDM.to_list():
count.append(paper)
print(count)
count = []
for paper in train_IBM.TDM.to_list():
if paper not in test_IBM.TDM.to_list():
print(paper)
count.append(paper)
print(count)
```
### Our dataset
```
# New_train_csv = "~/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_800/twofoldwithunk/fold1/train.tsv"
# New_test_csv = "~/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_800/twofoldwithunk/fold1/dev.tsv"
New_train_csv = "/nfs/home/kabenamualus/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_10000/10Neg10000unk/twofoldwithunk/fold2/train.tsv"
New_test_csv = "/nfs/home/kabenamualus/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_10000/10Neg10000unk/twofoldwithunk/fold2/dev.tsv"
# New_train_csv = "/nfs/home/kabenamualus/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_5000/10Neg5000unk/twofoldwithunk/fold2/train.tsv"
# New_test_csv = "/nfs/home/kabenamualus/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_5000/10Neg5000unk/twofoldwithunk/fold2/dev.tsv"
# New_train_csv = "/nfs/home/kabenamualus/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_1000/10Neg1000unk/twofoldwithunk/fold2/train.tsv"
# New_test_csv = "/nfs/home/kabenamualus/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_1000/10Neg1000unk/twofoldwithunk/fold2/dev.tsv"
# New_train_csv = "/nfs/home/kabenamualus/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_500/10Neg500unk/twofoldwithunk/fold1/train.tsv"
# New_test_csv = "/nfs/home/kabenamualus/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_500/10Neg500unk/twofoldwithunk/fold1/dev.tsv"
# New_train_csv = IBM_train_csv
# New_test_csv = IBM_test_csv
train_New = pd.read_csv(New_train_csv,
sep="\t", names=["label", "title", "TDM", "Context"])
# train_IBM['label'] = train_IBM.label.apply(lambda x: "true" if x else "false")
test_New = pd.read_csv(New_test_csv,
sep="\t", names=["label", "title", "TDM", "Context"])
len(train_New.drop_duplicates())
len(train_New)
len(train_New[(train_New.TDM=="unknown") & (train_New.label==False)])
train_New[ (train_New.label==True)]
train_New[(train_New.TDM=="unknown") & (train_New.label==True)]
train_New[(train_New.title=="1602.01595v4.pdf") & (train_New.label==True)]
train_New[(train_New.title=="2009.04534v2.pdf") & (train_New.label==True)]
train_New[(train_New.title=="2009.04534v2.pdf") & (train_New.label==True)]
train_New[(train_New.title=="2011.14859v2.pdf")]
avg_tdm_per_paper = get_stats(train_New)
((677+323)+(697+303))/2
((2931+1255)+(2935+1251))/2
1205+2981
print("Train")
print("======")
print(f"Avg Unknown count: {round((2934 + 3028)/2)}")
print(f"Avg Total leaderboards: {round((11690 + 11757)/2)}")
print(f"Avg leaderboard per paper: {round((4.13 + 4.15)/2, 1)}")
print(f"Avg Distinc leaderboard: {round((1791 + 1820)/2)}")
print(f"Avg Distinct taks: {round((286 + 291)/2)}")
print(f"Avg Distinc datasets: {round((905 + 912)/2)}")
print(f"Avg Distinc metrics: {round((547 + 553)/2)}")
avg_tdm_per_paper = get_stats(test_New)
print("Test")
print("======")
print(f"Avg Unknown count: {round((1252 + 1158)/2)}")
print(f"Avg Total leaderboards: {round((5094 + 5027)/2)}")
print(f"Avg leaderboard per paper: {round((4.14 + 4.1)/2, 1)}")
print(f"Avg Distinc leaderboard: {round((1556 + 1541)/2)}")
print(f"Avg Distinct taks: {round((254 + 250)/2)}")
print(f"Avg Distinc datasets: {round((806 + 790)/2)}")
print(f"Avg Distinc metrics: {round((472 + 466)/2)}")
count = []
for tdm in test_New.TDM.to_list():
if tdm not in train_New.TDM.to_list():
count.append(tdm)
print(count)
count = []
for tdm in train_New.TDM.to_list():
if tdm not in test_New.TDM.to_list():
count.append(tdm)
print(len(count))
# New_train_csv = "~/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_800/twofoldwithunk/fold1/train.tsv"
# New_test_csv = "~/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_800/twofoldwithunk/fold1/dev.tsv"
New_train_csv = "/nfs/home/kabenamualus/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_10000/10Neg10000unk/twofoldwithunk/fold2/train.tsv"
New_test_csv = "/nfs/home/kabenamualus/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_10000/10Neg10000unk/twofoldwithunk/fold2/dev.tsv"
train_New = pd.read_csv(New_train_csv,
sep="\t", names=["label", "title", "TDM", "Context"])
# train_IBM['label'] = train_IBM.label.apply(lambda x: "true" if x else "false")
test_New = pd.read_csv(New_test_csv,
sep="\t", names=["label", "title", "TDM", "Context"])
avg_tdm_per_paper = get_stats(train_New)
avg_tdm_per_paper = get_stats(test_New)
```
## Remove unknown labels for papers with leaderboards
```
New_train_csv = "/nfs/home/kabenamualus/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_10000/10Neg10000unk/twofoldwithunk/fold1/train.tsv"
New_test_csv = "/nfs/home/kabenamualus/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_10000/10Neg10000unk/twofoldwithunk/fold1/dev.tsv"
# New_train_csv = IBM_train_csv
# New_test_csv = IBM_test_csv
train_New = pd.read_csv(New_train_csv,
sep="\t", names=["label", "title", "TDM", "Context"])
# train_IBM['label'] = train_IBM.label.apply(lambda x: "true" if x else "false")
test_New = pd.read_csv(New_test_csv,
sep="\t", names=["label", "title", "TDM", "Context"])
len(train_New)
train_New.drop_duplicates(inplace=True)
len(train_New)
train_New[(train_New.label==True) & (train_New.TDM=="unknown")]
train_New[(train_New.label==True) & (train_New.title=="1903.12290v2.pdf")]
train_New[(train_New.label==True) & (train_New.title=="1806.05228v2.pdf")]
train_New[(train_New.label==True) & (train_New.title=="1903.12290v2.pdf") & (train_New.TDM=="unknown")].index
papers = set(train_New.title.to_list())
for paper in papers:
if len(train_New[(train_New.label==True) & (train_New.title==paper)]) != 1:
train_New.drop(train_New[(train_New.label==True) & (train_New.title==paper) & (train_New.TDM=="unknown")].index, inplace=True)
len(train_New)
train_New[(train_New.label==True) & (train_New.title=="1903.12290v2.pdf")]
train_New[(train_New.label==True) & (train_New.TDM=="unknown")]
train_New[(train_New.label==True) & (train_New.title=="1411.1091v1.pdf")]
output = "/nfs/home/kabenamualus/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_10000_correct/10Neg10000unk/twofoldwithunk/fold1/"
if not os.path.exists(output):
os.makedirs(output)
train_New.to_csv(f"{output}train.tsv",
header=False, index=False, sep="\t")
```
## Test
```
len(test_New)
test_New.drop_duplicates(inplace=True)
len(test_New)
papers = set(test_New.title.to_list())
for paper in papers:
if len(test_New[(test_New.label==True) & (test_New.title==paper)]) != 1:
test_New.drop(test_New[(test_New.label==True) & (test_New.title==paper) & (test_New.TDM=="unknown")].index, inplace=True)
len(test_New)
test_New.to_csv(f"{output}dev.tsv",
header=False, index=False, sep="\t")
```
# F2
```
New_train_csv = "/nfs/home/kabenamualus/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_10000/10Neg10000unk/twofoldwithunk/fold2/train.tsv"
New_test_csv = "/nfs/home/kabenamualus/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_10000/10Neg10000unk/twofoldwithunk/fold2/dev.tsv"
# New_train_csv = IBM_train_csv
# New_test_csv = IBM_test_csv
train_New = pd.read_csv(New_train_csv,
sep="\t", names=["label", "title", "TDM", "Context"])
# train_IBM['label'] = train_IBM.label.apply(lambda x: "true" if x else "false")
test_New = pd.read_csv(New_test_csv,
sep="\t", names=["label", "title", "TDM", "Context"])
len(train_New)
train_New.drop_duplicates(inplace=True)
len(train_New)
train_New[(train_New.label==True) & (train_New.TDM=="unknown")]
train_New[(train_New.label==True) & (train_New.title=="2009.04534v2.pdf")]
train_New[(train_New.label==True) & (train_New.title=="1411.1091v1.pdf")]
papers = set(train_New.title.to_list())
for paper in papers:
if len(train_New[(train_New.label==True) & (train_New.title==paper)]) != 1:
train_New.drop(train_New[(train_New.label==True) & (train_New.title==paper) & (train_New.TDM=="unknown")].index, inplace=True)
len(train_New)
train_New[(train_New.label==True) & (train_New.title=="2009.04534v2.pdf")]
train_New[(train_New.label==True) & (train_New.TDM=="unknown")]
train_New[(train_New.label==True) & (train_New.title=="1908.02262v1.pdf")]
output = "/nfs/home/kabenamualus/Research/task-dataset-metric-nli-extraction/data/pwc_ibm_150_5_10_10000_correct/10Neg10000unk/twofoldwithunk/fold2/"
if not os.path.exists(output):
os.makedirs(output)
train_New.to_csv(f"{output}train.tsv",
header=False, index=False, sep="\t")
```
## Test
```
len(test_New)
test_New.drop_duplicates(inplace=True)
len(test_New)
papers = set(test_New.title.to_list())
for paper in papers:
if len(test_New[(test_New.label==True) & (test_New.title==paper)]) != 1:
test_New.drop(test_New[(test_New.label==True) & (test_New.title==paper) & (test_New.TDM=="unknown")].index, inplace=True)
len(test_New)
test_New.to_csv(f"{output}dev.tsv",
header=False, index=False, sep="\t")
```
# A QMCPy Quick Start
In this tutorial, we introduce QMCPy [1] by an example. QMCPy can be installed with **pip install qmcpy** or cloned from the [QMCSoftware GitHub repository](https://github.com/QMCSoftware/QMCSoftware).
Consider the problem of integrating the Keister function [2] with respect to a $d$-dimensional Gaussian measure:
$$f(\boldsymbol{x}) = \pi^{d/2} \cos(||\boldsymbol{x}||), \qquad \boldsymbol{x} \in \mathbb{R}^d, \qquad \boldsymbol{X} \sim \mathcal{N}(\boldsymbol{0}_d,\mathsf{I}_d/2),
\\ \mu = \mathbb{E}[f(\boldsymbol{X})] := \int_{\mathbb{R}^d} f(\boldsymbol{x}) \, \pi^{-d/2} \exp( - ||\boldsymbol{x}||^2) \, \rm d \boldsymbol{x}
\\ = \int_{[0,1]^d} \pi^{d/2} \cos\left(\sqrt{ \frac 12 \sum_{j=1}^d\left(\Phi^{-1}(x_j)\right)^2}\right) \, \rm d \boldsymbol{x},$$ where $||\boldsymbol{x}||$ is the Euclidean norm, $\mathsf{I}_d$ is the $d$-dimensional identity matrix, and
$\Phi$ denotes the standard normal cumulative distribution function. When $d=2$, $\mu \approx 1.80819$ and we can visualize the Keister function and realizations of the sampling points depending on the tolerance values, $\varepsilon$, in the following figure:

The Keister function is implemented below with help from NumPy [3] in the following code snippet:
```
import numpy as np
def keister(x):
"""
x: nxd numpy ndarray
n samples
d dimensions
returns n-vector of the Keister function
evaluated at the n input samples
"""
d = x.shape[1]
norm_x = np.sqrt((x**2).sum(1))
k = np.pi**(d/2) * np.cos(norm_x)
return k # size n vector
```
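As a quick sanity check of the $d=2$ reference value $\mu \approx 1.80819$, we can estimate the expectation with plain (IID) Monte Carlo using only NumPy; this is just a rough check and not part of the QMCPy workflow below:
```
# Hedged sketch: crude IID Monte Carlo estimate of the Keister integral for d = 2
rng = np.random.default_rng(7)
x = rng.standard_normal((10**6, 2)) * np.sqrt(0.5)   # draws from N(0, I_2 / 2)
print(keister(x).mean())                             # should land near 1.80819
```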
In addition to our Keister integrand and Gaussian true measure, we must select a discrete distribution and a stopping criterion [4]. The stopping criterion determines the number of points at which to evaluate the integrand in order for the mean approximation to be accurate within a user-specified error tolerance, $\varepsilon$. The discrete distribution determines the sites at which the integrand is evaluated.
For this Keister example, we select the lattice sequence as the discrete distribution and corresponding cubature-based stopping criterion [5]. The discrete distribution, true measure, integrand, and stopping criterion are then constructed within the QMCPy framework below.
```
import qmcpy
d = 2
discrete_distrib = qmcpy.Lattice(dimension = d)
true_measure = qmcpy.Gaussian(discrete_distrib, mean = 0, covariance = 1/2)
integrand = qmcpy.CustomFun(true_measure,keister)
stopping_criterion = qmcpy.CubQMCLatticeG(integrand = integrand, abs_tol = 1e-3)
```
Calling *integrate* on the *stopping_criterion* instance returns the numerical solution and a data object. Printing the data object will provide a neat summary of the integration problem. For details of the output fields, refer to the online, searchable QMCPy Documentation at [https://qmcpy.readthedocs.io/](https://qmcpy.readthedocs.io/en/latest/algorithms.html#module-qmcpy.integrand.keister).
```
solution, data = stopping_criterion.integrate()
print(data)
```
This guide is not meant to be exhaustive but rather a quick introduction to the QMCPy framework and syntax. In an upcoming blog, we will take a closer look at low-discrepancy sequences such as the lattice sequence from the above example.
## References
1. Choi, S.-C. T., Hickernell, F., McCourt, M., Rathinavel J., & Sorokin, A. QMCPy: A quasi-Monte Carlo Python Library. https://qmcsoftware.github.io/QMCSoftware/. 2020.
2. Keister, B. D. Multidimensional Quadrature Algorithms. Computers in Physics 10, 119–122 (1996).
3. Oliphant, T., Guide to NumPy https://ecs.wgtn.ac.nz/foswiki/pub/Support/ManualPagesAndDocumentation/numpybook.pdf (Trelgol Publishing USA, 2006).
4. Hickernell, F., Choi, S.-C. T., Jiang, L. & Jimenez Rugama, L. A. in Wiley StatsRef: Statistics Reference Online (eds Davidian, M. et al.) (John Wiley & Sons Ltd., 2018).
5. Jimenez Rugama, L. A. & Hickernell, F. Adaptive Multidimensional Integration Based on Rank-1 Lattices in Monte Carlo and Quasi-Monte Carlo Methods: MCQMC, Leuven, Belgium, April 2014 (eds Cools, R. & Nuyens, D.) 163. arXiv:1411.1966 (Springer-Verlag, Berlin, 2016), 407–422.
# Sampling from Databases
This notebook will create samples from the two databases built from scraping.
```
import sqlite3
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
**Note: This same dataset was used for another project that incorporated weather data, but this was not used in DMW.**
```
w_df = pd.read_csv('chicago-weather.csv', parse_dates=['StartTime(UTC)',
'EndTime(UTC)'])
w_df.head()
```
## TNP Dataset
Since the TNP Dataset was scraped monthly into individual databases, we'll iterate over each database and randomly sample 2% from each month.
```
tnp_dir = '/mnt/processed/private/msds2021/lt6/chicago-dataset/tnp/'
datasets = ['tnp_201901.db',
'tnp_201902.db',
'tnp_201903.db',
'tnp_201904.db',
'tnp_201905.db',
'tnp_201906.db',
'tnp_201907.db',
'tnp_201908.db',
'tnp_201909.db',
'tnp_201910.db',
'tnp_201911.db',
'tnp_201912.db']
cols = ['trip_id', 'trip_start_timestamp', 'trip_end_timestamp', 'trip_seconds',
'trip_miles', 'pickup_community_area', 'dropoff_community_area', 'fare', 'tip',
'additional_charges', 'trip_total', 'shared_trip_authorized',
'trips_pooled', 'pickup_centroid_latitude', 'pickup_centroid_longitude',
'dropoff_centroid_latitude', 'dropoff_centroid_longitude']
for d in datasets:
with sqlite3.connect(tnp_dir + d) as conn:
print('START:', d)
sql = f"""SELECT * FROM trips WHERE trip_id IN
(SELECT trip_id FROM trips ORDER BY RANDOM()
LIMIT (SELECT ROUND(COUNT(*) * 0.02) FROM trips))"""
df = pd.read_sql(sql, conn,
parse_dates=['trip_start_timestamp',
'trip_end_timestamp']).loc[:, cols]
# Build the Weather Mapper
wmap = pd.DataFrame({'ts':
df.trip_start_timestamp.unique()})
wmap['T/S'] = wmap.ts.apply(lambda x:
w_df.loc[((w_df['StartTime(UTC)'] <= x) &
(w_df['EndTime(UTC)'] >= x)),
['Type', 'Severity']].iloc[0, :]
.to_numpy()
if not w_df.loc[((w_df['StartTime(UTC)']
<= x) &
(w_df['EndTime(UTC)']
>= x)),
['Type', 'Severity']]
.empty
else [None, None])
wmap['Type'] = wmap['T/S'].apply(lambda x: x[0])
wmap['Severity'] = wmap['T/S'].apply(lambda x: x[1])
wmap.drop(columns=['T/S'], inplace=True)
wmap.set_index('ts', inplace=True)
# Merge Weather and TNP data
df = pd.merge(df, wmap,
left_on='trip_start_timestamp',
right_index=True)
# Write to CSV
df.to_csv('tnp_sample.csv.gz', mode='a')
print('COMPLETE:', d)
```
## Taxi Dataset
For the taxi dataset, we'll also randomly sample 2% from the whole database.
```
taxi_db = '/mnt/processed/private/msds2021/lt6/chicago-dataset/taxi/taxi.db'
with sqlite3.connect(taxi_db) as conn:
print('START: taxi.db')
sql = f"""SELECT * FROM taxi WHERE trip_id IN
(SELECT trip_id FROM taxi ORDER BY RANDOM()
LIMIT (SELECT ROUND(COUNT(*) * 0.02) FROM taxi))"""
df = pd.read_sql(sql, conn,
parse_dates=['trip_start_timestamp',
'trip_end_timestamp'])
# Build the Weather Mapper
wmap = pd.DataFrame({'ts':
df.trip_start_timestamp.unique()})
wmap['T/S'] = wmap.ts.apply(lambda x:
w_df.loc[((w_df['StartTime(UTC)'] <= x) &
(w_df['EndTime(UTC)'] >= x)),
['Type', 'Severity']].iloc[0, :]
.to_numpy()
if not w_df.loc[((w_df['StartTime(UTC)']
<= x) &
(w_df['EndTime(UTC)']
>= x)),
['Type', 'Severity']]
.empty
else [None, None])
wmap['Type'] = wmap['T/S'].apply(lambda x: x[0])
wmap['Severity'] = wmap['T/S'].apply(lambda x: x[1])
wmap.drop(columns=['T/S'], inplace=True)
wmap.set_index('ts', inplace=True)
# Merge Weather and TNP data
df = pd.merge(df, wmap,
left_on='trip_start_timestamp',
right_index=True)
# Write to CSV
df.to_csv('taxi_sample.csv.gz', mode='a')
print('COMPLETE: taxi.db')
```
## Cluster using KMeans
Not technically a graph method, but included here for evaluation purposes.
Each document is represented by its row in the appropriate generation probabilities matrix, as a sparse vector of generation probabilities q(d<sub>i</sub>|d<sub>j</sub>).
We use `MiniBatchKMeans` from the scikit-learn library to generate clusters, setting k=20 (since this is the 20 newsgroups dataset). `MiniBatchKMeans` is preferred because of the size of our dataset (`KMeans` is generally safe to use for dataset sizes < 10k, but `MiniBatchKMeans` is recommended for larger datasets).
**NOTE:** We will run this notebook multiple times for different values of `NUM_HOPS` (and once more for generating baseline K-Means clusters for the original TD Matrix).
```
import numpy as np
import os
import pandas as pd
from scipy.sparse import load_npz
from sklearn.cluster import MiniBatchKMeans
```
### Set NUM_HOPS parameter
We will run this notebook multiple times for different values of the `NUM_HOPS` parameter below.
```
NUM_HOPS = 1
```
### Constants
```
NUM_CLUSTERS = 20 # dataset is 20 newsgroups
DATA_DIR = "../data"
LABEL_FILEPATH = os.path.join(DATA_DIR, "labels.tsv")
PREDS_FILEPATH_TEMPLATE = os.path.join(DATA_DIR, "kmeans-preds-g{:d}.tsv")
GENPROBS_FILEPATH_TEMPLATE = os.path.join(DATA_DIR, "genprobs_{:d}.npy")
# # reusing for predictions for TD Matrix
# PREDS_FILEPATH_TEMPLATE = os.path.join(DATA_DIR, "kmeans-preds-td.tsv")
# GENPROBS_FILEPATH_TEMPLATE = os.path.join(DATA_DIR, "tdmatrix.npz")
```
### Generate doc_id mappings
Generate mappings from row IDs in the generation probability matrix to the corresponding `doc_id` values and labels.
```
row2docid_labels = {}
flabels = open(LABEL_FILEPATH, "r")
num_nodes = 0
for line in flabels:
doc_id, label = line.strip().split('\t')
row2docid_labels[num_nodes] = (doc_id, label)
num_nodes += 1
flabels.close()
```
### Load Data
```
X = np.load(GENPROBS_FILEPATH_TEMPLATE.format(NUM_HOPS))
# # reusing for predictions for TD Matrix
# X = load_npz(GENPROBS_FILEPATH_TEMPLATE)
```
### KMeans Clustering
```
kmeans = MiniBatchKMeans(n_clusters=NUM_CLUSTERS, random_state=42)
kmeans.fit(X)
preds = kmeans.predict(X)
```
### Write out predictions
```
num_predicted = 0
fpreds = open(PREDS_FILEPATH_TEMPLATE.format(NUM_HOPS), "w")
for row_id, pred in enumerate(preds):
if num_predicted % 1000 == 0:
print("{:d} rows predicted".format(num_predicted))
doc_id, label = row2docid_labels[row_id]
fpreds.write("{:s}\t{:s}\t{:d}\n".format(doc_id, label, pred))
num_predicted += 1
print("{:d} rows predicted, COMPLETE".format(num_predicted))
fpreds.close()
pred_df = pd.read_csv(PREDS_FILEPATH_TEMPLATE.format(NUM_HOPS),
delimiter="\t",
names=["doc_id", "label", "prediction"])
pred_df.head()
```
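Beyond inspecting `pred_df` by eye, one hedged way to compare runs across `NUM_HOPS` values (and against the TD-matrix baseline) is to score the saved predictions against the ground-truth labels. The snippet below is not part of the original pipeline; it assumes the `pred_df` produced by the cell above.
```
# NMI and ARI both accept arbitrary label encodings, so the string newsgroup
# labels and the integer cluster IDs can be compared directly.
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

nmi = normalized_mutual_info_score(pred_df["label"], pred_df["prediction"])
ari = adjusted_rand_score(pred_df["label"], pred_df["prediction"])
print("NMI: {:.4f}, ARI: {:.4f}".format(nmi, ari))
```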
|
github_jupyter
|
import numpy as np
import os
import pandas as pd
from scipy.sparse import load_npz
from sklearn.cluster import MiniBatchKMeans
NUM_HOPS = 1
NUM_CLUSTERS = 20 # dataset is 20 newsgroups
DATA_DIR = "../data"
LABEL_FILEPATH = os.path.join(DATA_DIR, "labels.tsv")
PREDS_FILEPATH_TEMPLATE = os.path.join(DATA_DIR, "kmeans-preds-g{:d}.tsv")
GENPROBS_FILEPATH_TEMPLATE = os.path.join(DATA_DIR, "genprobs_{:d}.npy")
# # reusing for predictions for TD Matrix
# PREDS_FILEPATH_TEMPLATE = os.path.join(DATA_DIR, "kmeans-preds-td.tsv")
# GENPROBS_FILEPATH_TEMPLATE = os.path.join(DATA_DIR, "tdmatrix.npz")
row2docid_labels = {}
flabels = open(LABEL_FILEPATH, "r")
num_nodes = 0
for line in flabels:
doc_id, label = line.strip().split('\t')
row2docid_labels[num_nodes] = (doc_id, label)
num_nodes += 1
flabels.close()
X = np.load(GENPROBS_FILEPATH_TEMPLATE.format(NUM_HOPS))
# # reusing for predictions for TD Matrix
# X = load_npz(GENPROBS_FILEPATH_TEMPLATE)
kmeans = MiniBatchKMeans(n_clusters=NUM_CLUSTERS, random_state=42)
kmeans.fit(X)
preds = kmeans.predict(X)
num_predicted = 0
fpreds = open(PREDS_FILEPATH_TEMPLATE.format(NUM_HOPS), "w")
for row_id, pred in enumerate(preds):
if num_predicted % 1000 == 0:
print("{:d} rows predicted".format(num_predicted))
doc_id, label = row2docid_labels[row_id]
fpreds.write("{:s}\t{:s}\t{:d}\n".format(doc_id, label, pred))
num_predicted += 1
print("{:d} rows predicted, COMPLETE".format(num_predicted))
fpreds.close()
pred_df = pd.read_csv(PREDS_FILEPATH_TEMPLATE.format(NUM_HOPS),
delimiter="\t",
names=["doc_id", "label", "prediction"])
pred_df.head()
| 0.266166 | 0.857589 |
# Data Merging
Notebook to merge data from provided files
```
# Import libraries necessary for this project
import numpy as np
import pandas as pd
# Allows the use of display() for DataFrames
from IPython.display import display
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
import missingno as msno
from utils import *
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_rows', 500)
%load_ext autoreload
%autoreload 2
%matplotlib inline
import gc
gc.collect()
```
Utility function that uses value counts to return the most frequent value of a categorical column (a quick illustration follows the cell below).
```
most_frequent = lambda x: x.value_counts().index[0]
```
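For instance, on a toy Series (illustrative values only, not from the dataset):
```
# 'A' appears three times, so the helper returns 'A'.
example = pd.Series(['A', 'B', 'A', 'C', 'A'])
most_frequent(example)  # -> 'A'
```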
## Ext Bureau
bureau.csv
All client's previous credits provided by other financial institutions that were reported to Credit Bureau (for clients who have a loan in our sample).
For every loan in our sample, there are as many rows as number of credits the client had in Credit Bureau before the application date.
```
bureau = pd.read_csv('input/bureau.csv.zip')
bureau.head(2)
```
bureau_balance.csv
Monthly balances of previous credits in Credit Bureau.
This table has one row for each month of history of every previous credit reported to the Credit Bureau, i.e. the table has (#loans in sample * # of relative previous credits * # of months where we have some history observable for the previous credits) rows.
```
bureau_balance = pd.read_csv('input/bureau_balance.csv.zip')
display(bureau_balance.head(2))
```
Merging bureau and balance data on SK_ID_BUREAU.
```
bureau_balance_by_id = bureau_balance.groupby('SK_ID_BUREAU')
bureau_grouped_size = bureau_balance_by_id['MONTHS_BALANCE'].size()
bureau_grouped_max = bureau_balance_by_id['MONTHS_BALANCE'].max()
bureau_grouped_min = bureau_balance_by_id['MONTHS_BALANCE'].min()
# create separate column for each STATUS in bureau_balance table.
bureau_counts = bureau_balance_by_id['STATUS'].value_counts(normalize = False)
bureau_counts_unstacked = bureau_counts.unstack('STATUS')
bureau_counts_unstacked.columns = ['STATUS_0', 'STATUS_1','STATUS_2','STATUS_3','STATUS_4','STATUS_5','STATUS_C','STATUS_X',]
bureau_counts_unstacked['MONTHS_COUNT'] = bureau_grouped_size
bureau_counts_unstacked['MONTHS_MIN'] = bureau_grouped_min
bureau_counts_unstacked['MONTHS_MAX'] = bureau_grouped_max
bureau = bureau.join(bureau_counts_unstacked, how='left', on='SK_ID_BUREAU')
bureau.head(2)
```
The relation between applicant and bureau records is one-to-many, so to join the bureau data we can e.g. average it per applicant.
Let's group by SK_ID_CURR and take the average; this way we get data that we can merge with the app_train/test data.
```
bureau_by_skid = bureau.groupby('SK_ID_CURR')
avg_bureau = bureau_by_skid.mean()
avg_bureau['BUREAU_CNT'] = bureau[['SK_ID_BUREAU', 'SK_ID_CURR']].groupby('SK_ID_CURR').count()['SK_ID_BUREAU']
avg_bureau.drop(columns=['SK_ID_BUREAU'], inplace=True)
avg_bureau = avg_bureau.reset_index()
avg_bureau.head(2)
bureau_cols = avg_bureau.columns.tolist()
```
### Previous applications: group by SK_ID_CURR as well and take the average, as for the bureau data
previous_application.csv
All previous applications for Home Credit loans of clients who have loans in our sample.
There is one row for each previous application related to loans in our data sample.
```
previous_application = pd.read_csv('input/previous_application.csv.zip')
previous_application.head(2)
```
The previous-application relation to applicant data is one-to-many, so to join the data we need to average it per applicant.
```
avg_previous_app = previous_application.groupby('SK_ID_CURR').mean()
cnt_previous_app = previous_application[['SK_ID_CURR', 'SK_ID_PREV']].groupby('SK_ID_CURR').count()
# engineering a new column - number of previous applications
avg_previous_app['NUM_APPS'] = cnt_previous_app['SK_ID_PREV']
avg_previous_app.drop(columns=['SK_ID_PREV'], inplace=True)
```
Reset index after removing column
```
avg_previous_app = avg_previous_app.reset_index()
```
Rename columns that conflict with other files.
```
avg_previous_app = avg_previous_app.rename(index=str, columns={'AMT_ANNUITY': 'PREV_APP_AMT_ANNUITY'})
prev_app_cols = avg_previous_app.columns.tolist()
avg_previous_app.head(2)
```
### Credit card balance
POS_CASH_balance.csv
Monthly balance snapshots of previous POS (point of sales) and cash loans that the applicant had with Home Credit.
This table has one row for each month of history of every previous credit in Home Credit (consumer credit and cash loans) related to loans in our sample, i.e. the table has (#loans in sample * # of relative previous credits * # of months in which we have some history observable for the previous credits) rows.
```
POS_CASH_balance = pd.read_csv('input/POS_CASH_balance.csv.zip')
POS_CASH_balance.head(2)
```
Processing categorical columns. Additional columns added: number of unique statuses and most frequent status.
```
# number of unique statuses
nunique_status = POS_CASH_balance[['SK_ID_CURR', 'NAME_CONTRACT_STATUS']].groupby('SK_ID_CURR').nunique()
# find most frequent status
max_status = POS_CASH_balance[['SK_ID_CURR', 'NAME_CONTRACT_STATUS']].groupby('SK_ID_CURR').agg(most_frequent)
POS_CASH_balance['NUNIQUE_STATUS'] = nunique_status['NAME_CONTRACT_STATUS']
POS_CASH_balance['MAX_STATUS'] = max_status['NAME_CONTRACT_STATUS']
POS_CASH_balance.drop(columns=['SK_ID_PREV', 'NAME_CONTRACT_STATUS'], inplace=True)
POS_CASH_balance.head(2)
```
Average the data per applicant to be able to join it to the main data file.
```
avg_POS_CASH_balance = POS_CASH_balance.groupby('SK_ID_CURR').mean().reset_index()
avg_POS_CASH_balance.head(2)
pos_cash_cols = avg_POS_CASH_balance.columns.tolist()
```
### Payment data
installments_payments.csv
Repayment history for the previously disbursed credits in Home Credit related to the loans in our sample.
There is a) one row for every payment that was made plus b) one row each for missed payment.
One row is equivalent to one payment of one installment OR one installment corresponding to one payment of one previous Home Credit credit related to loans in our sample.
```
installments_payments = pd.read_csv('input/installments_payments.csv.zip')
installments_payments.head(2)
installments_payments.drop(columns=['SK_ID_PREV'], inplace=True)
```
Average the data per applicant to be able to join it to the main data file. Group by SK_ID_CURR and take the mean, max, and min values for each grouping.
```
avg_payments = installments_payments.groupby('SK_ID_CURR').mean().reset_index()
avg_payments_cols = avg_payments.columns.tolist()
avg_payments.head(2)
max_payments = installments_payments.groupby('SK_ID_CURR').max().reset_index()
max_payments_cols = max_payments.columns.tolist()
max_payments.head(2)
min_payments = installments_payments.groupby('SK_ID_CURR').min().reset_index()
min_payments_cols = min_payments.columns.tolist()
min_payments.head(2)
```
### Credit cards data
credit_card_balance.csv
Monthly balance snapshots of previous credit cards that the applicant has with Home Credit.
This table has one row for each month of history of every previous credit in Home Credit (consumer credit and cash loans) related to loans in our sample, i.e. the table has (#loans in sample * # of relative previous credit cards * # of months where we have some history observable for the previous credit card) rows.
```
credit_card_balance = pd.read_csv('input/credit_card_balance.csv.zip')
credit_card_balance.head(2)
# processing categorical columns before the merge
# find number of unique statuses
nunique_status = credit_card_balance[['SK_ID_CURR', 'NAME_CONTRACT_STATUS']].groupby('SK_ID_CURR').nunique()
# find most frequent status
max_status = credit_card_balance[['SK_ID_CURR', 'NAME_CONTRACT_STATUS']].groupby('SK_ID_CURR').agg(most_frequent)
credit_card_balance['NUNIQUE_STATUS'] = nunique_status['NAME_CONTRACT_STATUS']
credit_card_balance['MAX_STATUS'] = max_status['NAME_CONTRACT_STATUS']
credit_card_balance.drop(columns=['SK_ID_PREV', 'NAME_CONTRACT_STATUS'], inplace=True)
credit_card_balance.head(2)
credit_card_balance = credit_card_balance.groupby('SK_ID_CURR').mean().reset_index()
credit_card_balance.head(2)
```
#### Conflicting columns renaming
```
cols_rename = {'MONTHS_BALANCE': 'CC_MONTHS_BALANCE',
'NUNIQUE_STATUS': 'CC_NUNIQUE_STATUS',
'SK_DPD': 'CC_SK_DPD',
'SK_DPD_DEF': 'CC_SK_DPD_DEF'}
credit_card_balance = credit_card_balance.rename(index=str, columns=cols_rename)
credit_card_balance_cols = credit_card_balance.columns.tolist()
cols_rename = {'AMT_ANNUITY': 'B_AMT_ANNUITY'}
avg_bureau = avg_bureau.rename(index=str, columns=cols_rename)
bureau_cols = avg_bureau.columns.tolist()
cols_rename = {'AMT_CREDIT': 'P_AMT_CREDIT',
'AMT_GOODS_PRICE': 'P_AMT_GOODS_PRICE',
'HOUR_APPR_PROCESS_START': 'P_HOUR_APPR_PROCESS_START'}
avg_previous_app = avg_previous_app.rename(index=str, columns=cols_rename)
prev_app_cols = avg_previous_app.columns.tolist()
```
### Check Column Names Conflicts
The files contain the same column names; to join them without automatic renaming we need to find and rename those beforehand. The column lists checked below are:
* `bureau_cols`
* `prev_app_cols`
* `pos_cash_cols`
* `avg_payments_cols`
* `min_payments_cols`
* `max_payments_cols`
* `credit_card_balance_cols`
```
set(bureau_cols).intersection(prev_app_cols)
all_cols = set(bureau_cols + prev_app_cols)
all_cols.intersection(pos_cash_cols)
all_cols = set(bureau_cols + prev_app_cols + pos_cash_cols)
all_cols.intersection(avg_payments_cols)
all_cols = set(bureau_cols + prev_app_cols + pos_cash_cols + avg_payments_cols)
all_cols.intersection(credit_card_balance_cols)
# load the train application data here so its columns can be checked (it is re-read below before merging)
app_train = pd.read_csv('input/application_train.csv.zip')
app_train_cols = app_train.columns.tolist()
all_cols = set(bureau_cols + prev_app_cols + pos_cash_cols + avg_payments_cols + credit_card_balance_cols)
all_cols.intersection(app_train_cols)
```
## Merging everything to train and test data
Join the prepared data and store it into train.csv and test.csv so that the models can use all of the provided data.
```
app_train = pd.read_csv('input/application_train.csv.zip')
app_train = app_train.merge(avg_bureau, how='left', on='SK_ID_CURR')
app_train = app_train.merge(avg_previous_app, how='left', on='SK_ID_CURR')
app_train = app_train.merge(avg_POS_CASH_balance, how='left', on='SK_ID_CURR')
app_train = app_train.merge(credit_card_balance, how='left', on='SK_ID_CURR')
app_train = app_train.merge(avg_payments, how='left', on='SK_ID_CURR')
#app_train = app_train.merge(min_payments, how='left', on='SK_ID_CURR')
#app_train = app_train.merge(max_payments, how='left', on='SK_ID_CURR')
app_train.shape
app_train.to_csv('input/train.csv', index=False)
app_train.head(2)
```
Test data
```
app_test = pd.read_csv('input/application_test.csv.zip')
app_test = app_test.merge(avg_bureau, how='left', on='SK_ID_CURR')
app_test = app_test.merge(avg_previous_app, how='left', on='SK_ID_CURR')
app_test = app_test.merge(avg_POS_CASH_balance, how='left', on='SK_ID_CURR')
app_test = app_test.merge(credit_card_balance, how='left', on='SK_ID_CURR')
app_test = app_test.merge(avg_payments, how='left', on='SK_ID_CURR')
#app_test = app_test.merge(min_payments, how='left', on='SK_ID_CURR')
#app_test = app_test.merge(max_payments, how='left', on='SK_ID_CURR')
app_test.shape
app_test.to_csv('input/test.csv', index=False)
app_test.head(2)
```
|
github_jupyter
|
# Import libraries necessary for this project
import numpy as np
import pandas as pd
# Allows the use of display() for DataFrames
from IPython.display import display
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
import missingno as msno
from utils import *
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_rows', 500)
%load_ext autoreload
%autoreload 2
%matplotlib inline
import gc
gc.collect()
most_frequent = lambda x: x.value_counts().index[0]
bureau = pd.read_csv('input/bureau.csv.zip')
bureau.head(2)
bureau_balance = pd.read_csv('input/bureau_balance.csv.zip')
display(bureau_balance.head(2))
bureau_balance_by_id = bureau_balance.groupby('SK_ID_BUREAU')
bureau_grouped_size = bureau_balance_by_id['MONTHS_BALANCE'].size()
bureau_grouped_max = bureau_balance_by_id['MONTHS_BALANCE'].max()
bureau_grouped_min = bureau_balance_by_id['MONTHS_BALANCE'].min()
# create separate column for each STATUS in bureau_balance table.
bureau_counts = bureau_balance_by_id['STATUS'].value_counts(normalize = False)
bureau_counts_unstacked = bureau_counts.unstack('STATUS')
bureau_counts_unstacked.columns = ['STATUS_0', 'STATUS_1','STATUS_2','STATUS_3','STATUS_4','STATUS_5','STATUS_C','STATUS_X',]
bureau_counts_unstacked['MONTHS_COUNT'] = bureau_grouped_size
bureau_counts_unstacked['MONTHS_MIN'] = bureau_grouped_min
bureau_counts_unstacked['MONTHS_MAX'] = bureau_grouped_max
bureau = bureau.join(bureau_counts_unstacked, how='left', on='SK_ID_BUREAU')
bureau.head(2)
bureau_by_skid = bureau.groupby('SK_ID_CURR')
avg_bureau = bureau_by_skid.mean()
avg_bureau['BUREAU_CNT'] = bureau[['SK_ID_BUREAU', 'SK_ID_CURR']].groupby('SK_ID_CURR').count()['SK_ID_BUREAU']
avg_bureau.drop(columns=['SK_ID_BUREAU'], inplace=True)
avg_bureau = avg_bureau.reset_index()
avg_bureau.head(2)
bureau_cols = avg_bureau.columns.tolist()
previous_application = pd.read_csv('input/previous_application.csv.zip')
previous_application.head(2)
avg_previous_app = previous_application.groupby('SK_ID_CURR').mean()
cnt_previous_app = previous_application[['SK_ID_CURR', 'SK_ID_PREV']].groupby('SK_ID_CURR').count()
# engineering a new column - number of previous applications
avg_previous_app['NUM_APPS'] = cnt_previous_app['SK_ID_PREV']
avg_previous_app.drop(columns=['SK_ID_PREV'], inplace=True)
avg_previous_app = avg_previous_app.reset_index()
avg_previous_app = avg_previous_app.rename(index=str, columns={'AMT_ANNUITY': 'PREV_APP_AMT_ANNUITY'})
prev_app_cols = avg_previous_app.columns.tolist()
avg_previous_app.head(2)
POS_CASH_balance = pd.read_csv('input/POS_CASH_balance.csv.zip')
POS_CASH_balance.head(2)
# number of unique statuses
nunique_status = POS_CASH_balance[['SK_ID_CURR', 'NAME_CONTRACT_STATUS']].groupby('SK_ID_CURR').nunique()
# find most frequent status
max_status = POS_CASH_balance[['SK_ID_CURR', 'NAME_CONTRACT_STATUS']].groupby('SK_ID_CURR').agg(most_frequent)
POS_CASH_balance['NUNIQUE_STATUS'] = nunique_status['NAME_CONTRACT_STATUS']
POS_CASH_balance['MAX_STATUS'] = max_status['NAME_CONTRACT_STATUS']
POS_CASH_balance.drop(columns=['SK_ID_PREV', 'NAME_CONTRACT_STATUS'], inplace=True)
POS_CASH_balance.head(2)
avg_POS_CASH_balance = POS_CASH_balance.groupby('SK_ID_CURR').mean().reset_index()
avg_POS_CASH_balance.head(2)
pos_cash_cols = avg_POS_CASH_balance.columns.tolist()
installments_payments = pd.read_csv('input/installments_payments.csv.zip')
installments_payments.head(2)
installments_payments.drop(columns=['SK_ID_PREV'], inplace=True)
avg_payments = installments_payments.groupby('SK_ID_CURR').mean().reset_index()
avg_payments_cols = avg_payments.columns.tolist()
avg_payments.head(2)
max_payments = installments_payments.groupby('SK_ID_CURR').max().reset_index()
max_payments_cols = max_payments.columns.tolist()
max_payments.head(2)
min_payments = installments_payments.groupby('SK_ID_CURR').min().reset_index()
min_payments_cols = min_payments.columns.tolist()
min_payments.head(2)
credit_card_balance = pd.read_csv('input/credit_card_balance.csv.zip')
credit_card_balance.head(2)
# processing categorical columns before the merge
# find number of unique statuses
nunique_status = credit_card_balance[['SK_ID_CURR', 'NAME_CONTRACT_STATUS']].groupby('SK_ID_CURR').nunique()
# find most frequent status
max_status = credit_card_balance[['SK_ID_CURR', 'NAME_CONTRACT_STATUS']].groupby('SK_ID_CURR').agg(most_frequent)
credit_card_balance['NUNIQUE_STATUS'] = nunique_status['NAME_CONTRACT_STATUS']
credit_card_balance['MAX_STATUS'] = max_status['NAME_CONTRACT_STATUS']
credit_card_balance.drop(columns=['SK_ID_PREV', 'NAME_CONTRACT_STATUS'], inplace=True)
credit_card_balance.head(2)
credit_card_balance = credit_card_balance.groupby('SK_ID_CURR').mean().reset_index()
credit_card_balance.head(2)
cols_rename = {'MONTHS_BALANCE': 'CC_MONTHS_BALANCE',
'NUNIQUE_STATUS': 'CC_NUNIQUE_STATUS',
'SK_DPD': 'CC_SK_DPD',
'SK_DPD_DEF': 'CC_SK_DPD_DEF'}
credit_card_balance = credit_card_balance.rename(index=str, columns=cols_rename)
credit_card_balance_cols = credit_card_balance.columns.tolist()
cols_rename = {'AMT_ANNUITY': 'B_AMT_ANNUITY'}
avg_bureau = avg_bureau.rename(index=str, columns=cols_rename)
bureau_cols = avg_bureau.columns.tolist()
cols_rename = {'AMT_CREDIT': 'P_AMT_CREDIT',
'AMT_GOODS_PRICE': 'P_AMT_GOODS_PRICE',
'HOUR_APPR_PROCESS_START': 'P_HOUR_APPR_PROCESS_START'}
avg_previous_app = avg_previous_app.rename(index=str, columns=cols_rename)
prev_app_cols = avg_previous_app.columns.tolist()
set(bureau_cols).intersection(prev_app_cols)
all_cols = set(bureau_cols + prev_app_cols)
all_cols.intersection(pos_cash_cols)
all_cols = set(bureau_cols + prev_app_cols + pos_cash_cols)
all_cols.intersection(avg_payments_cols)
all_cols = set(bureau_cols + prev_app_cols + pos_cash_cols + avg_payments_cols)
all_cols.intersection(credit_card_balance_cols)
app_train_cols = app_train.columns.tolist()
all_cols = set(bureau_cols + prev_app_cols + pos_cash_cols + avg_payments_cols + credit_card_balance_cols)
all_cols.intersection(app_train_cols)
app_train = pd.read_csv('input/application_train.csv.zip')
app_train = app_train.merge(avg_bureau, how='left', on='SK_ID_CURR')
app_train = app_train.merge(avg_previous_app, how='left', on='SK_ID_CURR')
app_train = app_train.merge(avg_POS_CASH_balance, how='left', on='SK_ID_CURR')
app_train = app_train.merge(credit_card_balance, how='left', on='SK_ID_CURR')
app_train = app_train.merge(avg_payments, how='left', on='SK_ID_CURR')
#app_train = app_train.merge(min_payments, how='left', on='SK_ID_CURR')
#app_train = app_train.merge(max_payments, how='left', on='SK_ID_CURR')
app_train.shape
app_train.to_csv('input/train.csv', index=False)
app_train.head(2)
app_test = pd.read_csv('input/application_test.csv.zip')
app_test = app_test.merge(avg_bureau, how='left', on='SK_ID_CURR')
app_test = app_test.merge(avg_previous_app, how='left', on='SK_ID_CURR')
app_test = app_test.merge(avg_POS_CASH_balance, how='left', on='SK_ID_CURR')
app_test = app_test.merge(credit_card_balance, how='left', on='SK_ID_CURR')
app_test = app_test.merge(avg_payments, how='left', on='SK_ID_CURR')
#app_test = app_test.merge(min_payments, how='left', on='SK_ID_CURR')
#app_test = app_test.merge(max_payments, how='left', on='SK_ID_CURR')
app_test.shape
app_test.to_csv('input/test.csv', index=False)
app_test.head(2)
| 0.282592 | 0.866867 |
# Deep Gaussian Processes
## Introduction
In this notebook, we provide a GPyTorch implementation of deep Gaussian processes, where training and inference are performed using the method of Salimbeni et al., 2017 (https://arxiv.org/abs/1705.08933), adapted to CG-based inference.
We'll be training a simple two layer deep GP on the `elevators` UCI dataset.
```
%set_env CUDA_VISIBLE_DEVICES=0
import torch
import tqdm
import gpytorch
from gpytorch.means import ConstantMean, LinearMean
from gpytorch.kernels import RBFKernel, ScaleKernel
from gpytorch.variational import VariationalStrategy, CholeskyVariationalDistribution
from gpytorch.distributions import MultivariateNormal
from gpytorch.models import ApproximateGP, GP
from gpytorch.mlls import VariationalELBO, AddedLossTerm
from gpytorch.likelihoods import GaussianLikelihood
from gpytorch.models.deep_gps import DeepGPLayer, DeepGP
from gpytorch.mlls import DeepApproximateMLL
```
### Loading Data
For this example notebook, we'll be using the `elevators` UCI dataset used in the paper. Running the next cell downloads a copy of the dataset that has already been scaled and normalized appropriately. For this notebook, we'll simply be splitting the data using the first 80% of the data as training and the last 20% as testing.
**Note**: Running the next cell will attempt to download a ~400 KB dataset file to the current directory.
```
import urllib.request
import os
from scipy.io import loadmat
from math import floor
# this is for running the notebook in our testing framework
smoke_test = ('CI' in os.environ)
if not smoke_test and not os.path.isfile('../elevators.mat'):
print('Downloading \'elevators\' UCI dataset...')
urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1jhWL3YUHvXIaftia4qeAyDwVxo6j1alk', '../elevators.mat')
if smoke_test: # this is for running the notebook in our testing framework
X, y = torch.randn(1000, 3), torch.randn(1000)
else:
data = torch.Tensor(loadmat('../elevators.mat')['data'])
X = data[:, :-1]
X = X - X.min(0)[0]
X = 2 * (X / X.max(0)[0]) - 1
y = data[:, -1]
train_n = int(floor(0.8 * len(X)))
train_x = X[:train_n, :].contiguous()
train_y = y[:train_n].contiguous()
test_x = X[train_n:, :].contiguous()
test_y = y[train_n:].contiguous()
if torch.cuda.is_available():
train_x, train_y, test_x, test_y = train_x.cuda(), train_y.cuda(), test_x.cuda(), test_y.cuda()
from torch.utils.data import TensorDataset, DataLoader
train_dataset = TensorDataset(train_x, train_y)
train_loader = DataLoader(train_dataset, batch_size=1024, shuffle=True)
```
## Defining GP layers
In GPyTorch, defining a GP involves extending one of our abstract GP models and defining a `forward` method that returns the prior. For deep GPs, things are similar, but there are two abstract GP models that must be overwritten: one for hidden layers and one for the deep GP model itself.
In the next cell, we define an example deep GP hidden layer. This looks very similar to every other variational GP you might define. However, there are a few key differences:
1. Instead of extending `ApproximateGP`, we extend `DeepGPLayer`.
2. `DeepGPLayers` need a number of input dimensions, a number of output dimensions, and a number of samples. This is kind of like a linear layer in a standard neural network -- `input_dims` defines how many inputs this hidden layer will expect, and `output_dims` defines how many hidden GPs to create outputs for.
In this particular example, we make a particularly fancy `DeepGPLayer` that has "skip connections" with previous layers, similar to a ResNet.
```
class ToyDeepGPHiddenLayer(DeepGPLayer):
def __init__(self, input_dims, output_dims, num_inducing=128, mean_type='constant'):
if output_dims is None:
inducing_points = torch.randn(num_inducing, input_dims)
batch_shape = torch.Size([])
else:
inducing_points = torch.randn(output_dims, num_inducing, input_dims)
batch_shape = torch.Size([output_dims])
variational_distribution = CholeskyVariationalDistribution(
num_inducing_points=num_inducing,
batch_shape=batch_shape
)
variational_strategy = VariationalStrategy(
self,
inducing_points,
variational_distribution,
learn_inducing_locations=True
)
super(ToyDeepGPHiddenLayer, self).__init__(variational_strategy, input_dims, output_dims)
if mean_type == 'constant':
self.mean_module = ConstantMean(batch_shape=batch_shape)
else:
self.mean_module = LinearMean(input_dims)
self.covar_module = ScaleKernel(
RBFKernel(batch_shape=batch_shape, ard_num_dims=input_dims),
batch_shape=batch_shape, ard_num_dims=None
)
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return MultivariateNormal(mean_x, covar_x)
def __call__(self, x, *other_inputs, **kwargs):
"""
Overriding __call__ isn't strictly necessary, but it lets us add concatenation based skip connections
easily. For example, hidden_layer2(hidden_layer1_outputs, inputs) will pass the concatenation of the first
hidden layer's outputs and the input data to hidden_layer2.
"""
if len(other_inputs):
if isinstance(x, gpytorch.distributions.MultitaskMultivariateNormal):
x = x.rsample()
processed_inputs = [
inp.unsqueeze(0).expand(self.num_samples, *inp.shape)
for inp in other_inputs
]
x = torch.cat([x] + processed_inputs, dim=-1)
return super().__call__(x, are_samples=bool(len(other_inputs)))
```
## Building the deep GP
Now that we've defined a class for our hidden layers and a class for our output layer, we can build our deep GP. To do this, we create a `Module` whose forward is simply responsible for forwarding through the various layers.
This also makes various network connectivities easy to express. For example, calling
```
hidden_rep2 = self.second_hidden_layer(hidden_rep1, inputs)
```
in forward would cause the second hidden layer to use both the output of the first hidden layer and the input data as inputs, concatenating the two together.
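For instance, a rough sketch of what such a forward pass could look like with an extra hidden layer; the attribute names `first_hidden_layer` and `second_hidden_layer` are illustrative and are not defined in the model built below.
```
# Illustrative only: the second hidden layer receives both the first layer's
# outputs and the raw inputs (a concatenation-based skip connection).
def forward(self, inputs):
    hidden_rep1 = self.first_hidden_layer(inputs)
    hidden_rep2 = self.second_hidden_layer(hidden_rep1, inputs)  # skip connection
    return self.last_layer(hidden_rep2)
```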
```
num_output_dims = 2 if smoke_test else 10
class DeepGP(DeepGP):
def __init__(self, train_x_shape):
hidden_layer = ToyDeepGPHiddenLayer(
input_dims=train_x_shape[-1],
output_dims=num_output_dims,
mean_type='linear',
)
last_layer = ToyDeepGPHiddenLayer(
input_dims=hidden_layer.output_dims,
output_dims=None,
mean_type='constant',
)
super().__init__()
self.hidden_layer = hidden_layer
self.last_layer = last_layer
self.likelihood = GaussianLikelihood()
def forward(self, inputs):
hidden_rep1 = self.hidden_layer(inputs)
output = self.last_layer(hidden_rep1)
return output
def predict(self, test_loader):
with torch.no_grad():
mus = []
variances = []
lls = []
for x_batch, y_batch in test_loader:
preds = model.likelihood(model(x_batch))
mus.append(preds.mean)
variances.append(preds.variance)
lls.append(model.likelihood.log_marginal(y_batch, model(x_batch)))
return torch.cat(mus, dim=-1), torch.cat(variances, dim=-1), torch.cat(lls, dim=-1)
model = DeepGP(train_x.shape)
if torch.cuda.is_available():
model = model.cuda()
```
## Objective function (approximate marginal log likelihood/ELBO)
Because deep GPs use some amount of internal sampling (even in the stochastic variational setting), we need to handle the objective function (e.g. the ELBO) in a slightly different way. To do this, wrap the standard objective function (e.g. `gpytorch.mlls.VariationalELBO`) with a `gpytorch.mlls.DeepApproximateMLL`.
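Concretely, the wrapping mirrors the `mll` line in the training cell below; this is just the same call pulled out for emphasis, with `num_data` set to the number of training points.
```
# DeepApproximateMLL averages the wrapped ELBO over the deep GP's internal samples.
mll = DeepApproximateMLL(
    VariationalELBO(model.likelihood, model, num_data=train_x.shape[-2])
)
loss = -mll(model(x_batch), y_batch)  # quantity minimized during training
```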
## Training/Testing
The training loop for a deep GP looks similar to a standard GP model with stochastic variational inference.
```
# this is for running the notebook in our testing framework
num_epochs = 1 if smoke_test else 10
num_samples = 3 if smoke_test else 10
optimizer = torch.optim.Adam([
{'params': model.parameters()},
], lr=0.01)
mll = DeepApproximateMLL(VariationalELBO(model.likelihood, model, train_x.shape[-2]))
epochs_iter = tqdm.notebook.tqdm(range(num_epochs), desc="Epoch")
for i in epochs_iter:
# Within each iteration, we will go over each minibatch of data
minibatch_iter = tqdm.notebook.tqdm(train_loader, desc="Minibatch", leave=False)
for x_batch, y_batch in minibatch_iter:
with gpytorch.settings.num_likelihood_samples(num_samples):
optimizer.zero_grad()
output = model(x_batch)
loss = -mll(output, y_batch)
loss.backward()
optimizer.step()
minibatch_iter.set_postfix(loss=loss.item())
```
The output distribution of a deep GP in this framework is actually a mixture of `num_samples` Gaussians for each output. We get predictions the same way with all GPyTorch models, but we do currently need to do some reshaping to get the means and variances in a reasonable form.
Note that you may have to do more epochs of training than this example to get optimal performance; however, the performance on this particular dataset is pretty good after 10.
```
import gpytorch
import math
test_dataset = TensorDataset(test_x, test_y)
test_loader = DataLoader(test_dataset, batch_size=1024)
model.eval()
predictive_means, predictive_variances, test_lls = model.predict(test_loader)
rmse = torch.mean(torch.pow(predictive_means.mean(0) - test_y, 2)).sqrt()
print(f"RMSE: {rmse.item()}, NLL: {-test_lls.mean().item()}")
```
|
github_jupyter
|
%set_env CUDA_VISIBLE_DEVICES=0
import torch
import tqdm
import gpytorch
from gpytorch.means import ConstantMean, LinearMean
from gpytorch.kernels import RBFKernel, ScaleKernel
from gpytorch.variational import VariationalStrategy, CholeskyVariationalDistribution
from gpytorch.distributions import MultivariateNormal
from gpytorch.models import ApproximateGP, GP
from gpytorch.mlls import VariationalELBO, AddedLossTerm
from gpytorch.likelihoods import GaussianLikelihood
from gpytorch.models.deep_gps import DeepGPLayer, DeepGP
from gpytorch.mlls import DeepApproximateMLL
import urllib.request
import os
from scipy.io import loadmat
from math import floor
# this is for running the notebook in our testing framework
smoke_test = ('CI' in os.environ)
if not smoke_test and not os.path.isfile('../elevators.mat'):
print('Downloading \'elevators\' UCI dataset...')
urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1jhWL3YUHvXIaftia4qeAyDwVxo6j1alk', '../elevators.mat')
if smoke_test: # this is for running the notebook in our testing framework
X, y = torch.randn(1000, 3), torch.randn(1000)
else:
data = torch.Tensor(loadmat('../elevators.mat')['data'])
X = data[:, :-1]
X = X - X.min(0)[0]
X = 2 * (X / X.max(0)[0]) - 1
y = data[:, -1]
train_n = int(floor(0.8 * len(X)))
train_x = X[:train_n, :].contiguous()
train_y = y[:train_n].contiguous()
test_x = X[train_n:, :].contiguous()
test_y = y[train_n:].contiguous()
if torch.cuda.is_available():
train_x, train_y, test_x, test_y = train_x.cuda(), train_y.cuda(), test_x.cuda(), test_y.cuda()
from torch.utils.data import TensorDataset, DataLoader
train_dataset = TensorDataset(train_x, train_y)
train_loader = DataLoader(train_dataset, batch_size=1024, shuffle=True)
class ToyDeepGPHiddenLayer(DeepGPLayer):
def __init__(self, input_dims, output_dims, num_inducing=128, mean_type='constant'):
if output_dims is None:
inducing_points = torch.randn(num_inducing, input_dims)
batch_shape = torch.Size([])
else:
inducing_points = torch.randn(output_dims, num_inducing, input_dims)
batch_shape = torch.Size([output_dims])
variational_distribution = CholeskyVariationalDistribution(
num_inducing_points=num_inducing,
batch_shape=batch_shape
)
variational_strategy = VariationalStrategy(
self,
inducing_points,
variational_distribution,
learn_inducing_locations=True
)
super(ToyDeepGPHiddenLayer, self).__init__(variational_strategy, input_dims, output_dims)
if mean_type == 'constant':
self.mean_module = ConstantMean(batch_shape=batch_shape)
else:
self.mean_module = LinearMean(input_dims)
self.covar_module = ScaleKernel(
RBFKernel(batch_shape=batch_shape, ard_num_dims=input_dims),
batch_shape=batch_shape, ard_num_dims=None
)
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return MultivariateNormal(mean_x, covar_x)
def __call__(self, x, *other_inputs, **kwargs):
"""
Overriding __call__ isn't strictly necessary, but it lets us add concatenation based skip connections
easily. For example, hidden_layer2(hidden_layer1_outputs, inputs) will pass the concatenation of the first
hidden layer's outputs and the input data to hidden_layer2.
"""
if len(other_inputs):
if isinstance(x, gpytorch.distributions.MultitaskMultivariateNormal):
x = x.rsample()
processed_inputs = [
inp.unsqueeze(0).expand(self.num_samples, *inp.shape)
for inp in other_inputs
]
x = torch.cat([x] + processed_inputs, dim=-1)
return super().__call__(x, are_samples=bool(len(other_inputs)))
hidden_rep2 = self.second_hidden_layer(hidden_rep1, inputs)
num_output_dims = 2 if smoke_test else 10
class DeepGP(DeepGP):
def __init__(self, train_x_shape):
hidden_layer = ToyDeepGPHiddenLayer(
input_dims=train_x_shape[-1],
output_dims=num_output_dims,
mean_type='linear',
)
last_layer = ToyDeepGPHiddenLayer(
input_dims=hidden_layer.output_dims,
output_dims=None,
mean_type='constant',
)
super().__init__()
self.hidden_layer = hidden_layer
self.last_layer = last_layer
self.likelihood = GaussianLikelihood()
def forward(self, inputs):
hidden_rep1 = self.hidden_layer(inputs)
output = self.last_layer(hidden_rep1)
return output
def predict(self, test_loader):
with torch.no_grad():
mus = []
variances = []
lls = []
for x_batch, y_batch in test_loader:
preds = model.likelihood(model(x_batch))
mus.append(preds.mean)
variances.append(preds.variance)
lls.append(model.likelihood.log_marginal(y_batch, model(x_batch)))
return torch.cat(mus, dim=-1), torch.cat(variances, dim=-1), torch.cat(lls, dim=-1)
model = DeepGP(train_x.shape)
if torch.cuda.is_available():
model = model.cuda()
# this is for running the notebook in our testing framework
num_epochs = 1 if smoke_test else 10
num_samples = 3 if smoke_test else 10
optimizer = torch.optim.Adam([
{'params': model.parameters()},
], lr=0.01)
mll = DeepApproximateMLL(VariationalELBO(model.likelihood, model, train_x.shape[-2]))
epochs_iter = tqdm.notebook.tqdm(range(num_epochs), desc="Epoch")
for i in epochs_iter:
# Within each iteration, we will go over each minibatch of data
minibatch_iter = tqdm.notebook.tqdm(train_loader, desc="Minibatch", leave=False)
for x_batch, y_batch in minibatch_iter:
with gpytorch.settings.num_likelihood_samples(num_samples):
optimizer.zero_grad()
output = model(x_batch)
loss = -mll(output, y_batch)
loss.backward()
optimizer.step()
minibatch_iter.set_postfix(loss=loss.item())
import gpytorch
import math
test_dataset = TensorDataset(test_x, test_y)
test_loader = DataLoader(test_dataset, batch_size=1024)
model.eval()
predictive_means, predictive_variances, test_lls = model.predict(test_loader)
rmse = torch.mean(torch.pow(predictive_means.mean(0) - test_y, 2)).sqrt()
print(f"RMSE: {rmse.item()}, NLL: {-test_lls.mean().item()}")
| 0.815747 | 0.987664 |
# Bokeh Userguide
https://docs.bokeh.org/en/latest/docs/user_guide.html
## Quickstart
Bokeh is an interactive visualization library that renders in the web browser. To make simple interactive data visualization easy while still working as a powerful visualization tool, Bokeh offers users two interfaces.
* bokeh.models: a low-level interface that lets developers build interactive plots that reflect their intent more precisely (a rough sketch follows this list).
* bokeh.plotting: a higher-level interface that lets you build interactive plots much more easily.
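As a rough illustration of the difference (this sketch is not from the original guide, and the calls assume a classic pre-3.0 Bokeh API), the same kind of scatter can be assembled piece by piece through bokeh.models:
```
# Low-level style: build the plot object manually from models.
from bokeh.models import Plot, DataRange1d, LinearAxis, Circle, ColumnDataSource
from bokeh.io import show

source = ColumnDataSource(data=dict(x=[1, 2, 3], y=[4, 2, 5]))
plot = Plot(x_range=DataRange1d(), y_range=DataRange1d(), plot_width=300, plot_height=300)
plot.add_glyph(source, Circle(x="x", y="y", size=10))
plot.add_layout(LinearAxis(), "below")   # x axis
plot.add_layout(LinearAxis(), "left")    # y axis
show(plot)
```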
### Getting Started _ bokeh.plotting
#### Typical workflow with bokeh.plotting
1. Prepare the data as a Python list, NumPy array, or pandas Series.
<br><br>
2. Tell Bokeh where the output goes (use output_file() or output_notebook()).
<br><br>
3. Create a canvas object (figure) and set its detailed options.
<br><br>
4. Call a glyph method (e.g. line) and set its detailed options.
<br><br>
5. Use the show() or save() method to handle the result.
<p>As a first Bokeh example, let's draw a simple line plot. <br>(If you ran the code below with output_file(), the plot would appear on a new page.)</p>
## Displaying plots inside the Jupyter notebook
* If you call output_file("filename.html"), the result is saved as an HTML file and, at the same time, a new browser tab opens showing that file. Since we are practicing Bokeh's core plotting features here, let's have the plots rendered inside the Jupyter notebook instead. <br>
(For that reason, any output_file() call in the cells below has been left commented out.)
<br><br>
* The code below is how to make Bokeh render plots inside the Jupyter notebook (link: https://stackoverflow.com/questions/51512907/how-to-stop-bokeh-from-opening-a-new-tab-in-jupyter-notebook)<br>
Once this is set up, every plot is rendered directly in the notebook without having to specify an output target each time.
```
import bokeh.io
# Reset how Bokeh output is displayed.
bokeh.io.reset_output()
# Display Bokeh output inside the notebook.
bokeh.io.output_notebook()
# Import what we need from bokeh.plotting, Bokeh's simple plotting interface.
# figure : acts as the canvas, just like in matplotlib.
# output_file : sets the name and format used to save the final result.
# show : displays the plot.
from bokeh.plotting import figure, output_file, show
# Prepare the data.
x = [1, 2, 3, 4, 5]
y = [6, 7, 2, 4, 5]
# # Declare that the result will be saved as an HTML document named lines.html.
# output_file("lines.html")
# figure lets us set options for the canvas.
# title is the canvas title, x_axis_label / y_axis_label describe the x and y axes.
p = figure(title="simple line example", x_axis_label='x', y_axis_label='y')
# Draw a line plot: figure is the object, line is the method.
p.line(x, y, legend="Temp.", line_width=2)
# Show the plot.
show(p)
```
### Drawing several glyphs on one figure _ bokeh.plotting
```
from bokeh.plotting import figure, output_file, show
# Prepare the data to plot.
x = [0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
y0 = [i**2 for i in x]
y1 = [10**i for i in x]
y2 = [10**(i**2) for i in x]
# # Set how the result is saved (as an HTML document).
# output_file("log_lines.html")
# Create the figure and configure it.
# tools = the interactive tools attached to this plot (details below):
## >>> pan : drag the plot around, box_zoom : zoom into a selected box
## >>> reset : restore the original view, save : save the plot as a png file
# y_axis_type = how the y values are transformed before being displayed.
## >>> y_axis_type = "log" applies a log scale to the y axis.
# y_range = the range of values on the y axis.
## x_axis_label / y_axis_label = names for the x and y axes.
p = figure(
   tools="pan,box_zoom,reset,save",
   y_axis_type="log", y_range=[0.001, 10**11], title="log axis example",
   x_axis_label='sections', y_axis_label='particles'
)
# Draw a basic line; it appears in the legend as y=x.
p.line(x, x, legend="y=x")
# Draw circles at (x, x) and fill them with white.
## circle's fill_color option fills the inside of each point.
p.circle(x, x, legend="y=x", fill_color="white", size=8)
# The line method's line_width option sets the line thickness.
p.line(x, y0, legend="y=x^2", line_width=3)
# The line method's line_color option sets the line color.
p.line(x, y1, legend="y=10^x", line_color="red")
# The circle method's line_color option sets the outline color of each point.
p.circle(x, y1, legend="y=10^x", fill_color="red", line_color="red", size=6)
# The line_dash option draws the line dashed with the given pixel pattern.
p.line(x, y2, legend="y=10^x^2", line_color="orange", line_dash="4 4")
# Show the result.
show(p)
```
## How Bokeh relates to other tools
### GitHub and Bokeh
Bokeh plots are not rendered by the notebook preview that GitHub provides. The reason is that Bokeh output is driven by JavaScript, and GitHub's preview strips out all JavaScript code.
### Bokeh with R and Scala
Bokeh also works from R and Scala.
### Downloading Bokeh's sample data
Bokeh ships sample data you can use to try out plots; it can be downloaded from the Windows command line or bash with the command below.
<table>
<tr>
<th> bokeh sampledata </th>
</tr>
</table>
## Bokeh's key concepts
1. plot
A plot is the umbrella term for everything drawn on a figure. The glyphs, guides, annotations, ranges, and so on listed below are all elements that make up a plot.
<br><br>
2. glyphs
Glyphs are the shapes drawn on the figure (lines, circles, and so on).
<br><br>
3. guides and annotations
Guides are visual aids for reading the figure, such as grids and bands. Annotations are things like the x/y axis labels and titles.
<br><br>
4. ranges
Ranges are the numeric extents shown on the plot's x and y axes. Concretely, they are set with the figure object's x_range and y_range options; the value is given as a list or tuple, i.e. (range start, range end) or [range start, range end]. (A tiny example follows this list.)
<br><br>
5. resources
Resources determine the form in which the output is produced; output_file() is normally used. If you want to check the result right away, pass mode = "inline" as an option to output_file().
### A quick numpy warm-up for the exercise below: np.random.random
* np.random.random(N): returns an array of N random floats between 0 and 1.
<br><br>
* array * K: multiplies each element of the array by K.
```
import numpy as np
np.random.random(2)
k = np.random.random(size=10) * 100
```
### Exercise: draw a plot with vectorized colors and sizes
```
import numpy as np
from bokeh.plotting import figure, output_file, show
# Build the data.
N = 4000
# np.random.random(size = n)
## >> np.random.random(size = n) returns an array with that many elements.
## >> (np.random.random(N) returns N random numbers between 0 and 1.)
## >> Multiplying a numpy array by a scalar multiplies element-wise (it is not a dot product).
x = np.random.random(size=N) * 100
y = np.random.random(size=N) * 100
radii = np.random.random(size=N) * 1.5
# Build the list of colors.
## >> Color codes must be hexadecimal (hex), so we build hex color codes.
## >> %02x formats a number as two hex digits.
## >> 150 in hex is 96.
colors = [
    "#%02x%02x%02x" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)
]
# # Save the result as an html file, loading BokehJS in cdn mode.
# output_file("color_scatter.html", title="color_scatter.py example", mode="cdn")
# Define the tools to use (the listed order does not change their behaviour).
# crosshair provides a crosshair cursor.
# pan moves the plot around.
# wheel_zoom zooms with the scroll wheel.
# box_zoom zooms into a selected box.
# box_select selects a box-shaped region (only points inside the box stay selected).
# lasso_select selects a free-form region.
TOOLS = "crosshair,pan,wheel_zoom,box_zoom,reset,box_select,lasso_select"
# Set up the figure: load the tools defined above and set the extents with x_range and y_range.
p = figure(tools=TOOLS, x_range=(0, 100), y_range=(0, 100))
# add a circle renderer with vectorized colors and sizes
# Draw circles from the x and y lists and give each one a radius.
# fill_color takes the list of colors built above.
# fill_alpha controls opacity: 1 shows the color fully, values below 1 add transparency.
# line_color is the outline color; None draws no outline.
p.circle(x, y, radius=radii, fill_color=colors, fill_alpha=0.6, line_color=None)
# show the results
show(p)
```
## Linking tool behaviour across multiple plots (panning, brushing)
* <h3>panning</h3>: dragging the canvas so that the view moves inside the canvas.
<br>
* <h3>brushing</h3>:
<br>
<p> Both features normally operate on a single plot, but when several plots are drawn together they can also be made to work across all of them at once.</p>
#### <p> First, let's look at code that implements linked panning across several plots</p>
```
import numpy as np
from bokeh.layouts import gridplot
from bokeh.plotting import figure, output_file, show
# Prepare the data to plot.
N = 100
## np.linspace(start, stop, num): num evenly spaced values from start to stop.
x = np.linspace(0, 4*np.pi, N)
y0 = np.sin(x)
y1 = np.cos(x)
y2 = np.sin(x) + np.cos(x)
# # Set how the result is saved.
# output_file("linked_panning.html")
# Set up a new canvas for each plot >> three plots need three figures.
s1 = figure(width=250, plot_height=250, title=None)
# Draw circles; here alpha is the transparency of the color.
s1.circle(x, y0, size=10, color="navy", alpha=0.1)
# Create the second canvas; its ranges are taken from the first canvas.
s2 = figure(width=250, height=250, x_range=s1.x_range, y_range=s1.y_range, title=None)
s2.triangle(x, y1, size=10, color="firebrick", alpha=0.5)
# Draw the plot on the third canvas.
s3 = figure(width=250, height=250, x_range=s1.x_range, title=None)
s3.square(x, y2, size=10, color="olive", alpha=0.5)
# Group the three plots with gridplot.
# >> gridplot() is the object that groups figures together.
# >> It takes the list of plots wrapped in one more pair of brackets.
# Passing toolbar_location=None hides the tool section.
p = gridplot([[s1, s2, s3]], toolbar_location=None)
# show the results
show(p)
```
#### Let's use brushing across several plots
> Linked brushing across plots means that selecting a region in one plot also selects the corresponding region in the other plots.
> For this to work, the plots must share a common element; for example, the x-axis data of the two plots must be the same. In that case, selecting a region in one plot highlights the region of the other plot whose x values fall inside the selection.
```
import numpy as np
from bokeh.plotting import *
from bokeh.models import ColumnDataSource
# Prepare the data set.
N = 300
x = np.linspace(0, 4*np.pi, N)
y0 = np.sin(x)
y1 = np.cos(x)
# # Set the output format to an HTML file.
# output_file("linked_brushing.html")
# Put all the data that will go into the plots in a dictionary and pass it to
# a ColumnDataSource object. This ColumnDataSource is later passed as the
# source option of each plot, which is how Bokeh knows which element of one
# plot corresponds to which element of another plot.
source = ColumnDataSource(data=dict(x=x, y0=y0, y1=y1))
TOOLS = "pan,wheel_zoom,box_zoom,reset,save,box_select,lasso_select"
# Create a figure and draw its renderer.
left = figure(tools=TOOLS, width=350, height=350, title=None)
left.circle('x', 'y0', source=source)
# Create the second figure and draw its renderer.
right = figure(tools=TOOLS, width=350, height=350, title=None)
right.circle('x', 'y1', source=source)
# Group the two plots (two canvases) into one layout with gridplot().
p = gridplot([[left, right]])
# show the results
show(p)
# The stocks sample data is needed to run the code below.
import bokeh.sampledata
bokeh.sampledata.download("stocks")
import numpy as np
from bokeh.plotting import figure, output_file, show
from bokeh.sampledata.stocks import AAPL
# The AAPL sample data is a dict with the keys
# ['date', 'open', 'high', 'low', 'close', 'volume', 'adj_close'].
# print(type(AAPL))
# print(AAPL.keys())
# Pull only the needed fields out of the sample data.
aapl = np.array(AAPL['adj_close'])
aapl_dates = np.array(AAPL['date'], dtype=np.datetime64)
window_size = 30
# np.ones(K): creates an array of K ones.
# np.ones(K)/float(window_size): '*' and '/' on a numpy array apply element-wise.
window = np.ones(window_size)/float(window_size)
# np.convolve computes the convolution; it takes (array1, array2, mode).
aapl_avg = np.convolve(aapl, window, 'same')
# # Set how the result is saved.
# output_file("stocks.html", title="stocks.py example")
# Create a new canvas and set the type of the x-axis values with x_axis_type.
p = figure(plot_width=800, plot_height=350, x_axis_type="datetime")
# Draw the plots on the canvas.
p.circle(aapl_dates, aapl, size=4, color='darkgrey', alpha=0.2, legend='close')
p.line(aapl_dates, aapl_avg, color='navy', legend='avg')
p.title.text = "AAPL One-Month Average"
# <figure>.legend.location positions the legend; it takes a string, not numbers.
p.legend.location = "top_left"
# <figure>.grid.grid_line_alpha is the transparency of the grid, a value between 0 and 1.
p.grid.grid_line_alpha = 1
p.xaxis.axis_label = 'Date'
p.yaxis.axis_label = 'Price'
# <figure>.ygrid.band_fill_color colors the horizontal grid bands.
p.ygrid.band_fill_color = "olive"
p.ygrid.band_fill_alpha = 0.1
# show the results
show(p)
```
|
github_jupyter
|
import bokeh.io
# bokeh ๊ฒฐ๊ณผ๊ฐ์ ํํ ๋ฐฉ๋ฒ์ resetํด์ค๋ค.
bokeh.io.reset_output()
# bokeh ๊ฒฐ๊ณผ๊ฐ์ ํํ ๋ฐฉ๋ฒ์
bokeh.io.output_notebook()
# ๋ณด์บก์์ ๊ฐ๋จํ๊ฒ ๊ทธ๋ํ๋ฅผ ๊ทธ๋ฆฌ๋ ์ธํฐํ์ด์ค์ธ bokeh.plotting์์ ์ํ๋ ๋ชจ๋์ ๊ฐ์ ธ์ต์๋ค.
# figure : matplotlib์์์ ๋ง์ฐฌ๊ฐ์ง๋ก ๋ํ์ง ์ญํ ์ ํฉ๋๋ค.
# output_file : ์ต์ข
๊ฒฐ๊ณผ๋ฌผ์ ์ด๋ค ์ด๋ฆ, ์ด๋ค ํ์์ผ๋ก ์ ์ฅํ ๊ฒ์ธ์ง ์ค์ ํฉ๋๋ค.
# show : ๊ทธ๋ฆผ์ ๋ณด์ฌ์ฃผ๋ ์ญํ ์ ํฉ๋๋ค.
from bokeh.plotting import figure, output_file, show
# ๋ฐ์ดํฐ๋ฅผ ์ค๋นํฉ๋๋ค.
x = [1, 2, 3, 4, 5]
y = [6, 7, 2, 4, 5]
# # ์๊ฐํ ๊ฒฐ๊ณผ๋ฌผ์ lines๋ผ๋ ์ด๋ฆ์ html ๋ฌธ์๋ก ์ ์ฅํจ์ ์ ์ธํฉ๋๋ค.
# output_file("lines.html")
# figure๋ฅผ ํตํด์๋ ๋ํ์ง์ ์ต์
์ ์ค์ ํ ์ ์์ต๋๋ค.
# title๋ ๋ํ์ง์ ์ ๋ชฉ, x_axis_label๋ x์ถ์ ์ด๋ค ๊ฐ๋ค์ด ์ค๋์ง, y_axis_label์ y์ถ์ ์ด๋ค ๊ฐ๋ค์ด ์ค๋์ง
p = figure(title="simple line example", x_axis_label='x', y_axis_label='y')
# ์ ํ ๊ทธ๋ํ(line)์ ๊ทธ๋ ค์ค๋๋ค.figure์ ๊ฐ์ฒด๋กํ๊ณ line์ ๋ฉ์๋๋ก ํด์ ์์ฑํฉ๋๋ค.
p.line(x, y, legend="Temp.", line_width=2)
# show๋ก ํด๋น ๊ทธ๋ํ๋ฅผ ๋ด
๋๋ค.
show(p)
from bokeh.plotting import figure, output_file, show
# ๊ทธ๋ํ๋ก ํํํ ๋ฐ์ดํฐ๋ฅผ ์ค๋นํ๋ค.
x = [0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
y0 = [i**2 for i in x]
y1 = [10**i for i in x]
y2 = [10**(i**2) for i in x]
# # ๊ฒฐ๊ณผ๊ฐ์ ์ ์ฅํ ํํ๋ฅผ ์ค์ ํ๋ค.(html ๋ฌธ์๋ก ์ ์ฅ)
# output_file("log_lines.html")
# figure๋ฅผ ๋ง๋ค๊ณ ์ค์ ํด์ค๋ค.
# tools = ํด๋น ๊ทธ๋ํ์ ๊ธฐ๋ฅ์ ์ค์ ํด์ค๋ค.(์๋ ์ธ๋ถ ์ค๋ช
)
## >>> pan : ๊ทธ๋ํ๋ฅผ ์์ง์ผ ์ ์๋ค, box_zoom : ๊ทธ๋ํ๋ฅผ ์ค์ธ ํ ์ ์๋ค
## >>> reset : ๊ทธ๋ํ๋ฅผ ์์ํ๋ก ๋๋๋ฆด ์ ์๋ค. save : ๊ทธ๋ํ๋ฅผ png ํ์ผ๋ก ์ ์ฅํ ์ ์๋ค.
# y_axis_type = y์ถ์ ๊ฐ์ผ๋ก ๋ฐ๋ ๋ฐ์ดํฐ๋ฅผ ์ด๋ค ํํ๋ก ๊ฐ๊ณตํด์ ํํํ ์ง ์ ํ๋ค.
## >>> y_axis_type = "log"์ ๊ฒฝ์ฐ y์ถ ์ธ์๋ก ๋ฐ๋ ๊ฐ๋ค์ ๋ํด log๋ฅผ ์ทจํด์ค๋ค.
# y_range = y์ถ์ ๊ฐ์ ๋ฒ์๋ฅผ ์ค์ ํด์ค๋ค.
## x_axis_label(y_axis_label) = x์ถ, y์ถ์ ์ด๋ฆ์ ๋ถ์ฌ์ค๋ค.
p = figure(
tools="pan,box_zoom,reset,save",
y_axis_type="log", y_range=[0.001, 10**11], title="log axis example",
x_axis_label='sections', y_axis_label='particles'
)
# ๊ธฐ๋ณธ์์ผ๋ก ์ ์ ๋ง๋ค๊ณ ํด๋น ์ ์ ๋ฒ๋ก์ y=x๋ผ๋ ์ด๋ฆ์ผ๋ก ๋ค์ด๊ฐ๋ค.
p.line(x, x, legend="y=x")
# x,x ์ขํ์ ๊ธฐ๋ณธ์์ผ๋ก ์์ ๊ทธ๋ฆฌ๊ณ ๊ทธ ์์ white๋ก ์ฑ์ด๋ค.
## circle๋ฉ์๋์ fill_color๋ก ์ ์์ ์์ ์ฑ์ธ ์ ์๋ค.
p.circle(x, x, legend="y=x", fill_color="white", size=8)
# line ๋ฉ์๋์ line_width ์ต์
์ผ๋ก ํด๋น ๊ทธ๋ํ์ ๊ตต๊ธฐ๋ฅผ ์ ํ ์ ์๋ค.
p.line(x, y0, legend="y=x^2", line_width=3)
# line ๋ฉ์๋์ line_color ์ต์
์ผ๋ก ํด๋น ๊ทธ๋ํ์ ์์์ ์ ํ ์ ์๋ค.
p.line(x, y1, legend="y=10^x", line_color="red")
# circle ๋ฉ์๋์ line_color ์ต์
์ผ๋ก ์ ํ
๋๋ฆฌ ์ ์ ์์์ ์ ํ ์ ์๋ค.
p.circle(x, y1, legend="y=10^x", fill_color="red", line_color="red", size=6)
# line ๋ฉ์๋์ line_dash ์ต์
์ ํด๋น ๊ฐ(ํฝ์
)์ ๊ฐ๊ฒฉ์ผ๋ก ๊ทธ๋ํ๋ฅผ ํํํ๋ค.
p.line(x, y2, legend="y=10^x^2", line_color="orange", line_dash="4 4")
# ๊ฒฐ๊ณผ๊ฐ์ ์ถ๋ ฅํ๋ค.
show(p)
import numpy as np
np.random.random(2)
k = np.random.random(size=10) * 100
import numpy as np
from bokeh.plotting import figure, output_file, show
# ๋ฐ์ดํฐ๋ฅผ ๊ตฌ์ฑํ๋ค.
N = 4000
# np.random.random(size = ์ซ์)
## >> np.random.random(size = ์ซ์)๋ ํด๋น ์ซ์์ ํด๋นํ๋ ์ฌ์ด์ฆ์ array๋ฅผ ๋ฐํํ๋ค.
## >> (np.random.random(N)์ 0์ด์ 1์ดํ์ ์ซ์ N๊ฐ๋ฅผ ๋๋ค์ผ๋ก ๋ฐํํ๋ค.)
## >> numpy์ array์ ๋จ์๊ณฑ์ ํ๋ฉด ๊ฐ ์์๋ณ๋ก ๊ณฑ์
์ ํ๊ฒ ๋๋ค.(๋ด์ ๊ณฑ์ด ์๋๋ค)
x = np.random.random(size=N) * 100
y = np.random.random(size=N) * 100
radii = np.random.random(size=N) * 1.5
# color์ ๋ฆฌ์คํธ๋ฅผ ๋ง๋ค์ด์ค๋ค.
## >> color ์ฝ๋๋ 16์ง์(hex)์ฌ์ผ ํ๋ค. ๋ฐ๋ผ์ 16์ง์ ํํ๋ก ์ปฌ๋ฌ์ฝ๋๋ฅผ ๋ง๋ค์ด์ค๋ค.
## >> %02x๋ 16์ง์ ํํํํ๋ฅผ ์๋ฏธํ๋ค.
## >> 150์ 16์ง์๋ก ๋ฐ๊ฟ์ฃผ๋ฉด 96์ด๋๋ค.
colors = [
"#%02x%02x%02x" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)
]
# # html ํ์ผ๋ก ๊ฒฐ๊ณผ๊ฐ์ ๋ฐ๊ณ cdn ๋ชจ๋๋ฅผ ์ฌ์ฉํ์ฌ ํด๋น ํ์ผ์ ์ ์ฅํ๋ค.
# output_file("color_scatter.html", title="color_scatter.py example", mode="cdn")
# ์ฌ์ฉํ tool๋ค์ ์ ์ํด์ค๋ค.(์ ์๋ ์์๋ก ๊ธฐ๋ฅ์ด ๋์ด๋๋ ๊ฒ์ ์๋๋ค. ๋์ด์์๋ ๋ฐ๋ก ์ ํด์ ธ ์์)
# crosshair์ ๊ฒฝ์ฐ ์ญ์์ ์ ์ปค์๋ฅผ ์ ๊ณตํ๋ ๊ธฐ๋ฅ์ด๋ค.
# pan์ ๊ฒฝ์ฐ ๊ทธ๋ํ๋ฅผ ์ด๋ํ๋ ๊ธฐ๋ฅ์ด๋ค.
# wheel_zoom์ ๊ฒฝ์ฐ ์คํฌ๋กค์ ์ด์ฉํด์ ํ๋๋ฅผ ํ๋ ๊ธฐ๋ฅ์ด๋ค.
# box_zoom์ ๊ฒฝ์ฐ ๋ฐ์ค๋ฅผ ์ ํํด ํ๋๋ฅผ ํ๋ ๊ธฐ๋ฅ์ด๋ค.
# box_select์ ๊ฒฝ์ฐ ๋ฐ์ค๋ฅผ ์ ํํ๋ ๊ธฐ๋ฅ์ด๋ค(์ ํ์ด์ธ์ ์์ญ๋ง ์ ํ์)
# lasso_select์ ๊ฒฝ์ฐ ์ํ๋ ํํ์ ๋ํ์ผ๋ก ์์ญ์ ์ ํํ๋ ๊ธฐ๋ฅ์ด๋ค.
TOOLS = "crosshair,pan,wheel_zoom,box_zoom,reset,box_select,lasso_select"
# ๋ํ์ง(figure)๋ฅผ ์
ํ
ํ๋ค. ์์์ ์ ์ํ tool์ ๋ถ๋ฌ์ค๊ณ x_range์ y_range ์ต์
์ผ๋ก x,y์ถ์ ๋ฒ์๋ฅผ ์ค์ ํด์ค๋ค.
p = figure(tools=TOOLS, x_range=(0, 100), y_range=(0, 100))
# add a circle renderer with vectorized colors and sizes
# x์ y๋ฆฌ์คํธ๋ฅผ ์ด์ฉํด์ ์์ ๊ทธ๋ ค์ค๋ค. radius(๋ฐ์ง๋ฆ)๋ ๊ฐ์ ์ค๋ค.
# fill_colord๊ฐ์ผ๋ก ์์ ๋ง๋ color๋ค์ ๋ฆฌ์คํธ๋ฅผ ์
๋ ฅํด์ค๋ค.
# fill_alpha๋ ์์ ์ ๋ช
๋์ ๊ด๋ จ๋ ์ต์
์ผ๋ก 1์ ๊ฒฝ์ฐ ํด๋น ์์ ์จ์ ํ ํํํ๋ ๊ฒ์ด๊ณ 1 ์ดํ๋ ํฌ๋ช
๋๋ฅผ ์ฃผ๋ ๊ฒ์ด๋ค.
# line_color๋ ์์ ํ
๋๋ฆฌ ์ต์
์ผ๋ก None์ ์ฃผ๋ฉด ์์ด ํ
๋๋ฆฌ ์์ด ํํ๋๋ค.
p.circle(x, y, radius=radii, fill_color=colors, fill_alpha=0.6, line_color=None)
# show the results
show(p)
import numpy as np
from bokeh.layouts import gridplot
from bokeh.plotting import figure, output_file, show
# ํํํ ๋ฐ์ดํฐ๋ฅผ ์ค๋นํ์
N = 100
## np.linspace(์ถ๋ฐ, ๋, ๋๋ ๋ฉ์ด๋ฆฌ ์) : ์ถ๋ฐ์ง์ ๋ถํฐ ๋ ์ง์ ๊น์ง ๊ฐ์ ๊ฐ๊ฒฉ์ผ๋ก ๋๋๋ฉ์ด๋ฆฌ์ ๋งํผ ์ชผ๊ฐ ๋ฆฌ์คํธ๋ฅผ ์๋ฏธํ๋ค.
x = np.linspace(0, 4*np.pi, N)
y0 = np.sin(x)
y1 = np.cos(x)
y2 = np.sin(x) + np.cos(x)
# # ๊ฒฐ๊ณผ๋ฌผ ์ถ๋ ฅํํ๋ฅผ ์ค์ ํ๋ค.
# output_file("linked_panning.html")
# ๋ํ์ง๋ฅผ ์๋ก ์ธํ
ํด์ค๋ค. >> ๊ทธ๋ํ๊ฐ 3๊ฐ๋ผ๋ฉด ๋ํ์ง ์ญ์ 3๊ฐ๋ฅผ ์ธํ
ํด์ผํ๋ค
s1 = figure(width=250, plot_height=250, title=None)
# circle r๊ทธ๋ํ๋ฅผ ๊ทธ๋ ค์ค๋ค. ์ฌ๊ธฐ์ alpha๋ ์์ ํฌ๋ช
๋๋ฅผ ์๋ฏธํ๋ค.
s1.circle(x, y0, size=10, color="navy", alpha= 0.1)
# ๋๋ฒ์งธ ๋ํ์ง๋ฅผ ๋ง๋ค๊ณ range๋ ์ฒซ๋ฒ์งธ ๋ํ์ง์ ๊ฐ์ ๋ฐ๋๋ค.
s2 = figure(width=250, height=250, x_range=s1.x_range, y_range=s1.y_range, title=None)
s2.triangle(x, y1, size=10, color="firebrick", alpha=0.5)
# ์ธ๋ฒ์งธ ๋ํ์ง์ ๊ทธ๋ํ๋ฅผ ๊ทธ๋ ค์ค๋ค.
s3 = figure(width=250, height=250, x_range=s1.x_range, title=None)
s3.square(x, y2, size=10, color="olive", alpha=0.5)
# 3๊ฐ์ ๊ทธ๋ํ๋ฅผ gridplot์ ์ด์ฉํด์ ๋ฌถ์ด์ค๋ค.
# >> gridplot()๋ figure๋ค์ ํ๋๋ก ๋ฌถ์ด์ฃผ๋ ๊ฐ์ฒด์ด๋ค.
# >> ์ด๋ ์ธ์๋ ๊ทธ๋ํ๋ค์ ๋ฆฌ์คํธ๋ฅผ ๋ ํ๋ฒ ์ค๊ดํธํด์ ๋ฐ๋๋ค.
# ์ต์
๊ฐ์ผ๋ก toolbar_location์ ๊ฐ์ None์ผ๋ก ๋ฐ์์ tool ์น์
์ ์จ๊ฒจ์ค๋ค.
p = gridplot([[s1, s2, s3]], toolbar_location=None)
# show the results
show(p)
import numpy as np
from bokeh.plotting import *
from bokeh.models import ColumnDataSource
# ๋ฐ์ดํฐ ์
์ ์ค๋นํ๋ค.
N = 300
x = np.linspace(0, 4*np.pi, N)
y0 = np.sin(x)
y1 = np.cos(x)
# # ์ ์ฅํํ๋ฅผ html ํ์์ผ๋ก ์ค์ ํ๋ค.
# output_file("linked_brushing.html")
# plot์ ๋ค์ด๊ฐ ์ด ๋ฐ์ดํฐ๋ค์ dictionary๋ก ๋ง๋ค์ด์ ColumnDataSource ๊ฐ์ฒด ํ์ฑ์ ์ํด ์ธ์๋ก ๋ฃ์ด์ค๋ค.
# ์ด ColumnDataSource ๊ฐ์ฒด๋ ์ดํ ๊ทธ๋ํ์ source ์ต์
์ ๊ฐ์ผ๋ก ๋ค์ด๊ฐ์
# ์ด ๊ทธ๋ํ์ ์ด๋ค ์์๊ฐ ๋ค๋ฅธ ๊ทธ๋ํ์ ์ด๋ค ์์์ ๊ฐ์์ง ํ์
ํ๋๋ฐ ์ฌ์ฉ๋๋ค.
source = ColumnDataSource(data=dict(x=x, y0=y0, y1=y1))
TOOLS = "pan,wheel_zoom,box_zoom,reset,save,box_select,lasso_select"
# ๋ํ์ง(figure)๋ฅผ ๋ง๋ค์ด์ฃผ๊ณ ๊ทธ๋ฆผ(renderer)์ ๊ทธ๋ ค์ค๋ค
left = figure(tools=TOOLS, width=350, height=350, title=None)
left.circle('x', 'y0', source=source)
# ๋ํ์ง๋ฅผ ์์ฑํ๊ณ ๊ทธ๋ฆผ์ ๊ทธ๋ ค์ค๋ค.
right = figure(tools=TOOLS, width=350, height=350, title=None)
right.circle('x', 'y1', source=source)
# ๋ ๊ทธ๋ํ(๋๊ฐ์ ๋ํ์ง)๋ฅผ ํ๋๋ก ๋ฌถ์ด์ค๋ค.(gridplot()๋ฅผ ์ด์ฉ)
p = gridplot([[left, right]])
# show the results
show(p)
# ์๋ ์ฝ๋๋ฅผ ์คํ์ํค๋ ค๋ฉด stocks ๋ฐ์ดํฐ๊ฐ ํ์ํ๋ค
import bokeh.sampledata
bokeh.sampledata.download("stocks")
import numpy as np
from bokeh.plotting import figure, output_file, show
from bokeh.sampledata.stocks import AAPL
# Sampledata์ ์๋ AAPL ๋ฐ์ดํฐ๋ dict ํํ์ ๋ฐ์ดํฐ๋ก
# ['date', 'open', 'high', 'low', 'close', 'volume', 'adj_close']๋ฅผ ํค๋ก ๊ฐ์ง๊ณ ์๋ค.
# print(type(AAPL))
# print(AAPL.keys())
# ์ํ๋ฐ์ดํฐ์์ ํ์ํ ๋ฐ์ดํฐ๋ง ๋ฝ์์ ๋ณ์๋ก ์ง์ ํด์ค๋ค.
aapl = np.array(AAPL['adj_close'])
aapl_dates = np.array(AAPL['date'], dtype=np.datetime64)
window_size = 30
# np.ones(K): K๊ฐ์ 1๋ก ์ด๋ฃจ์ด์ง array๋ฅผ ์์ฑํ๋ค.
# np.one(K)/float(window_size): numpy์ array์ ๊ทธ๋ฅ '*' ํน์ '/'๋ฅผ ์ฌ์ฉํ๋ฉด ๊ฐ ์์์ ๊ณฑํ๊ธฐ ํน์ ๋๋๊ธฐ๊ฐ ์ ์ฉ๋๋ค.
window = np.ones(window_size)/float(window_size)
# np.convolve๋ ํฉ์ฑ๊ณฑ์ ํด์ฃผ๋ ๊ฒ์ด๋ค. (array1, array2, ํฉ์ฑ๊ณฑ๋ชจ๋)๋ฅผ ์ธ์๋ก ๋ฐ๋๋ค.
aapl_avg = np.convolve(aapl, window, 'same')
# # ์ ์ฅํํ๋ฅผ ์ค์ ํ๋ค.
# output_file("stocks.html", title="stocks.py example")
# ์๋ก์ด ๋ํ์ง๋ฅผ ๋ง๋ค์ด์ค๋ค. x์ถ์ ๊ฐ์ ์ ํ(type)์ ์ค์ ํด์ค๋ค(x_axis_type๋ฅผ ์ด์ฉํ๋ค)
p = figure(plot_width=800, plot_height=350, x_axis_type="datetime")
# Draw the glyphs on the figure.
p.circle(aapl_dates, aapl, size=4, color='darkgrey', alpha=0.2, legend='close')
p.line(aapl_dates, aapl_avg, color='navy', legend='avg')
p.title.text = "AAPL One-Month Average"
# <figure>.legend.location positions the legend; it takes a named string rather than coordinates.
p.legend.location = "top_left"
# <figure>.grid.grid_line_alpha sets the transparency of the grid lines; it accepts a value between 0 and 1.
p.grid.grid_line_alpha = 1
p.xaxis.axis_label = 'Date'
p.yaxis.axis_label = 'Price'
# <figure>.ygrid.band_fill_color fills alternating horizontal grid bands with the given colour.
p.ygrid.band_fill_color = "olive"
p.ygrid.band_fill_alpha = 0.1
# show the results
show(p)
| 0.320715 | 0.94079 |
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.metrics import silhouette_score
from sklearn.cluster import KMeans
np.random.seed(2)
df = pd.read_csv("fastext_model_100.csv").drop('Unnamed: 0', axis=1)
df.head()
for n_comp in [10,20,30,40,50]:
model_pca = PCA(n_components=n_comp)
data_pca = model_pca.fit_transform(df)
print(f"# componentes {n_comp}\t{sum(model_pca.explained_variance_ratio_)}")
model_pca = PCA(n_components=20)
data_pca = model_pca.fit_transform(df)
plt.figure(figsize=(10,6))
plt.plot(model_pca.explained_variance_ratio_)
plt.xlabel('Principal component index')
plt.ylabel('Explained variance ratio')
plt.show()
model_pca = PCA(n_components=5)
data_pca = model_pca.fit_transform(df)
sum(model_pca.explained_variance_ratio_)
data_tsne_list = []
for perp in range(5, 55, 5):
model_tsne = TSNE(random_state=0, verbose=0, perplexity=perp)
data_tsne = model_tsne.fit_transform(data_pca)
data_tsne_list.append({"perp": perp, "tsne": data_tsne})
len(data_tsne_list)
fig, axs = plt.subplots(5, 2, figsize=(15,20))
idx = 0
for i in range(0,5):
for j in range(0,2):
data = data_tsne_list[idx]; idx+=1
axs[i, j].scatter(data["tsne"][:,0], data["tsne"][:,1])
axs[i, j].set_title(f'Perplexity = {data["perp"]}')
```
# PCA Clustering
```
X = pd.DataFrame(data_pca)
X.info()
X.describe().transpose()
range_values = range(1,10)
sum_squares = []
silhouette_coefs = []
for i in range_values:
kmeans = KMeans(i)
kmeans.fit(X)
sum_squares.append(kmeans.inertia_)
labels = kmeans.labels_
if i > 1: silhouette_coefs.append(silhouette_score(X, labels, metric='euclidean'))
else: silhouette_coefs.append(0)
plt.figure(figsize=(10,6))
plt.plot(range_values, sum_squares)
plt.title('Elbow method',{'fontsize':18})
plt.xlabel('# of clusters')
plt.ylabel('Square sum of cluster')
plt.figure(figsize=(10,6))
plt.plot(range_values, silhouette_coefs)
plt.title('Silhouette method',{'fontsize':18})
plt.xlabel('# of clusters')
plt.ylabel('Silhouette coef of cluster')
```
# T-SNE Clustering
```
model_tsne = TSNE(random_state=0, verbose=0, perplexity=10)
data_tsne = model_tsne.fit_transform(data_pca)
X_tsne = pd.DataFrame(data_tsne)
X_tsne.info()
X_tsne.describe().transpose()
range_values = range(1,10)
sum_squares = []
silhouette_coefs = []
for i in range_values:
kmeans = KMeans(i)
kmeans.fit(X_tsne)
sum_squares.append(kmeans.inertia_)
labels = kmeans.labels_
if i > 1: silhouette_coefs.append(silhouette_score(X_tsne, labels, metric='euclidean'))
else: silhouette_coefs.append(0)
plt.figure(figsize=(10,6))
plt.plot(range_values, sum_squares)
plt.title('Elbow method',{'fontsize':18})
plt.xlabel('# of clusters')
plt.ylabel('Square sum of cluster')
plt.figure(figsize=(10,6))
plt.plot(range_values, silhouette_coefs)
plt.title('Silhouette method',{'fontsize':18})
plt.xlabel('# of clusters')
plt.ylabel('Silhouette coef of cluster')
```
|
github_jupyter
|
| 0.583915 | 0.802826 |
Notebook written by [Zhedong Zheng](https://github.com/zhedongzheng)

```
import tensorflow as tf
import numpy as np
VOCAB_SIZE = 5000
MAX_LEN = 400
BATCH_SIZE = 32
EMBED_DIM = 50
FILTERS = 250
N_CLASS = 2
N_EPOCH = 2
LR = {'start': 5e-3, 'end': 5e-4, 'steps': 1500}
def forward(x, mode):
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
x = tf.contrib.layers.embed_sequence(x, VOCAB_SIZE, EMBED_DIM)
x = tf.layers.dropout(x, 0.2, training=is_training)
feat_map = []
for k_size in [3, 4, 5]:
_x = tf.layers.conv1d(x, FILTERS, k_size, activation=tf.nn.relu)
_x = tf.layers.max_pooling1d(_x, _x.get_shape().as_list()[1], 1)
_x = tf.reshape(_x, (tf.shape(x)[0], FILTERS))
feat_map.append(_x)
x = tf.concat(feat_map, -1)
x = tf.layers.dropout(x, 0.2, training=is_training)
x = tf.layers.dense(x, FILTERS, tf.nn.relu)
logits = tf.layers.dense(x, N_CLASS)
return logits
def model_fn(features, labels, mode):
logits = forward(features, mode)
if mode == tf.estimator.ModeKeys.PREDICT:
preds = tf.argmax(logits, -1)
return tf.estimator.EstimatorSpec(mode, predictions=preds)
if mode == tf.estimator.ModeKeys.TRAIN:
global_step = tf.train.get_global_step()
lr_op = tf.train.exponential_decay(
LR['start'], global_step, LR['steps'], LR['end']/LR['start'])
loss_op = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logits, labels=labels))
train_op = tf.train.AdamOptimizer(lr_op).minimize(
loss_op, global_step=global_step)
lth = tf.train.LoggingTensorHook({'lr': lr_op}, every_n_iter=100)
return tf.estimator.EstimatorSpec(
mode=mode, loss=loss_op, train_op=train_op, training_hooks=[lth])
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=VOCAB_SIZE)
X_train = tf.keras.preprocessing.sequence.pad_sequences(X_train, MAX_LEN)
X_test = tf.keras.preprocessing.sequence.pad_sequences(X_test, MAX_LEN)
estimator = tf.estimator.Estimator(model_fn)
for _ in range(N_EPOCH):
estimator.train(tf.estimator.inputs.numpy_input_fn(
x = X_train, y = y_train,
batch_size = BATCH_SIZE,
shuffle = True))
y_pred = np.fromiter(estimator.predict(tf.estimator.inputs.numpy_input_fn(
x = X_test,
batch_size = BATCH_SIZE,
shuffle = False)), np.int32)
print("\nValidation Accuracy: %.4f\n" % (y_pred==y_test).mean())
```
|
github_jupyter
|
| 0.783368 | 0.862815 |
<a href="https://colab.research.google.com/github/BenjaminMidtvedt/DeepTrack-2.0/blob/master/1_MNIST.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
%matplotlib inline
import sys
sys.path.insert(0, "../..")
# %pip install deeptrack==0.6.1
```
# Example 1. MNIST
Trains a fully connected neural network to identify handwritten digits using the MNIST dataset.
## 1. Setup
Imports and defines the objects needed for this example.
```
import os
import numpy as np
import matplotlib.pyplot as plt
import itertools
import deeptrack as dt
from deeptrack.extras import datasets
#Download dataset from the cloud
datasets.load("MNIST")
PATH_TO_DATASET = os.path.abspath("./datasets/MNIST")
TRAINING_SET_PATH = os.path.join(PATH_TO_DATASET, "training_set.npy")
TRAINING_LABELS_PATH = os.path.join(PATH_TO_DATASET, "training_labels.npy")
VALIDATION_SET_PATH = os.path.join(PATH_TO_DATASET, "validation_set.npy")
VALIDATION_LABELS_PATH = os.path.join(PATH_TO_DATASET, "validation_labels.npy")
```
## 2. Defining the dataset
### 2.1 Loading the data
The dataset is how we provide the network with training data. For this example we create the dataset by loading it from storage using `LoadImage`.
```
# Load the images from storage
get_training_images = dt.LoadImage(path=TRAINING_SET_PATH)
get_training_labels = dt.LoadImage(path=TRAINING_LABELS_PATH)
get_validation_images = dt.LoadImage(path=VALIDATION_SET_PATH)
get_validation_labels = dt.LoadImage(path=VALIDATION_LABELS_PATH)
```
Note that we don't load the images yet, we have just created the objects that will do so. First we normalize the data.
```
normalization = dt.NormalizeMinMax(0, 1)
get_training_images >>= normalization
get_validation_images >>= normalization
```
Since all training data is contained in a single file, we explicitly load the images
```
training_images = get_training_images.resolve()
training_labels = get_training_labels.resolve()
validation_images = get_validation_images.resolve()
validation_labels = get_validation_labels.resolve()
```
We want to continuously generate new data for the network to train on. For this, we use the Dataset feature.
```
training_data_iterator = itertools.cycle(training_images)
training_label_iterator = itertools.cycle(training_labels)
training_iterator = dt.Dataset(
data=training_data_iterator,
label=training_label_iterator
)
```
### 2.2 Augmenting the training set
In order to expand the dataset we augment it.
Affine augmentations consist of translating, rescaling, rotating and shearing
```
# How much to scale in x and y
scale = {
"x": lambda: 0.8 + np.random.rand() * 0.4,
"y": lambda: 0.8 + np.random.rand() * 0.4
}
# How much to translate in x and y
translate_px = {
"x": lambda: int(np.random.randint(-2, 3)),
"y": lambda: int(np.random.randint(-2, 3))
}
# Dummy property: whether to rotate or shear
should_rotate= lambda: np.random.randint(2)
# If should rotate, how much
rotate = lambda should_rotate: (-0.35 + np.random.rand() * 0.7) * should_rotate
# If not should rotate, how much shear
shear = lambda should_rotate: (-0.35 + np.random.rand() * 0.7) * (1 - should_rotate)
affine_transform = dt.Affine(
scale=scale,
translate_px=translate_px,
should_rotate=should_rotate,
# shear=shear,
order=2,
mode="constant"
)
```
We also distort the images elastically.
```
elastic_transform = dt.ElasticTransformation(
alpha=lambda: np.random.rand() * 60, # Amplitude of distortions
sigma=lambda: 5 + np.random.rand() * 2, # Granularity of distortions
ignore_last_dim=True, # Last dimension is not a channel, so it should be augmented
mode="constant"
)
```
Finally, since these distortions may cause pixels to fall outside the range of (0, 1), we clip the values.
```
clip = dt.Clip(0, 1)
```
We add the augmentations to the pipeline
```
augmentation = elastic_transform >> affine_transform >> clip
augmented_training_set = training_iterator >> augmentation
```
### 2.3 Defining the target
The training iterator resolves images. We can extract the label that we provided to the Dataset feature by just calling `get_property`.
```
def get_label(image):
return np.array(image.get_property("label")).squeeze()
```
### 2.4 Visualizing the dataset
To ensure the data and the labels match up, we plot 8 images and print their corresponding labels. To convert the objects we created to a numpy array, we call the method `resolve()`. Since we flattened the images, we need to reshape them again to visualize them.
```
NUMBER_OF_IMAGES = 8
for image_index in range(NUMBER_OF_IMAGES):
augmented_training_set.update()
original_image = training_iterator()
plt.figure(figsize=(15, 3))
plt.subplot(1, 6, 1)
plt.imshow(original_image)
plt.axis('off')
plt.title("Original image")
for sub_plt in range(3, 7):
# Only update the augmentation
augmentation.update()
augmented_image = augmented_training_set()
plt.subplot(1, 6, sub_plt)
plt.imshow(augmented_image)
plt.axis('off')
plt.title("Augmented image")
plt.show()
```
## 3. Defining the network
The network used is a fully connected neural network. Here we define the network architecture, loss function and the optimizer.
```
model = dt.models.FullyConnected(
input_shape=(28, 28, 1),
dense_layers_dimensions=(500, 500, 500, 500),
number_of_outputs=10,
dropout=(0.25, 0.25),
output_activation="softmax",
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
optimizer="rmsprop"
)
```
## 4. Training the network
The network is trained for 200 epochs using standard Keras syntax.
```
TRAIN_MODEL = True
if TRAIN_MODEL:
generator = dt.generators.ContinuousGenerator(
augmented_training_set & (augmented_training_set >> get_label),
batch_size=32,
min_data_size=1000,
max_data_size=1001,
)
with generator:
h = model.fit(
generator,
validation_data=(np.array(validation_images)[:500],
np.array(validation_labels)[:500].squeeze() ),
epochs=200
)
plt.plot(h.history["loss"], 'g')
plt.plot(h.history["val_loss"], 'r')
plt.legend(["Loss", "Validation loss"])
plt.yscale("log")
plt.show()
else:
model_path = datasets.load_model("MNIST")
model.load_weights(model_path)
```
## 5. Evaluating the training
```
array_of_images = validation_images
array_of_labels = validation_labels
predicted_digits = np.argmax(model.predict(array_of_images), axis=1)
accuracy = np.mean(np.array(array_of_labels) == predicted_digits)
print("Accuracy:", accuracy)
print("Error rate:", 1 - accuracy)
```
### 5.1 Prediction vs actual
We show a few images, the true digit and the predicted digit
```
NUMBER_OF_IMAGES = 8
TITLE_STRING = "The image shows the digit {0} \n The model predicted the digit {1}"
for image_index in range(NUMBER_OF_IMAGES):
image_to_show = np.reshape(array_of_images[image_index], (28, 28))
plt.imshow(image_to_show)
plt.title(TITLE_STRING.format(array_of_labels[image_index]._value, predicted_digits[image_index]))
plt.show()
```
### 5.2 Visualizing errors
We show a few images which the model predicted incorrectly.
```
NUMBER_OF_IMAGES = 16
model_is_wrong = predicted_digits != array_of_labels.squeeze()
array_of_hard_images = array_of_images._value[model_is_wrong]
array_of_hard_labels = array_of_labels._value[model_is_wrong]
inaccurately_predicted_digits = predicted_digits[model_is_wrong]
for image_index in range(NUMBER_OF_IMAGES):
image_to_show = np.reshape(array_of_hard_images[image_index], (28, 28))
plt.imshow(image_to_show)
plt.title(TITLE_STRING.format(array_of_hard_labels[image_index], inaccurately_predicted_digits[image_index]))
plt.show()
```
|
github_jupyter
|
| 0.544559 | 0.982707 |
# yvBoost & yveCRV Vaults
> "How has the introduction of the yvBOOST Vault impacted usage of the yveCRV Vault?"
- toc:true
- branch: master
- badges: true
- comments: false
- author: Scott Simpson
- categories: [Curve, Yearn]
- hide: false
```
#hide
#Imports & settings
!pip install plotly --upgrade
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
%matplotlib inline
#%load_ext google.colab.data_table
%load_ext rpy2.ipython
%R options(tidyverse.quiet = TRUE)
%R options(lubridate.quiet = TRUE)
%R options(jsonlite.quiet = TRUE)
%R suppressMessages(library(tidyverse))
%R suppressMessages(library(lubridate))
%R suppressMessages(library(jsonlite))
%R suppressMessages(options(dplyr.summarise.inform = FALSE))
#hide
%%R
#Grab base query from Flipside
df_yvBoost = fromJSON('https://api.flipsidecrypto.com/api/v2/queries/d87b9578-9bdd-4ed1-8da9-da0156d7575b/data/latest', simplifyDataFrame = TRUE)
df_yveCRV = fromJSON('https://api.flipsidecrypto.com/api/v2/queries/8dc06609-8d0e-4f5d-b754-e61082b336fa/data/latest', simplifyDataFrame = TRUE)
#fix the column names
names(df_yvBoost)<-tolower(names(df_yvBoost))
names(df_yveCRV)<-tolower(names(df_yveCRV))
#Change the date to date format
df_yvBoost$date <- as.Date(parse_datetime(df_yvBoost$date))
df_yveCRV$date <- as.Date(parse_datetime(df_yveCRV$date))
#create a date sequence from min date to max date
full_date_range <- tibble(date = seq(min(df_yvBoost$date), max(df_yvBoost$date), by = "days"))
#join in the df_yvBoost frame
df_yvBoost <- full_date_range %>%
left_join(df_yvBoost, by=c('date'))
#Get rid of the dodgy ytoken prices
df_yvBoost <- df_yvBoost %>%
mutate(ytoken_price = if_else(ytoken_price < 1, NA_real_, ytoken_price))
#fill the na prices
df_yvBoost <- df_yvBoost %>%
fill(vault_name) %>%
fill(vault_symbol) %>%
fill(asset_symbol) %>%
fill(exposure) %>%
fill(pricing_symbol, .direction = "downup") %>%
fill(ytoken_price, .direction = "downup") %>%
fill(token_price, .direction = "downup") %>%
replace_na(list(mint_amount = 0, burn_amount = 0, net_ytoken_increase = 0))
#yvBoost tokens on issue
df_yvBoost <- df_yvBoost %>%
arrange(date) %>%
mutate(yvBoost_equiv_CRV_deposit = net_ytoken_increase * ytoken_price,
yvBoost_on_issue = cumsum(net_ytoken_increase))
#define ROI as the annualised 7 day vault return - assume 52 weeks/year
#calculate the 7 day lagged value of ytoken_price
df_yvBoost <- df_yvBoost %>%
mutate(ytoken_lag7 = lag(ytoken_price, n=7, order_by = date)) %>%
mutate(yvBoost_7day_ROI = (ytoken_price - ytoken_lag7) * 52 * 100) %>%
fill(yvBoost_7day_ROI, .direction = "downup")
#create a date sequence from min date to max date
full_date_range <- tibble(date = seq(min(df_yveCRV$date), max(df_yveCRV$date), by = "days"))
#join in the df_yveCRV frame
df_yveCRV <- full_date_range %>%
left_join(df_yveCRV, by=c('date'))
#fill the na prices
df_yveCRV <- df_yveCRV %>%
replace_na(list(threecrv_deposit = 0, yvecrv_minted = 0))
#df_yveCRV tokens on issue
df_yveCRV <- df_yveCRV %>%
arrange(date) %>%
mutate(yveCRV_on_issue = cumsum(yvecrv_minted))
#Grab yveCRV prices from coingecko
cg_api = paste('https://api.coingecko.com/api/v3/coins/vecrv-dao-yvault/market_chart/range?vs_currency=usd&from=',
as.numeric(as.POSIXct(min(df_yveCRV$date))),
'&to=',
as.numeric(as.POSIXct(max(df_yveCRV$date))),
sep = "")
yveCRV_prices = fromJSON(cg_api, simplifyDataFrame = TRUE, flatten = TRUE)
yveCRV_prices <- as_tibble(yveCRV_prices$prices) %>%
mutate(date = as.Date(as.POSIXct(V1/1000, origin="1970-01-01")),
yveCRV_price = V2) %>%
select(date, yveCRV_price)
#join back into the main df
df_yveCRV <- df_yveCRV %>%
left_join(yveCRV_prices, by = c('date'))
#Grab CRV prices from coingecko
cg_api = paste('https://api.coingecko.com/api/v3/coins/curve-dao-token/market_chart/range?vs_currency=usd&from=',
as.numeric(as.POSIXct(min(df_yveCRV$date))),
'&to=',
as.numeric(as.POSIXct(max(df_yveCRV$date))),
sep = "")
CRV_prices = fromJSON(cg_api, simplifyDataFrame = TRUE, flatten = TRUE)
CRV_prices <- as_tibble(CRV_prices$prices) %>%
mutate(date = as.Date(as.POSIXct(V1/1000, origin="1970-01-01")),
CRV_price = V2) %>%
select(date, CRV_price)
#define ROI as the annualised 7 day vault return - assume 52 weeks/year
#ROI is 3CRV received over tokens on issue
df_yveCRV <- df_yveCRV %>%
# drop_na(price) %>%
mutate(yveCRV_7day_ROI = if_else(threecrv_deposit == 0, NA_real_, threecrv_deposit / (yveCRV_on_issue*yveCRV_price)) * 52 * 100) %>%
fill(yveCRV_7day_ROI, .direction = "down")
rm(list = c('full_date_range','yveCRV_prices', 'cg_api'))
#Want date, yveCRV minted, net yvBoost minted
df <- df_yveCRV %>%
left_join(df_yvBoost %>% select(date, net_ytoken_increase, yvBoost_on_issue, yvBoost_7day_ROI, yvBoost_equiv_CRV_deposit), by = c('date'))
#fix dates back up
df$date <- as_datetime(df$date)
#create a week field
df <- df %>%
mutate(week = floor_date(date, 'week'))
#join CRV prices back into the main df
df <- df %>%
left_join(CRV_prices, by = c('date'))
#roll up by week
df_week <- df %>%
group_by(week) %>%
summarise(yveCRV_minted = sum(yvecrv_minted),
yveCRV_7day_ROI = mean(yveCRV_7day_ROI, na.rm = TRUE),
yvBoost_minted = sum(net_ytoken_increase),
yvBoost_7day_ROI = mean(yvBoost_7day_ROI, na.rm = TRUE),
yvBoost_minted_usd = sum(yvBoost_equiv_CRV_deposit * CRV_price),
yvBoost_equiv_CRV_deposit = sum(yvBoost_equiv_CRV_deposit),
yveCRV_minted_usd = sum(yvecrv_minted * CRV_price),
CRV_price = mean(CRV_price),
yveCRV_price = mean(yveCRV_price)
) %>%
ungroup() %>%
arrange(week) %>%
replace_na(list(yvBoost_minted = 0, yvBoost_equiv_CRV_deposit =0, yvBoost_minted_usd = 0)) %>%
mutate(yveCRV_on_issue = cumsum(yveCRV_minted),
yvBoost_on_issue = cumsum(yvBoost_minted),
yvBoost_equiv_CRV_deposit_on_issue = cumsum(yvBoost_equiv_CRV_deposit),
yveCRV_on_issue_usd = yveCRV_on_issue * CRV_price,
yvBoost_on_issue_usd = yvBoost_on_issue * yveCRV_price
)
```
# Yearn yVaults, yveCRV-DAO and yvBOOST
## Yearn yVaults
Yearn is a DeFi protocol which automates yield farming. Users have tokens which they want to hold - Yearn puts those tokens to work by finding the best yield farming opportunities across DeFi. It does this in a gas efficient way for the user, so even small deposits can get decent returns over time.
Yearn users deposit their tokens into yVaults, and receive a token in return which is proportional to their share of the vault capital. A yVault is a smart contract with one or more Strategies sitting behind it. The Strategies are the yield-farming recipes which are created by clever humans (Strategists) and monitored & managed by bots (Keepers). The yVault contains logic which automatically allocates the vault deposits to whichever combination of Strategies gives the best return for the users. The rewards from the yield farming accrue into the vault, so the value of the vault token is always increasing. When a user withdraws from the yVault, they get more tokens than they deposited. This additional amount is the yield the vault has earned on their behalf.
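The share accounting behind this can be sketched in a few lines - the `ToyVault` class below is purely illustrative and is not Yearn's actual contract logic:
```
# Illustrative sketch of yVault share accounting (not Yearn's actual contract code).
class ToyVault:
    def __init__(self):
        self.total_assets = 0.0   # underlying tokens held by the vault
        self.total_shares = 0.0   # yvTokens issued to depositors

    def price_per_share(self):
        return 1.0 if self.total_shares == 0 else self.total_assets / self.total_shares

    def deposit(self, amount):
        shares = amount / self.price_per_share()
        self.total_assets += amount
        self.total_shares += shares
        return shares                 # the user receives yvTokens

    def harvest(self, profit):
        self.total_assets += profit   # strategy gains accrue to the vault

    def withdraw(self, shares):
        amount = shares * self.price_per_share()
        self.total_assets -= amount
        self.total_shares -= shares
        return amount                 # more tokens come back than went in

vault = ToyVault()
shares = vault.deposit(100)    # deposit 100 tokens
vault.harvest(10)              # strategies earn 10 tokens of yield
print(vault.withdraw(shares))  # 110.0 - the extra 10 is the yield earned
```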
Each yVault is structured around a particular underlying token - there are vaults for Eth, USDC, WBTC and many others. Users can deposit in the yVault native token, or they can deposit using any other token & take advantage of Zaps. Zaps are smart contracts which take an input token and swap it for the underlying token in a gas efficient manner. The swap may occur via a number of dexs or dex aggregators, but this is abstracted away for the user. There is a similar feature when withdrawing - the user can withdraw the underlying token from a yVault, or choose to receive their funds in ETH, WBTC, DAI, USDC or USDT. It's important to know that whatever token the user deposits or withdraws, they maintain price exposure to the *underlying token* of the vault whilst deposited.
## yveCRV yVault
Now the veCRV-DAO yVault (also known as the yveCRV Vault) is a little different to the others. It starts with the CRV token, the governance & reward token from [Curve](https://curve.fi). Curve is a dex which specialises in stableswaps - swaps between tokens which have approximately the same value. Examples are ETH/stETH or swaps between dollar pegged stablecoins. Curve has optimised their swap code to make these swaps efficient from both a liquidity impact and fee perspective - see this [post](https://scottincrypto.github.io/analytics/curve/2021/09/19/Curve_Stableswaps.html) for a further exploration of Curve & stablecoin swaps.
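As a toy illustration of why this matters (this is not Curve's actual invariant, just the two extremes it blends between), compare the slippage on a large stablecoin swap under a constant-product curve versus an idealised constant-sum curve:
```
# Toy comparison only - Curve's stableswap invariant sits between these two curves,
# hugging constant-sum while the pool stays balanced.
x = y = 1_000_000.0   # pool reserves of two dollar-pegged stablecoins
trade = 100_000.0     # amount of token X sold into the pool

# Constant product (x * y = k): output shrinks as the pool is pushed off balance.
out_product = y - (x * y) / (x + trade)

# Constant sum (x + y = k): 1:1 output while reserves last.
out_sum = trade

print(f"constant product: {out_product:,.0f} out ({100 * (1 - out_product / trade):.2f}% slippage)")
print(f"constant sum:     {out_sum:,.0f} out (0.00% slippage)")
```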
The CRV token has voting rights in the Curve DAO which makes decisions on the Curve protocol - things like fees, LP rewards, swap parameters and pools launched. In some Curve pools, liquidity providers receive CRV tokens to incentivise liquidity in the pools. CRV tokens are also available on the open market. To encourage users to stay as CRV hodlers, there is a facility to lock CRV tokens into the CRV DAO for a fixed period of up to 4 years. Users receive veCRV tokens (voting escrow Curve Tokens) for doing this, and more tokens are received the longer the locking period. veCRV holders can still participate in governance voting, they receive 50% of Curve trading fees and they qualify for boosted rewards (up to 2.5x) when they provide liquidity in Curve.
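A rough sketch of the locking maths (ignoring the linear decay of the balance as the unlock date approaches) shows why the lock length matters:
```
# Simplified veCRV estimate: the balance scales with the lock time, capped at 4 years.
# This ignores the linear decay of veCRV as the unlock date approaches.
MAX_LOCK_YEARS = 4

def vecrv_received(crv_locked, lock_years):
    lock_years = min(lock_years, MAX_LOCK_YEARS)
    return crv_locked * lock_years / MAX_LOCK_YEARS

print(vecrv_received(1000, 1))  # 250.0 veCRV
print(vecrv_received(1000, 4))  # 1000.0 veCRV - the maximum weighting
```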
The fees generated for veCRV holders are collected in the form of 3CRV tokens (shares in the [Curve tripool](https://curve.fi/3pool)), which can be redeemed for stablecoins if desired. Fee distribution for veCRV holders happens weekly and users need to collect these manually and pay the gas cost for the transactions.
In true Yearn fashion there is a vault & a strategy to maximise the returns from this CRV locking process. This is the yveCRV yVault and it's different to the other vaults in that you *can't withdraw your tokens*. Yearn takes CRV tokens and locks them with the CRV DAO for the maximum 4 year period and continually renews this lock. This maximises the veCRV returns to the yVault. In addition, all Yearn vaults send 10% of earned CRV into this vault for additional boost. The returns to the users are in the form of the 3CRV tokens earned by the veCRV - like the CRV staking contract, these are collectable weekly as an income stream, and must be collected manually. yVault depositors receive yveCRV-DAO tokens as their share in the vault.
## yvBoost yVault
Finally we get to the yvBoost yVault. This vault builds on the yveCRV yVault, but automates the process of collecting the weekly rewards. The Strategy behind this vault collects the 3CRV rewards each week, swaps them for more yveCRV then deposits them back into the vault. The yvBoost vault is a standard Yearn yVault - you can withdraw part or all of your outstanding deposit and any accrued gains at any time. Working in the native tokens of the yVault, users can deposit or withdraw yveCRV-DAO tokens. Alternatively, users can take advantage of the Zap function and deposit pretty much any ERC-20 token into the vault. They can also withdraw using Zap and collect WETH, WBTC, DAI, USDT or USDC. As the Zap conversions occur on the way in and out, the user maintains price exposure to the yveCRV-DAO tokens whilst deposited in the vault.
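The practical difference between the two vaults boils down to manual weekly claiming versus automatic compounding. A back-of-the-envelope comparison, using an assumed weekly yield purely for illustration:
```
# Rough comparison of claiming weekly 3CRV rewards vs auto-compounding them (yvBoost style).
# The 0.3% weekly yield is an assumed figure for illustration only.
weekly_yield = 0.003
weeks = 52
principal = 100.0

# yveCRV-style: rewards are claimed each week and held aside (no compounding).
simple_total = principal + sum(principal * weekly_yield for _ in range(weeks))

# yvBoost-style: each week's rewards are swapped back into the vault position.
compounded_total = principal * (1 + weekly_yield) ** weeks

print(f"Claim weekly:  {simple_total:.2f}")     # ~115.6
print(f"Auto-compound: {compounded_total:.2f}") # ~116.9
```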
The yveCRV yVault launched 4-5 months before the yvBoost yVault. We will examine what impact the launch of the yvBoost yVault had on the usage of the yveCRV yVault.
# yVault Usage - Total Value Locked in USD
If we are to understand the impact of the yvBoost vault on the usage of the yveCRV vault, first we must define what we mean by usage. For a Yearn vault, this is best defined by the Total Value Locked (TVL). This is the sum of the value of the tokens deposited and withdrawn from the vault by users. For the yveCRV vault this is simple - as there are no withdrawals, we just need to sum the deposits. For the yvBoost vault, we need to add the deposits & subtract the withdrawal transactions. We need a common baseline for comparing TVLs, as:
- yveCRV deposits are denominated in CRV
- yvBoost deposits are denominated in yveCRV-DAO
Let's have a look at the TVL denominated in USD over time for these two vaults, shown in the graph below. There isn't too much to discern from this - the value of both vaults declined post May 2021. For the yvBoost vault, this may have been due to withdrawals being higher than deposits, or it could be due to the USD value of yveCRV-DAO tokens dropping. Same with the yveCRV graph - we know that there are no withdrawals, but were the rises due to price or deposits? It seems we need an alternative approach.
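Before switching denominations, it helps to see how these TVL series are built up from flows. A minimal sketch (the column names and figures are illustrative, not the Flipside schema used in the hidden query code):
```
import pandas as pd

# Illustrative daily flows - the real analysis pulls these from Flipside queries.
flows = pd.DataFrame({
    "crv_deposited_yvecrv": [1000.0, 500.0, 0.0],     # CRV minted into yveCRV (no withdrawals possible)
    "yvecrv_deposited_yvboost": [0.0, 800.0, 300.0],  # yveCRV-DAO deposited into yvBoost
    "yvecrv_withdrawn_yvboost": [0.0, 0.0, 100.0],    # yveCRV-DAO withdrawn from yvBoost
    "crv_price_usd": [2.5, 2.4, 2.6],
})

# yveCRV TVL is simply cumulative deposits; yvBoost TVL nets off withdrawals.
flows["yvecrv_tvl_crv"] = flows["crv_deposited_yvecrv"].cumsum()
flows["yvboost_tvl_yvecrv"] = (flows["yvecrv_deposited_yvboost"]
                               - flows["yvecrv_withdrawn_yvboost"]).cumsum()

# The USD view multiplies by a volatile price, which is what muddies the picture.
flows["yvecrv_tvl_usd"] = flows["yvecrv_tvl_crv"] * flows["crv_price_usd"]
print(flows[["yvecrv_tvl_crv", "yvboost_tvl_yvecrv", "yvecrv_tvl_usd"]])
```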
```
#@title
#hide_input
#time plot by week for inputs
df_p = %R df_week %>% select(week, yveCRV_on_issue_usd, yvBoost_on_issue_usd) %>% rename("yveCRV-DAO" = yveCRV_on_issue_usd, "yvBoost" = yvBoost_on_issue_usd) %>% pivot_longer(!week, names_to='measure', values_to='tokens')
fig = px.line(df_p
, x = "week"
, y = "tokens"
, color = 'measure'
, labels=dict(week="Week", measure="Vault", tokens='USD Value')
, title= "Vault TVL in USD"
, template="simple_white", width=800, height=800/1.618
)
fig.update_layout(legend=dict(
yanchor="bottom",
y=0.01,
xanchor="right",
x=0.99,
title_text=None
))
fig.update_yaxes(title_text='Amount (USD)')
fig.update_xaxes(title_text=None)
fig.show()
```
# yVault Usage - Total Value Locked in CRV Tokens
When users deposit CRV into the yveCRV vault, they receive 1 yveCRV-DAO token for each CRV deposited. Given that yveCRV-DAO tokens are then deposited into the yvBoost vault, it makes sense to look at the TVL of these two vaults denominated in CRV. The graph below shows the TVL of the two vaults in equivalent CRV tokens. Now we can see the underlying usage of the vaults independent of the USD price volatility. We saw rapid growth of the yveCRV vault to March 2021, then it levelled off considerably. The launch of yvBoost in April saw rapid takeup of this vault, with the TVL approaching that of the yveCRV vault within 6-7 weeks. At the same time, the yveCRV vault TVL also rose rapidly. This growth in yveCRV was driven by the yvBoost growth because yvBoost requires yveCRV-DAO tokens to deposit. These are obtained either by depositing CRV into yveCRV and minting new tokens, or by purchasing them on the secondary market. The yvBoost vault makes the choice based on what is best value at the time - we will examine the relative pricing a bit later.
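The mint-or-buy decision for sourcing yveCRV-DAO comes down to a price comparison along these lines (a hypothetical sketch - in practice swap liquidity and slippage also feed into the choice):
```
# Hypothetical sketch of the mint-vs-buy decision for sourcing yveCRV-DAO tokens.
# Minting always converts 1 CRV into 1 yveCRV-DAO; buying depends on the market ratio.
def cheapest_source(yvecrv_price_in_crv):
    return "buy on market" if yvecrv_price_in_crv < 1.0 else "mint from CRV"

for ratio in (1.02, 0.90, 0.40):
    print(f"yveCRV/CRV = {ratio:.2f} -> {cheapest_source(ratio)}")
```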
```
#@title
#hide_input
#time plot by week for inputs
df_p = %R df_week %>% select(week, yveCRV_on_issue, yvBoost_equiv_CRV_deposit_on_issue) %>% rename("yveCRV-DAO" = yveCRV_on_issue, "yvBoost" = yvBoost_equiv_CRV_deposit_on_issue) %>% pivot_longer(!week, names_to='measure', values_to='tokens')
fig = px.line(df_p
, x = "week"
, y = "tokens"
, color = 'measure'
, labels=dict(week="Week", measure="Vault", tokens='Equivalent CRV')
, title= "Vault Equivalent Locked CRV Tokens"
, template="simple_white", width=800, height=800/1.618
)
fig.update_layout(legend=dict(
yanchor="bottom",
y=0.01,
xanchor="right",
x=0.99,
title_text=None
))
fig.update_yaxes(title_text='Equivalent CRV Tokens')
fig.update_xaxes(title_text=None)
fig.show()
```
The graph below shows the same data as above, but looks at the net token growth over time rather than the TVL of each vault. Here we see more clearly the impact of yvBoost on the growth of yveCRV. yveCRV grew steadily from Jan-Mar 2021, then growth dropped to almost zero for a month or so. The growth kicked off again once yvBoost launched, and slowed in line with the slowing in yvBoost growth at the end of June.
```
#@title
#hide_input
df_p = %R df_week %>% select(week, yveCRV_minted, yvBoost_equiv_CRV_deposit) #%>% pivot_longer(!week, names_to='measure', values_to='tokens')
fig = make_subplots(rows=2, cols=1, subplot_titles=("yveCRV-DAO Net Token Growth", "yvBoost Net Token Growth"))
fig.append_trace(go.Bar(x=df_p["week"], y=df_p["yveCRV_minted"], name="CRV"), row=1, col=1)
fig.append_trace(go.Bar(x=df_p["week"], y=df_p["yvBoost_equiv_CRV_deposit"], name="CRV"), row=2, col=1)
fig.update_layout(width=800, height=600)
fig.update_layout(template="simple_white", showlegend=False)
fig.update_yaxes(title_text='Value in CRV', row=1, col=1)#, range=[0, 3.5e6])
fig.update_yaxes(title_text='Value in CRV', row=2, col=1)#, range=[0, 3.5e6])
fig.show()
```
# Impact of Pricing
We saw above that yveCRV-DAO tokens were created in the yveCRV vault for use in yvBoost in the period from April-June 2021. The chart below looks at the relative price of yveCRV tokens to CRV tokens. Remember that you can have a one-way transaction and convert CRV to yveCRV. In doing so you give up the ability to get your CRV back, but gain a perpetual share of the revenue of the vault in return. The relative price chart below shows what market participants are valuing this loss of flexibility and future income stream at. In the period from Feb-June, yveCRV traded at an average of 90% of CRV, roughly a 10% discount. This dropped to 75% from June to August, then fell dramatically to below 40% in September. This means that the market is valuing the flexibility of the unencumbered CRV tokens over the income stream by a factor of nearly 3! With yveCRV trading at this much of a discount, it is not surprising that there has been very low growth in the yveCRV vault since July 2021. Any required yveCRV tokens (for depositing in yvBoost) can be purchased on the open market at a steep discount to the underlying CRV tokens.
It's unclear what is driving this pricing mismatch - perhaps the competition for CRV token locking with [Convex](https://www.convexfinance.com/) is causing people to exit their yveCRV positions in search of better yields elsewhere.
```
#@title
#hide_input
#time plot by week for inputs
df_p = %R df_week %>% select(week, yveCRV_price, CRV_price) %>% mutate(ratio = yveCRV_price / CRV_price) #%>% rename("yveCRV-DAO" = yveCRV_price, "CRV" = CRV_price)
fig = px.line(df_p
, x = "week"
, y = "ratio"
# , color = 'measure'
, labels=dict(week="Week", ratio="Ratio")
, title= "yveCRV:CRV Price Ratio"
, template="simple_white", width=800, height=800/1.618
)
fig.update_layout(legend=dict(
yanchor="top",
y=0.99,
xanchor="right",
x=0.99,
title_text=None
))
fig.update_yaxes(title_text='yveCRV Price in CRV')
fig.update_xaxes(title_text=None)
fig.show()
```
# Vault Return on Investment
The pricing above is curious - why is yveCRV valued at such a steep discount to CRV? We will see if there is any impact from the Return on Investment of the vaults in question.
Vault ROI for a normal Yearn yVault is calculated from the price of the yToken relative to the underlying deposited token. Each Vault has an exchange rate built into it - deposit 1 token, get a bit less than one yvtoken in return. Over time, 1 deposit token will buy less and less yvtokens - the price is always rising. For a given investment, the ROI is determined by the following formula, where the Buy & Sell prices are the number of native tokens required to buy the yvtokens:
$$ROI = \frac{Sell\ Price - Buy\ Price}{Buy\ Price}$$
The approach used by the Yearn team on the https://yearn.finance page is to annualise this ROI based on a 7 day rolling period. We have applied this approach to the yvBoost vault.
The ROI calculation for the yveCRV vault is a little simpler - each week there is a deposit of 3CRV tokens into the vault which are the income stream generated from the locked CRV tokens. We take the value of the deposited 3CRV tokens over the value of the CRV tokens in the vault over each 7 day period and annualise it to get the vault ROI. Note, this assumes that users withdraw their returns from the vault.
The ROI of the two vaults are shown in the chart below. We can see that the yvBoost vault tracks the return of the yveCRV vault, with a slight discount on average. It's not clear whether there is any impact on yveCRV/yvBoost usage due to ROI as they track each other fairly reliably. What we do see, however, is an impact on ROI of the yveCRV:CRV pricing ratio above. We can see the yvBoost ROI flippen the yveCRV ROI in late August - this is a direct result of the steep discount of yveCRV relative to CRV. The underlying income of the yvBoost vault is driven by the number of CRV tokens locked, but the value of the vault has fallen due to the drop in yveCRV price relative to CRV. This increases the vault ROI dramatically, amplified by the fact that the underlying 3CRV earnings of the vault are swapped into more yveCRV at a discount. This apparent pricing mismatch is an excellent opportunity to get some great returns from the yvBoost vault.
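Condensed into a hypothetical example, the two calculations look like this:
```
import pandas as pd

# Hypothetical inputs for illustration only.
share_price = pd.Series([1.000, 1.001, 1.002, 1.003, 1.004, 1.005, 1.006, 1.007])  # yvBoost price per share

# yvBoost: annualised 7 day change in the share price (52 weeks per year).
yvboost_roi = (share_price - share_price.shift(7)) / share_price.shift(7) * 52 * 100

# yveCRV: the week's 3CRV distribution over the USD value of the vault, annualised.
weekly_3crv_usd = 150_000.0     # value of the week's 3CRV rewards (assumed)
vault_value_usd = 40_000_000.0  # yveCRV on issue * yveCRV price (assumed)
yvecrv_roi = weekly_3crv_usd / vault_value_usd * 52 * 100

print(round(yvboost_roi.iloc[-1], 1))  # 36.4% annualised
print(round(yvecrv_roi, 1))            # 19.5% annualised
```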
```
#@title
#hide_input
#time plot by week for inputs
df_p = %R df_week %>% select(week, yveCRV_7day_ROI, yvBoost_7day_ROI) %>% rename("yveCRV-DAO" = yveCRV_7day_ROI, "yvBoost" = yvBoost_7day_ROI) %>% pivot_longer(!week, names_to='measure', values_to='tokens')
fig = px.line(df_p
, x = "week"
, y = "tokens"
, color = 'measure'
, labels=dict(week="Week", measure="Vault", tokens='Equivalent CRV')
, title= "Vault Annualised 7 Day ROI"
, template="simple_white", width=800, height=800/1.618
)
fig.update_layout(legend=dict(
yanchor="top",
y=0.99,
xanchor="right",
x=0.99,
title_text=None
))
fig.update_yaxes(title_text='Annualised ROI %')
fig.update_xaxes(title_text=None)
fig.show()
```
# Conclusions
We attempted to answer the question of whether there was an impact on the usage of the yveCRV vault from the introduction of the yvBoost vault - an improved & automated version of yveCRV. We saw that the usage of yveCRV had already dropped prior to the introduction of yvBoost, so there was no opportunity for yveCRV usage to fall much further. The launch of yvBoost, however, generated an upswing of deposits to the yveCRV vault, as yveCRV-DAO tokens were required to deposit into the yvBoost vault. The relative pricing of yveCRV-DAO to CRV meant that minting fresh yveCRV-DAO tokens was the best way to enter yvBoost. Since the initial 6 week upswing of yvBoost, we have seen the growth of both vaults level out with yvBoost declining a little. A surprising observation was the steep drop in value of yveCRV-DAO relative to CRV - it's clear that people want out of their locked CRV positions and are prepared to take a big haircut to do so.
- All on-chain data sourced from the curated tables at [Flipside Crypto](https://flipsidecrypto.com)
- Pricing data sourced from [Coingecko](https://coingecko.com)
|
github_jupyter
|
| 0.36557 | 0.655122 |
# The `dataset` Module
```
from sklearn import datasets
import numpy as np
datasets.*?
boston = datasets.load_boston()
print(boston.DESCR)
X, y = boston.data, boston.target
```
# Creating Sample Data
```
datasets.make_*?
X, y = datasets.make_regression(n_samples=1000, n_features=1,
n_informative=1, noise=15,
bias=1000, random_state=0)
import matplotlib.pyplot as plt
%matplotlib inline
plt.scatter(X, y);
X, y = datasets.make_blobs(n_samples=300, centers=4,
cluster_std=0.6, random_state=0)
plt.scatter(X[:, 0], X[:, 1], s=50);
```
# Scaling Data
```
from sklearn import preprocessing
X, y = boston.data, boston.target
X[:, :3].mean(axis=0)
X[:, :3].std(axis=0)
plt.plot(X[:, :3]);
```
### `preprocessing.scale`
`scale` centers and scales the data using the following formula:
$$x_{scaled} = \frac{x - \mu}{\sigma}$$
```
X_2 = preprocessing.scale(X[:, :3])
X_2.mean(axis=0)
X_2.std(axis=0)
plt.plot(X_2);
```
### `StandardScaler`
Same as `preprocessing.scale` but persists scale settings across uses.
```
scaler = preprocessing.StandardScaler()
scaler.fit(X[:, :3])
X_3 = scaler.transform(X[:, :3])
X_3.mean(axis=0)
X_3.std(axis=0)
plt.plot(X_3);
```
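Because the fitted scaler stores the training statistics, the same object can be reused to put new observations on the same scale:
```
# The fitted scaler remembers the training mean and standard deviation,
# so new rows are standardised using those same statistics.
X_new = X[:5, :3]
scaler.transform(X_new)
```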
### `MinMaxScaler`
Scales data within a specified range.
```
scaler = preprocessing.MinMaxScaler()
scaler.fit(X[:, :3])
X_4 = scaler.transform(X[:, :3])
X_4.max(axis=0)
X_4.std(axis=0)
plt.plot(X_4);
scaler = preprocessing.MinMaxScaler(feature_range=(-4, 4))
scaler.fit(X[:, :3])
X_5 = scaler.transform(X[:, :3])
plt.plot(X_5);
```
# Binarizing Data
### `preprocessing.binarize`
```
new_target = preprocessing.binarize(boston.target, threshold=boston.target.mean())
new_target[:, :5]
(boston.target[:5] > boston.target.mean()).astype(int)
```
### `Binarizer`
```
bin = preprocessing.Binarizer(boston.target.mean())
new_target = bin.fit_transform(boston.target)
new_target[:, :5]
```
# Working with Categorical Variables
### `OneHotEncoder`
```
iris = datasets.load_iris()
X = iris.data
y = iris.target
d = np.column_stack((X, y))
encoder = preprocessing.OneHotEncoder()
encoder.fit_transform(d[:, -1:]).toarray()[:5]
```
### `DictVectorizer`
```
from sklearn.feature_extraction import DictVectorizer
dv = DictVectorizer()
dict = [{'species': iris.target_names[i]} for i in y]
dv.fit_transform(dict).toarray()[:5]
```
### Patsy
```
import patsy
patsy.dmatrix('0 + C(species)', {'species': iris.target})
```
# Binarizing Label Features
### `LabelBinarizer`
```
from sklearn.preprocessing import LabelBinarizer
binarizer = LabelBinarizer()
new_target = binarizer.fit_transform(y)
y.shape, new_target.shape
new_target[:5]
new_target[-5:]
binarizer.classes_
```
### `LabelBinarizer` and labels
```
binarizer = LabelBinarizer(neg_label=-1000, pos_label=1000)
binarizer.fit_transform(y)[:5]
```
# Imputing Missing Values through Various Strategies
```
iris = datasets.load_iris()
iris_X = iris.data
masking_array = np.random.binomial(1, .25, iris_X.shape).astype(bool)
iris_X[masking_array] = np.nan
masking_array[:5]
iris_X[:5]
```
By default, Imputer fills in missing values with the mean.
```
impute = preprocessing.Imputer()
iris_X_prime = impute.fit_transform(iris_X)
iris_X_prime[:5]
impute = preprocessing.Imputer(strategy='median')
iris_X_prime = impute.fit_transform(iris_X)
iris_X_prime[:5]
iris_X[np.isnan(iris_X)] = -1
iris_X[:5]
impute = preprocessing.Imputer(missing_values=-1)
iris_X_prime = impute.fit_transform(iris_X)
iris_X_prime[:5]
```
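Note that `preprocessing.Imputer` was removed in later scikit-learn releases. If the cells above fail on a newer version, the modern equivalent is `SimpleImputer` from `sklearn.impute`; a sketch mirroring the last example:
```
from sklearn.impute import SimpleImputer
impute = SimpleImputer(missing_values=-1, strategy='mean')
iris_X_prime = impute.fit_transform(iris_X)
iris_X_prime[:5]
```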
# Using Pipelines for Multiple Preprocessing Steps
```
mat = datasets.make_spd_matrix(10)
masking_array = np.random.binomial(1, .1, mat.shape).astype(bool)
mat[masking_array] = np.nan
mat[:4, :4]
```
How to create a pipeline:
```
from sklearn import pipeline
pipe = pipeline.Pipeline([('impute', impute), ('scaler', scaler)])
pipe
new_mat = pipe.fit_transform(mat)
new_mat[:4, :4]
```
To be included in Pipeline, objects should have `fit`, `transform`, and `fit_transform` methods.
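That means any object with those methods can act as a pipeline step. A toy sketch (the `AddConstant` class is purely illustrative and not part of scikit-learn); inheriting from `TransformerMixin` supplies `fit_transform` automatically:
```
from sklearn.base import BaseEstimator, TransformerMixin

class AddConstant(BaseEstimator, TransformerMixin):
    """Toy transformer that adds a constant to every feature."""
    def __init__(self, constant=1.0):
        self.constant = constant
    def fit(self, X, y=None):
        return self                 # nothing to learn from the data
    def transform(self, X):
        return X + self.constant

step = AddConstant(10.0)
step.fit_transform(np.arange(6.0).reshape(3, 2))
```
Such an object can be dropped into `pipeline.Pipeline` exactly like the built-in preprocessors.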
# Reducing Dimensionality with PCA (Principal Component Analysis)
```
iris = datasets.load_iris()
iris_X = iris.data
from sklearn import decomposition
pca = decomposition.PCA()
pca
iris_pca = pca.fit_transform(iris_X)
iris_pca[:5]
```
PCA finds the orthogonal directions (principal components) along which the data varies most; `explained_variance_ratio_` reports the fraction of the total variance captured by each component:
```
pca.explained_variance_ratio_
```
High-dimensionality is problematic in data analysis. Consider representing data in fewer dimensions when models overfit on high-dimensional datasets.
```
pca = decomposition.PCA(n_components=2)
iris_X_prime = pca.fit_transform(iris_X)
iris_X.shape, iris_X_prime.shape
plt.scatter(iris_X_prime[:50, 0], iris_X_prime[:50, 1]);
plt.scatter(iris_X_prime[50:100, 0], iris_X_prime[50:100, 1]);
plt.scatter(iris_X_prime[100:150, 0], iris_X_prime[100:150, 1]);
pca.explained_variance_ratio_.sum()
```
You can also construct a PCA by passing the fraction of variance to be explained as `n_components`:
```
pca = decomposition.PCA(n_components=.98)
iris_X_prime = pca.fit(iris_X)
pca.explained_variance_ratio_.sum()
```
# Using Factor Analysis for Decomposition
Factor analysis differs from PCA in that it explicitly models the observed (explicit) features as combinations of a smaller set of latent (implicit) factors plus per-feature noise.
```
from sklearn.decomposition import FactorAnalysis
fa = FactorAnalysis(n_components=2)
iris_two_dim = fa.fit_transform(iris.data)
iris_two_dim[:5]
```
# Kernel PCA for Nonlinear Dimensionality Reduction
When data is not linearly separable, kernel PCA can help. Here, the data is implicitly mapped by the kernel function and PCA is then performed in that feature space.
```
A1_mean = [1, 1]
A1_cov = [[2, .99], [1, 1]]
A1 = np.random.multivariate_normal(A1_mean, A1_cov, 50)
A2_mean = [5, 5]
A2_cov = [[2, .99], [1, 1]]
A2 = np.random.multivariate_normal(A2_mean, A2_cov, 50)
A = np.vstack((A1, A2))
B_mean = [5, 0]
B_cov = [[.5, -1], [-.9, .5]]
B = np.random.multivariate_normal(B_mean, B_cov, 100)
plt.scatter(A[:, 0], A[:, 1]);
plt.scatter(B[:, 0], B[:, 1]);
kpca = decomposition.KernelPCA(kernel='cosine', n_components=1)
AB = np.vstack((A, B))
AB_transformed = kpca.fit_transform(AB)
plt.scatter(AB_transformed[:50], np.zeros(AB_transformed[:50].shape), alpha=0.5);
plt.scatter(AB_transformed[50:], np.zeros(AB_transformed[50:].shape)+0.001, alpha=0.5);
pca = decomposition.PCA(n_components=2)
AB_prime = pca.fit_transform(AB)
plt.scatter(AB_prime[:, 0], np.zeros(AB_prime[:, 0].shape), alpha=0.5);
plt.scatter(AB_prime[:, 1], np.zeros(AB_prime[:, 1].shape)+0.001, alpha=0.5);
```
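The cosine kernel is only one choice; other kernels, such as an RBF kernel with its `gamma` parameter, can give very different projections, and the right choice is data-dependent. A quick sketch reusing the `AB` data (the value `gamma=0.5` is arbitrary):
```
kpca_rbf = decomposition.KernelPCA(kernel='rbf', gamma=0.5, n_components=1)
AB_rbf = kpca_rbf.fit_transform(AB)
plt.scatter(AB_rbf[:100], np.zeros(AB_rbf[:100].shape), alpha=0.5);
plt.scatter(AB_rbf[100:], np.zeros(AB_rbf[100:].shape) + 0.001, alpha=0.5);
```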
# Using Truncated SVD to Reduce Dimensionality
Singular Value Decomposition (SVD) factors a matrix `M` into three matrices: `U`, `Σ`, and `V`. Whereas PCA factors the covariance matrix, SVD factors the data matrix itself.
A full SVD keeps as many components as the smaller dimension of the matrix; truncated SVD keeps only the specified number of components, producing a dataset with that many columns.
```
iris = datasets.load_iris()
iris_data = iris.data
iris_target = iris.target
from sklearn.decomposition import TruncatedSVD
svd = TruncatedSVD(2)
iris_transformed = svd.fit_transform(iris_data)
iris_data[:5]
iris_transformed[:5]
plt.scatter(iris_data[:50, 0], iris_data[:50, 2]);
plt.scatter(iris_data[50:100, 0], iris_data[50:100, 2]);
plt.scatter(iris_data[100:150, 0], iris_data[100:150, 2]);
plt.scatter(iris_transformed[:50, 0], -iris_transformed[:50, 1]);
plt.scatter(iris_transformed[50:100, 0], -iris_transformed[50:100, 1]);
plt.scatter(iris_transformed[100:150, 0], -iris_transformed[100:150, 1]);
```
### How It Works
```
from scipy.linalg import svd
D = np.array([[1, 2], [1, 3], [1, 4]])
D
U, S, V = svd(D, full_matrices=False)
U.shape, S.shape, V.shape
np.dot(U.dot(np.diag(S)), V)
new_S = S[0]
new_U = U[:, 0]
new_U.dot(new_S)
```
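The connection back to `TruncatedSVD` is that keeping only the first `k` singular values and the corresponding columns of `U` gives the same rank-`k` projection (up to a possible sign flip per column). A sketch on the iris data:
```
U_full, S_full, V_full = svd(iris_data, full_matrices=False)
k = 2
rank_k = U_full[:, :k] * S_full[:k]  # scale each kept column by its singular value
rank_k[:5]
iris_transformed[:5]                 # compare with the TruncatedSVD output above
```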
# Decomposition to Classify with DictionaryLearning
`DictionaryLearning` learns a dictionary of basis vectors (atoms) such that each sample can be represented as a sparse combination of those atoms.
```
from sklearn.decomposition import DictionaryLearning
dl = DictionaryLearning(3) # 3 species of iris
transformed = dl.fit_transform(iris_data[::2])
transformed[:5]
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(transformed[0:25, 0], transformed[0:25, 1], transformed[0:25, 2]);
ax.scatter(transformed[25:50, 0], transformed[25:50, 1], transformed[25:50, 2]);
ax.scatter(transformed[50:75, 0], transformed[50:75, 1], transformed[50:75, 2]);
transformed = dl.transform(iris_data[1::2])
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(transformed[0:25, 0], transformed[0:25, 1], transformed[0:25, 2]);
ax.scatter(transformed[25:50, 0], transformed[25:50, 1], transformed[25:50, 2]);
ax.scatter(transformed[50:75, 0], transformed[50:75, 1], transformed[50:75, 2]);
```
# Putting it All Together with Pipelines
```
iris = datasets.load_iris()
iris_data = iris.data
mask = np.random.binomial(1, .25, iris_data.shape).astype(bool)
iris_data[mask] = np.nan
iris_data[:5]
pca = decomposition.PCA()
imputer = preprocessing.Imputer()
pipe = pipeline.Pipeline([('imputer', imputer), ('pca', pca)])
iris_data_transformed = pipe.fit_transform(iris_data)
iris_data_transformed[:5]
pipe2 = pipeline.make_pipeline(imputer, pca)
pipe2.steps
iris_data_transformed2 = pipe2.fit_transform(iris_data)
iris_data_transformed2[:5]
```
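A convenience of pipelines is that the parameters of individual steps can be set through the pipeline itself using the `<step name>__<parameter>` naming convention:
```
pipe.set_params(pca__n_components=2)
pipe.fit_transform(iris_data)[:5]
```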
# Using Gaussian Processes for Regression
```
boston = datasets.load_boston()
boston_X = boston.data
boston_y = boston.target
train_set = np.random.choice([True, False], len(boston_y), p=[.75, .25])
from sklearn.gaussian_process import GaussianProcess
gp = GaussianProcess()
gp.fit(boston_X[train_set], boston_y[train_set])
test_preds = gp.predict(boston_X[~train_set])
f, ax = plt.subplots(figsize=(10, 7), nrows=3)
f.tight_layout()
ax[0].plot(range(len(test_preds)), test_preds, label='Predicted Values');
ax[0].plot(range(len(test_preds)), boston_y[~train_set], label='Actual Values');
ax[0].set_title('Predicted vs Actual');
ax[0].legend(loc='best');
ax[1].plot(range(len(test_preds)), test_preds - boston_y[~train_set]);
ax[1].set_title('Plotted Residuals');
ax[2].hist(test_preds - boston_y[~train_set]);
ax[2].set_title('Histogram of Residuals');
```
You can tune `regr` and `theta0` to get different predictions:
```
gp = GaussianProcess(regr='linear', theta0=5e-1)
gp.fit(boston_X[train_set], boston_y[train_set]);
linear_preds = gp.predict(boston_X[~train_set])
f, ax = plt.subplots(figsize=(7, 5))
f.tight_layout()
ax.hist(test_preds - boston_y[~train_set], label='Residuals Original', color='b', alpha=.5);
ax.hist(linear_preds - boston_y[~train_set], label='Residuals Linear', color='r', alpha=.5);
ax.set_title('Residuals');
ax.legend(loc='best');
f, ax = plt.subplots(figsize=(10, 7), nrows=3)
f.tight_layout()
ax[0].plot(range(len(linear_preds)), linear_preds, label='Predicted Linear Values');
ax[0].plot(range(len(linear_preds)), boston_y[~train_set], label='Actual Values');
ax[0].set_title('Predicted Linear vs Actual');
ax[0].legend(loc='best');
ax[1].plot(range(len(linear_preds)), linear_preds - boston_y[~train_set]);
ax[1].set_title('Plotted Residuals');
ax[2].hist(linear_preds - boston_y[~train_set]);
ax[2].set_title('Histogram of Residuals');
np.power(test_preds - boston_y[~train_set], 2).mean(), np.power(linear_preds - boston_y[~train_set], 2).mean()
```
### Measuring Uncertainty
```
test_preds, MSE = gp.predict(boston_X[~train_set], eval_MSE=True)
MSE[:5]
f, ax = plt.subplots(figsize=(7, 5))
n = 20
rng = range(n)
ax.scatter(rng, test_preds[:n]);
ax.errorbar(rng, test_preds[:n], yerr=1.96*MSE[:n]);
ax.set_title('Predictions with Error Bars');
ax.set_xlim((-1, 21));
```
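One caveat: the second value returned with `eval_MSE=True` is a mean squared error, i.e. a variance-scale quantity, so error bars meant to suggest roughly a 95% interval should use its square root (the standard deviation) rather than the variance itself. A corrected sketch:
```
sigma = np.sqrt(MSE[:n])  # convert the variance-scale MSE to a standard deviation
f, ax = plt.subplots(figsize=(7, 5))
ax.scatter(rng, test_preds[:n]);
ax.errorbar(rng, test_preds[:n], yerr=1.96 * sigma);
ax.set_title('Predictions with Error Bars (sqrt of MSE)');
ax.set_xlim((-1, 21));
```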
# Defining the Gaussian Process Object Directly
```
from sklearn.gaussian_process import regression_models
X, y = datasets.make_regression(1000, 1, 1)
regression_models.constant(X)[:5]
regression_models.linear(X)[:5]
regression_models.quadratic(X)[:5]
```
# Using Stochastic Gradient Descent for Regression
```
X, y = datasets.make_regression((int(1e6)))
```
Size of the regression (MB):
```
X.nbytes / 1e6
from sklearn import linear_model
sgd = linear_model.SGDRegressor()
train = np.random.choice([True, False], size=len(y), p=[.75, .25])
sgd.fit(X[train], y[train])
```
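The main reason to reach for `SGDRegressor` on data of this size is that it can also be trained incrementally with `partial_fit`, so the full dataset never has to sit in memory at once. A minimal sketch, streaming the training rows in hypothetical chunks of 100,000:
```
sgd = linear_model.SGDRegressor()
X_train, y_train = X[train], y[train]
chunk = 100000
for start in range(0, len(y_train), chunk):
    sgd.partial_fit(X_train[start:start + chunk], y_train[start:start + chunk])
preds = sgd.predict(X[~train])
```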
|
github_jupyter
|
from sklearn import datasets
import numpy as np
datasets.*?
boston = datasets.load_boston()
print(boston.DESCR)
X, y = boston.data, boston.target
datasets.make_*?
X, y = datasets.make_regression(n_samples=1000, n_features=1,
n_informative=1, noise=15,
bias=1000, random_state=0)
import matplotlib.pyplot as plt
%matplotlib inline
plt.scatter(X, y);
X, y = datasets.make_blobs(n_samples=300, centers=4,
cluster_std=0.6, random_state=0)
plt.scatter(X[:, 0], X[:, 1], s=50);
from sklearn import preprocessing
X, y = boston.data, boston.target
X[:, :3].mean(axis=0)
X[:, :3].std(axis=0)
plt.plot(X[:, :3]);
X_2 = preprocessing.scale(X[:, :3])
X_2.mean(axis=0)
X_2.std(axis=0)
plt.plot(X_2);
scaler = preprocessing.StandardScaler()
scaler.fit(X[:, :3])
X_3 = scaler.transform(X[:, :3])
X_3.mean(axis=0)
X_3.std(axis=0)
plt.plot(X_3);
scaler = preprocessing.MinMaxScaler()
scaler.fit(X[:, :3])
X_4 = scaler.transform(X[:, :3])
X_4.max(axis=0)
X_4.std(axis=0)
plt.plot(X_4);
scaler = preprocessing.MinMaxScaler(feature_range=(-4, 4))
scaler.fit(X[:, :3])
X_5 = scaler.transform(X[:, :3])
plt.plot(X_5);
new_target = preprocessing.binarize(boston.target, threshold=boston.target.mean())
new_target[:, :5]
(boston.target[:5] > boston.target.mean()).astype(int)
bin = preprocessing.Binarizer(boston.target.mean())
new_target = bin.fit_transform(boston.target)
new_target[:, :5]
iris = datasets.load_iris()
X = iris.data
y = iris.target
d = np.column_stack((X, y))
encoder = preprocessing.OneHotEncoder()
encoder.fit_transform(d[:, -1:]).toarray()[:5]
from sklearn.feature_extraction import DictVectorizer
dv = DictVectorizer()
dict = [{'species': iris.target_names[i]} for i in y]
dv.fit_transform(dict).toarray()[:5]
import patsy
patsy.dmatrix('0 + C(species)', {'species': iris.target})
from sklearn.preprocessing import LabelBinarizer
binarizer = LabelBinarizer()
new_target = binarizer.fit_transform(y)
y.shape, new_target.shape
new_target[:5]
new_target[-5:]
binarizer.classes_
binarizer = LabelBinarizer(neg_label=-1000, pos_label=1000)
binarizer.fit_transform(y)[:5]
iris = datasets.load_iris()
iris_X = iris.data
masking_array = np.random.binomial(1, .25, iris_X.shape).astype(bool)
iris_X[masking_array] = np.nan
masking_array[:5]
iris_X[:5]
impute = preprocessing.Imputer()
iris_X_prime = impute.fit_transform(iris_X)
iris_X_prime[:5]
impute = preprocessing.Imputer(strategy='median')
iris_X_prime = impute.fit_transform(iris_X)
iris_X_prime[:5]
iris_X[np.isnan(iris_X)] = -1
iris_X[:5]
impute = preprocessing.Imputer(missing_values=-1)
iris_X_prime = impute.fit_transform(iris_X)
iris_X_prime[:5]
mat = datasets.make_spd_matrix(10)
masking_array = np.random.binomial(1, .1, mat.shape).astype(bool)
mat[masking_array] = np.nan
mat[:4, :4]
from sklearn import pipeline
pipe = pipeline.Pipeline([('impute', impute), ('scaler', scaler)])
pipe
new_mat = pipe.fit_transform(mat)
new_mat[:4, :4]
iris = datasets.load_iris()
iris_X = iris.data
from sklearn import decomposition
pca = decomposition.PCA()
pca
iris_pca = pca.fit_transform(iris_X)
iris_pca[:5]
pca.explained_variance_ratio_
pca = decomposition.PCA(n_components=2)
iris_X_prime = pca.fit_transform(iris_X)
iris_X.shape, iris_X_prime.shape
plt.scatter(iris_X_prime[:50, 0], iris_X_prime[:50, 1]);
plt.scatter(iris_X_prime[50:100, 0], iris_X_prime[50:100, 1]);
plt.scatter(iris_X_prime[100:150, 0], iris_X_prime[100:150, 1]);
pca.explained_variance_ratio_.sum()
pca = decomposition.PCA(n_components=.98)
iris_X_prime = pca.fit(iris_X)
pca.explained_variance_ratio_.sum()
from sklearn.decomposition import FactorAnalysis
fa = FactorAnalysis(n_components=2)
iris_two_dim = fa.fit_transform(iris.data)
iris_two_dim[:5]
A1_mean = [1, 1]
A1_cov = [[2, .99], [1, 1]]
A1 = np.random.multivariate_normal(A1_mean, A1_cov, 50)
A2_mean = [5, 5]
A2_cov = [[2, .99], [1, 1]]
A2 = np.random.multivariate_normal(A2_mean, A2_cov, 50)
A = np.vstack((A1, A2))
B_mean = [5, 0]
B_cov = [[.5, -1], [-.9, .5]]
B = np.random.multivariate_normal(B_mean, B_cov, 100)
plt.scatter(A[:, 0], A[:, 1]);
plt.scatter(B[:, 0], B[:, 1]);
kpca = decomposition.KernelPCA(kernel='cosine', n_components=1)
AB = np.vstack((A, B))
AB_transformed = kpca.fit_transform(AB)
plt.scatter(AB_transformed[:50], np.zeros(AB_transformed[:50].shape), alpha=0.5);
plt.scatter(AB_transformed[50:], np.zeros(AB_transformed[50:].shape)+0.001, alpha=0.5);
pca = decomposition.PCA(n_components=2)
AB_prime = pca.fit_transform(AB)
plt.scatter(AB_prime[:, 0], np.zeros(AB_prime[:, 0].shape), alpha=0.5);
plt.scatter(AB_prime[:, 1], np.zeros(AB_prime[:, 1].shape)+0.001, alpha=0.5);
iris = datasets.load_iris()
iris_data = iris.data
iris_target = iris.target
from sklearn.decomposition import TruncatedSVD
svd = TruncatedSVD(2)
iris_transformed = svd.fit_transform(iris_data)
iris_data[:5]
iris_transformed[:5]
plt.scatter(iris_data[:50, 0], iris_data[:50, 2]);
plt.scatter(iris_data[50:100, 0], iris_data[50:100, 2]);
plt.scatter(iris_data[100:150, 0], iris_data[100:150, 2]);
plt.scatter(iris_transformed[:50, 0], -iris_transformed[:50, 1]);
plt.scatter(iris_transformed[50:100, 0], -iris_transformed[50:100, 1]);
plt.scatter(iris_transformed[100:150, 0], -iris_transformed[100:150, 1]);
from scipy.linalg import svd
D = np.array([[1, 2], [1, 3], [1, 4]])
D
U, S, V = svd(D, full_matrices=False)
U.shape, S.shape, V.shape
np.dot(U.dot(np.diag(S)), V)
new_S = S[0]
new_U = U[:, 0]
new_U.dot(new_S)
from sklearn.decomposition import DictionaryLearning
dl = DictionaryLearning(3) # 3 species of iris
transformed = dl.fit_transform(iris_data[::2])
transformed[:5]
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(transformed[0:25, 0], transformed[0:25, 1], transformed[0:25, 2]);
ax.scatter(transformed[25:50, 0], transformed[25:50, 1], transformed[25:50, 2]);
ax.scatter(transformed[50:75, 0], transformed[50:75, 1], transformed[50:75, 2]);
transformed = dl.transform(iris_data[1::2])
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(transformed[0:25, 0], transformed[0:25, 1], transformed[0:25, 2]);
ax.scatter(transformed[25:50, 0], transformed[25:50, 1], transformed[25:50, 2]);
ax.scatter(transformed[50:75, 0], transformed[50:75, 1], transformed[50:75, 2]);
iris = datasets.load_iris()
iris_data = iris.data
mask = np.random.binomial(1, .25, iris_data.shape).astype(bool)
iris_data[mask] = np.nan
iris_data[:5]
pca = decomposition.PCA()
imputer = preprocessing.Imputer()
pipe = pipeline.Pipeline([('imputer', imputer), ('pca', pca)])
iris_data_transformed = pipe.fit_transform(iris_data)
iris_data_transformed[:5]
pipe2 = pipeline.make_pipeline(imputer, pca)
pipe2.steps
iris_data_transformed2 = pipe2.fit_transform(iris_data)
iris_data_transformed2[:5]
boston = datasets.load_boston()
boston_X = boston.data
boston_y = boston.target
train_set = np.random.choice([True, False], len(boston_y), p=[.75, .25])
from sklearn.gaussian_process import GaussianProcess
gp = GaussianProcess()
gp.fit(boston_X[train_set], boston_y[train_set])
test_preds = gp.predict(boston_X[~train_set])
f, ax = plt.subplots(figsize=(10, 7), nrows=3)
f.tight_layout()
ax[0].plot(range(len(test_preds)), test_preds, label='Predicted Values');
ax[0].plot(range(len(test_preds)), boston_y[~train_set], label='Actual Values');
ax[0].set_title('Predicted vs Actual');
ax[0].legend(loc='best');
ax[1].plot(range(len(test_preds)), test_preds - boston_y[~train_set]);
ax[1].set_title('Plotted Residuals');
ax[2].hist(test_preds - boston_y[~train_set]);
ax[2].set_title('Histogram of Residuals');
gp = GaussianProcess(regr='linear', theta0=5e-1)
gp.fit(boston_X[train_set], boston_y[train_set]);
linear_preds = gp.predict(boston_X[~train_set])
f, ax = plt.subplots(figsize=(7, 5))
f.tight_layout()
ax.hist(test_preds - boston_y[~train_set], label='Residuals Original', color='b', alpha=.5);
ax.hist(linear_preds - boston_y[~train_set], label='Residuals Linear', color='r', alpha=.5);
ax.set_title('Residuals');
ax.legend(loc='best');
f, ax = plt.subplots(figsize=(10, 7), nrows=3)
f.tight_layout()
ax[0].plot(range(len(linear_preds)), linear_preds, label='Predicted Linear Values');
ax[0].plot(range(len(linear_preds)), boston_y[~train_set], label='Actual Values');
ax[0].set_title('Predicted Linear vs Actual');
ax[0].legend(loc='best');
ax[1].plot(range(len(linear_preds)), linear_preds - boston_y[~train_set]);
ax[1].set_title('Plotted Residuals');
ax[2].hist(linear_preds - boston_y[~train_set]);
ax[2].set_title('Histogram of Residuals');
np.power(test_preds - boston_y[~train_set], 2).mean(), np.power(linear_preds - boston_y[~train_set], 2).mean()
test_preds, MSE = gp.predict(boston_X[~train_set], eval_MSE=True)
MSE[:5]
f, ax = plt.subplots(figsize=(7, 5))
n = 20
rng = range(n)
ax.scatter(rng, test_preds[:n]);
ax.errorbar(rng, test_preds[:n], yerr=1.96*MSE[:n]);
ax.set_title('Predictions with Error Bars');
ax.set_xlim((-1, 21));
from sklearn.gaussian_process import regression_models
X, y = datasets.make_regression(1000, 1, 1)
regression_models.constant(X)[:5]
regression_models.linear(X)[:5]
regression_models.quadratic(X)[:5]
X, y = datasets.make_regression((int(1e6)))
X.nbytes / 1e6
from sklearn import linear_model
sgd = linear_model.SGDRegressor()
train = np.random.choice([True, False], size=len(y), p=[.75, .25])
sgd.fit(X[train], y[train])
| 0.70304 | 0.972152 |
# Now You Code 3: Final Grade in IST256
# Part 1
Our Course Syllabus has a grading scale here:
http://ist256.syr.edu/syllabus/#grading-scale
Write a Python program that inputs a number of points earned out of 600 and then
outputs the registrar letter grade.
For example:
IST256 Grade Calculator
Enter total points out of 600: 550
Grade: A-
## Step 1: Problem Analysis
Inputs: total points earned (out of 600)
Outputs: letter grade
Algorithm (Steps in Program): input the point total, use if/elif comparisons to find which grade range it falls in, then output the corresponding letter grade.
```
#Step 2: write code here
grade = float(input("Enter your number grade here: "))
if (299>=grade>=0):
print("You have an F")
elif (359>=grade>=300):
print("You have a D")
elif (389>=grade>=360):
print("You have a C-")
elif (419>=grade>=390):
print("You have a C")
elif (449>=grade>=420):
print("You have a C+")
elif (479>=grade>=450):
print("You have a B-")
elif (509>=grade>=480):
print("You have a B")
elif (539>=grade>=510):
print("You have a B+")
elif (569>=grade>=540):
print("You have an A-")
elif (600>=grade>=570):
print("You have an A")
```
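A more compact variation on the same idea (not required for the assignment) is to store the grading scale as a list of cutoffs and walk down it. Checking the cutoffs in descending order with `>=` also avoids any gaps between ranges when the input is not a whole number:
```
scale = [(570, "A"), (540, "A-"), (510, "B+"), (480, "B"), (450, "B-"),
         (420, "C+"), (390, "C"), (360, "C-"), (300, "D"), (0, "F")]
grade = float(input("Enter total points out of 600: "))
for cutoff, letter in scale:
    if grade >= cutoff:
        print("Grade:", letter)
        break
```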
# Part 2
Now that you got it working, re-write your code to handle bad input. Specifically:
- non integer values
- integer values outside the 0 to 600 range.
**Note:** Exception handling is not part of our algorithm. It's a programming concern, not a problem-solving concern!
```
## Step 2 (again): write code again but handle errors with try...except
try:
grade = float(input("Enter your number grade here: "))
if (grade>600) or (grade<0):
print("This is not a valid grade!")
elif (299>=grade>=0):
print("You have an F")
elif (359>=grade>=300):
print("You have a D")
elif (389>=grade>=360):
print("You have a C-")
elif (419>=grade>=390):
print("You have a C")
elif (449>=grade>=420):
print("You have a C+")
elif (479>=grade>=450):
print("You have a B-")
elif (509>=grade>=480):
print("You have a B")
elif (539>=grade>=510):
print("You have a B+")
elif (569>=grade>=540):
print("You have an A-")
elif (600>=grade>=570):
print("You have an A")
except ValueError:
print("That's not a valid grade!")
```
## Step 3: Questions
1. What specific Python Error are we handling (please provide the name of it)? ValueError
2. How many times must you execute this program and check the output before you can be reasonably assured your code is correct? Explain. Twelve times, once for each different possible output outcome.
3. When testing this program, do you think it is more important to test numbers in the middle of a grade range or exactly on the boundary between one grade range and the next? Justify your response. On the boundary between one grade range and the next, because off-by-one mistakes in the comparisons are most likely to show up there and a boundary value could be reported as the wrong letter.
## Reminder of Evaluation Criteria
1. Was the problem attempted (analysis, code, and answered questions)?
2. Was the problem analysis thought out? (Does the program match the plan?)
3. Does the code execute without syntax error?
4. Does the code solve the intended problem?
5. Is the code well written? (easy to understand, modular, and self-documenting, handles errors)
|
github_jupyter
|
#Step 2: write code here
grade = float(input("Enter your number grade here: "))
if (299>=grade>=0):
print("You have an F")
elif (359>=grade>=300):
print("You have a D")
elif (389>=grade>=360):
print("You have a C-")
elif (419>=grade>=390):
print("You have a C")
elif (449>=grade>=420):
print("You have a C+")
elif (479>=grade>=450):
print("You have a B-")
elif (509>=grade>=480):
print("You have a B")
elif (539>=grade>=510):
print("You have a B+")
elif (569>=grade>=540):
print("You have an A-")
elif (600>=grade>=570):
print("You have an A")
## Step 2 (again): write code again but handle errors with try...except
try:
grade = float(input("Enter your number grade here: "))
if (grade>600) or (grade<0):
print("This is not a valid grade!")
elif (299>=grade>=0):
print("You have an F")
elif (359>=grade>=300):
print("You have a D")
elif (389>=grade>=360):
print("You have a C-")
elif (419>=grade>=390):
print("You have a C")
elif (449>=grade>=420):
print("You have a C+")
elif (479>=grade>=450):
print("You have a B-")
elif (509>=grade>=480):
print("You have a B")
elif (539>=grade>=510):
print("You have a B+")
elif (569>=grade>=540):
print("You have an A-")
elif (600>=grade>=570):
print("You have an A")
except ValueError:
print("That's not a valid grade!")
| 0.106041 | 0.890628 |
# Installing Docker Engine
---
Install [Docker Engine](https://www.docker.com/) on the OpenHPC environment that has been built.
## Prerequisites
Confirm that the prerequisites for running this notebook are satisfied.
The prerequisites are as follows:
* An OpenHPC environment has already been built
* Each node of the OpenHPC environment is configured so that it can be operated with Ansible
To check the values that were specified when the VC nodes were created, display the list of `group_vars` file names.
```
!ls -1 group_vars/*.yml | sed -e 's/^group_vars\///' -e 's/\.yml//' | sort
```
Confirm that Ansible can operate on each node. Specify the name of the UnitGroup to be operated on.
```
# (example)
# ugroup_name = 'OpenHPC'
ugroup_name =
```
Run a ping check.
```
!ansible {ugroup_name} -m ping
```
Run a ping check against the compute node group.
```
target = f'{ugroup_name}_compute'
!ansible {target} -m ping
```
## Installing Docker Engine
Install Docker Engine on the compute nodes following the steps in [Install Docker Engine on CentOS](https://docs.docker.com/engine/install/centos/).
Add the Docker repository.
```
!ansible {target} -b -m yum -a 'name=yum-utils'
!ansible {target} -b -a 'yum-config-manager --add-repo \
https://download.docker.com/linux/centos/docker-ce.repo'
```
Install Docker Engine.
```
!ansible {target} -b -m yum -a 'name=docker-ce,docker-ce-cli,containerd.io'
```
Start the Docker Engine service.
```
!ansible {target} -b -m systemd -a 'name=docker enabled=yes state=started'
```
Run the `docker info` command to confirm that Docker Engine is available.
```
!ansible {target} -a 'docker info'
```
Run the `hello-world` container to confirm that containers can be executed.
```
!ansible {target} -a 'docker run --rm hello-world'
```
## Installing the NVIDIA Container Toolkit
Install the toolkit following the steps in [Setting up NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#id2).
> The NVIDIA Container Toolkit is only needed when GPUs are available on the compute nodes. If the compute nodes have no GPUs, this section does not need to be run.
Add the repository.
```
!ansible {target} -b -m shell -a \
'distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
&& curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | tee /etc/yum.repos.d/nvidia-docker.repo'
```
Install the `nvidia-docker2` package.
```
!ansible {target} -b -m dnf -a \
'name=nvidia-docker2 update_cache=yes'
```
Restart Docker Engine so that the change takes effect.
```
!ansible {target} -b -a 'systemctl restart docker'
```
To confirm that the change has taken effect, display the Docker information. Check that `nvidia` has been added to `Runtimes` under `Server`.
An example of the output is shown below.
```
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
scan: Docker Scan (Docker Inc.)
Server:
(snip)
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux nvidia runc
Default Runtime: runc
(remainder omitted)
```
```
!ansible {target} -b -a 'docker info'
```
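As an optional extra check (not part of the original procedure), you can run a CUDA container and confirm that `nvidia-smi` sees the GPUs from inside it. The image tag below is only an example and may need to be adjusted to match your environment:
```
!ansible {target} -a 'docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi'
```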
|
github_jupyter
|
!ls -1 group_vars/*.yml | sed -e 's/^group_vars\///' -e 's/\.yml//' | sort
# (example)
# ugroup_name = 'OpenHPC'
ugroup_name =
!ansible {ugroup_name} -m ping
target = f'{ugroup_name}_compute'
!ansible {target} -m ping
!ansible {target} -b -m yum -a 'name=yum-utils'
!ansible {target} -b -a 'yum-config-manager --add-repo \
https://download.docker.com/linux/centos/docker-ce.repo'
!ansible {target} -b -m yum -a 'name=docker-ce,docker-ce-cli,containerd.io'
!ansible {target} -b -m systemd -a 'name=docker enabled=yes state=started'
!ansible {target} -a 'docker info'
!ansible {target} -a 'docker run --rm hello-world'
!ansible {target} -b -m shell -a \
'distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
&& curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | tee /etc/yum.repos.d/nvidia-docker.repo'
!ansible {target} -b -m dnf -a \
'name=nvidia-docker2 update_cache=yes'
!ansible {target} -b -a 'systemctl restart docker'
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
scan: Docker Scan (Docker Inc.)
Server:
(snip)
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux nvidia runc
Default Runtime: runc
(remainder omitted)
!ansible {target} -b -a 'docker info'
| 0.393036 | 0.812979 |