# Temporal-Difference Methods
In this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.
While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.
---
### Part 0: Explore CliffWalkingEnv
We begin by importing the necessary packages.
```
import sys
# !{sys.executable} -m pip install seaborn
import gym
import numpy as np
from collections import defaultdict, deque
import matplotlib.pyplot as plt
%matplotlib inline
import check_test
from plot_utils import plot_values
```
Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment.
```
env = gym.make('CliffWalking-v0')
```
The agent moves through a $4\times 12$ gridworld, with states numbered as follows:
```
[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35],
[36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]
```
The agent begins every episode in state `36`. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.
The agent has 4 potential actions:
```
UP = 0
RIGHT = 1
DOWN = 2
LEFT = 3
```
Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below.
```
print(env.action_space)
print(env.observation_space)
```
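As a quick sanity check (not part of the original starter code), the short cell below assumes the standard Gym `CliffWalking-v0` dynamics: every move yields a reward of `-1`, and stepping into the cliff yields `-100` and sends the agent back to the start state without ending the episode.
```
# hypothetical sanity check of the cliff dynamics described above
state = env.reset()
assert state == 36                            # episodes always start in state 36
next_state, reward, done, info = env.step(1)  # action 1 = RIGHT, straight into the cliff
print(next_state, reward, done)               # expected: 36 -100 False
```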
In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function.
_**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._
```
# define the optimal state-value function
V_opt = np.zeros((4,12))
V_opt[0] = -np.arange(3, 15)[::-1]
V_opt[1] = -np.arange(3, 15)[::-1] + 1
V_opt[2] = -np.arange(3, 15)[::-1] + 2
V_opt[3][0] = -13
plot_values(V_opt)
```
### Part 1: TD Control: Sarsa
In this section, you will write your own implementation of the Sarsa control algorithm.
Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).
The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
Please complete the function in the code cell below.
(_Feel free to define additional functions to help you to organize your code._)
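For reference, the one-step Sarsa update implemented by `update_Q` below is

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big( R_{t+1} + \gamma Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t) \big),$$

where $A_{t+1}$ is the action actually chosen by the epsilon-greedy policy in state $S_{t+1}$; at the final step of an episode the bracketed target reduces to $R_{t+1}$.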
```
import random
def update_Q(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None):
    """ updates the action-value function estimate using the most recent time step """
    old_Q = Q[state][action]
    Q_next = Q[next_state][next_action] if next_state is not None else 0
    Q[state][action] = old_Q + (alpha * ((reward + (gamma * Q_next)) - old_Q))
    return Q[state][action]
def epsilon_greedy(Q, state, nA, eps):
"""Selects epsilon-greedy action for supplied state.
Params
======
Q (dictionary): action-value function
state (int): current state
        nA (int): number of actions in the environment
        eps (float): epsilon
    """
    if random.random() > eps:   # select the greedy action with probability 1 - epsilon
        return np.argmax(Q[state])
    else:                       # otherwise, select an action uniformly at random
        return random.choice(np.arange(nA))
def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100):
nA = env.action_space.n # number of actions
Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays
# monitor performance
tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores
avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 100 == 0:
print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
sys.stdout.flush()
score = 0 # initialize score
state = env.reset() # start episode
eps = 1.0 / i_episode # set value of epsilon
action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection
while True:
next_state, reward, done, info = env.step(action) # take action A, observe R, S'
score += reward # add reward to agent's score
if not done:
next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action
Q[state][action] = update_Q(alpha, gamma, Q, \
state, action, reward, next_state, next_action)
state = next_state # S <- S'
action = next_action # A <- A'
if done:
Q[state][action] = update_Q(alpha, gamma, Q, \
state, action, reward)
tmp_scores.append(score) # append score
break
if (i_episode % plot_every == 0):
avg_scores.append(np.mean(tmp_scores))
# plot performance
plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores))
plt.xlabel('Episode Number')
plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)
plt.show()
# print best 100-episode performance
print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores))
return Q
```
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function.
If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
```
# obtain the estimated optimal policy and corresponding action-value function
Q_sarsa = sarsa(env, 5000, .01)
# print the estimated optimal policy
policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12)
check_test.run_check('td_control_check', policy_sarsa)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_sarsa)
# plot the estimated optimal state-value function
V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)])
plot_values(V_sarsa)
```
### Part 2: TD Control: Q-learning
In this section, you will write your own implementation of the Q-learning control algorithm.
Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).
The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
Please complete the function in the code cell below.
(_Feel free to define additional functions to help you to organize your code._)
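For reference, the one-step Q-learning (Sarsamax) update implemented by `update_Q` below is

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big( R_{t+1} + \gamma \max_a Q(S_{t+1}, a) - Q(S_t, A_t) \big).$$

Unlike Sarsa, the target uses the greedy action value in $S_{t+1}$, regardless of which action the epsilon-greedy behavior policy actually selects next.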
```
def update_Q(alpha, gamma, Q, state, action, reward, next_state=None):
    """ updates the action-value function estimate using the most recent time step """
    old_Q = Q[state][action]
    Q_next = np.max(Q[next_state]) if next_state is not None else 0
    Q[state][action] = old_Q + (alpha * ((reward + (gamma * Q_next)) - old_Q))
    return Q[state][action]
def epsilon_greedy(Q, state, nA, eps):
"""Selects epsilon-greedy action for supplied state.
Params
======
Q (dictionary): action-value function
state (int): current state
        nA (int): number of actions in the environment
        eps (float): epsilon
    """
    if random.random() > eps:   # select the greedy action with probability 1 - epsilon
        return np.argmax(Q[state])
    else:                       # otherwise, select an action uniformly at random
        return random.choice(np.arange(nA))
def q_learning(env, num_episodes, alpha, gamma=1.0, plot_every=100):
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(env.nA))
# monitor performance
tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores
avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 100 == 0:
print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
sys.stdout.flush()
score = 0 # initialize score
state = env.reset() # start episode
nA = env.action_space.n
eps = 1.0 / i_episode # set value of epsilon
action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection
while True:
next_state, reward, done, info = env.step(action) # take action A, observe R, S'
score += reward # add reward to agent's score
if not done:
next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action
Q[state][action] = update_Q(alpha, gamma, Q, \
state, action, reward, next_state)
state = next_state # S <- S'
action = next_action # A <- A'
if done:
Q[state][action] = update_Q(alpha, gamma, Q, \
state, action, reward)
tmp_scores.append(score) # append score
break
if (i_episode % plot_every == 0):
avg_scores.append(np.mean(tmp_scores))
# plot performance
plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores))
plt.xlabel('Episode Number')
plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)
plt.show()
# print best 100-episode performance
print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores))
return Q
```
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function.
If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
```
# obtain the estimated optimal policy and corresponding action-value function
Q_sarsamax = q_learning(env, 5000, .01, plot_every=100)
# print the estimated optimal policy
policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12))
check_test.run_check('td_control_check', policy_sarsamax)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_sarsamax)
# plot the estimated optimal state-value function
plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)])
```
### Part 3: TD Control: Expected Sarsa
In this section, you will write your own implementation of the Expected Sarsa control algorithm.
Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).
The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
Please complete the function in the code cell below.
(_Feel free to define additional functions to help you to organize your code._)
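For reference, the one-step Expected Sarsa update is

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \Big( R_{t+1} + \gamma \sum_a \pi(a \mid S_{t+1}) \, Q(S_{t+1}, a) - Q(S_t, A_t) \Big),$$

where $\pi$ is the epsilon-greedy policy derived from $Q$: every action receives probability $\epsilon / |\mathcal{A}|$, and the greedy action receives an additional $1 - \epsilon$. The `update_Q` function below computes this expectation explicitly.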
```
def update_Q(alpha, gamma, Q, state, action, reward, eps, nA, next_state=None):
    """ updates the action-value function estimate using the most recent time step,
    with the expected value of the next state under the epsilon-greedy policy """
    old_Q = Q[state][action]
    if next_state is not None:
        # action probabilities under the epsilon-greedy policy in next_state
        policy_s = np.ones(nA) * eps / nA
        policy_s[np.argmax(Q[next_state])] += 1 - eps
        Q_next = np.dot(Q[next_state], policy_s)
    else:
        Q_next = 0
    Q[state][action] = old_Q + (alpha * ((reward + (gamma * Q_next)) - old_Q))
    return Q[state][action]
def epsilon_greedy(Q, state, nA, eps):
"""Selects epsilon-greedy action for supplied state.
Params
======
Q (dictionary): action-value function
state (int): current state
        nA (int): number of actions in the environment
        eps (float): epsilon
    """
    if random.random() > eps:   # select the greedy action with probability 1 - epsilon
        return np.argmax(Q[state])
    else:                       # otherwise, select an action uniformly at random
        return random.choice(np.arange(nA))
def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100):
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(env.nA))
# monitor performance
tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores
avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 100 == 0:
print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
sys.stdout.flush()
score = 0 # initialize score
state = env.reset() # start episode
nA = env.action_space.n
eps = 1.0 / i_episode # set value of epsilon
action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection
while True:
next_state, reward, done, info = env.step(action) # take action A, observe R, S'
score += reward # add reward to agent's score
if not done:
next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action
                Q[state][action] = update_Q(alpha, gamma, Q, \
                                            state, action, reward, eps, nA, next_state)
state = next_state # S <- S'
action = next_action # A <- A'
if done:
                Q[state][action] = update_Q(alpha, gamma, Q, \
                                            state, action, reward, eps, nA)
tmp_scores.append(score) # append score
break
if (i_episode % plot_every == 0):
avg_scores.append(np.mean(tmp_scores))
# plot performance
plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores))
plt.xlabel('Episode Number')
plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)
plt.show()
# print best 100-episode performance
print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores))
return Q
```
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function.
If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
```
# obtain the estimated optimal policy and corresponding action-value function
Q_expsarsa = expected_sarsa(env, 10000, 1, plot_every=100)
# print the estimated optimal policy
policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12)
check_test.run_check('td_control_check', policy_expsarsa)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_expsarsa)
# plot the estimated optimal state-value function
plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)])
```
# kaggle_quora: single model of yuhaitao
Competition baseline.
References:
https://www.kaggle.com/shujian/single-rnn-with-4-folds-clr
https://www.kaggle.com/gmhost/gru-capsule
https://github.com/dennybritz/cnn-text-classification-tf
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
print(os.listdir("../input"))
# Any results you write to the current directory are saved as output.
```
# load package
```
import os
import time
import random
import re
from tqdm import tqdm
from IPython.display import display
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.metrics import f1_score, roc_auc_score
from collections import Counter
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
```
# global parameters
```
data_dir = "../input/"
train_file = os.path.join(data_dir, "train.csv")
test_file = os.path.join(data_dir, "test.csv")
embedding_size = 300
max_len = 50
max_features = 120000
batch_size = 512
use_local_test = False
```
# Data preprocess
```
# pad special characters with spaces so they become separate tokens
puncts = [',', '.', '"', ':', ')', '(', '-', '!', '?', '|', ';', "'", '$', '&', '/', '[', ']', '>', '%', '=', '#', '*', '+', '\\', '•', '~', '@', '£',
'·', '_', '{', '}', '©', '^', '®', '`', '<', '→', '°', '€', '™', '›', '♥', '←', '×', '§', '″', '′', 'Â', '█', '½', 'à', '…',
'“', '★', '”', '–', '●', 'â', '►', '−', '¢', '²', '¬', '░', '¶', '↑', '±', '¿', '▾', '═', '¦', '║', '―', '¥', '▓', '—', '‹', '─',
'▒', ':', '¼', '⊕', '▼', '▪', '†', '■', '’', '▀', '¨', '▄', '♫', '☆', 'é', '¯', '♦', '¤', '▲', 'è', '¸', '¾', 'Ã', '⋅', '‘', '∞',
'∙', ')', '↓', '、', '│', '(', '»', ',', '♪', '╩', '╚', '³', '・', '╦', '╣', '╔', '╗', '▬', '❤', 'ï', 'Ø', '¹', '≤', '‡', '√', ]
def clean_text(x):
x = str(x)
for punct in puncts:
if punct in x:
        # x = x.replace(punct, f' {punct} ')  # f-string version (Python 3.6+ syntax)
x = x.replace(punct, ' '+punct+' ')
return x
# mask runs of digits with '#'
def clean_numbers(x):
if bool(re.search(r'\d', x)):
x = re.sub('[0-9]{5,}', '#####', x)
x = re.sub('[0-9]{4}', '####', x)
x = re.sub('[0-9]{3}', '###', x)
x = re.sub('[0-9]{2}', '##', x)
return x
# fix common misspellings and expand contractions
mispell_dict = {"aren't" : "are not",
"can't" : "cannot",
"couldn't" : "could not",
"didn't" : "did not",
"doesn't" : "does not",
"don't" : "do not",
"hadn't" : "had not",
"hasn't" : "has not",
"haven't" : "have not",
"he'd" : "he would",
"he'll" : "he will",
"he's" : "he is",
"i'd" : "I would",
"i'd" : "I had",
"i'll" : "I will",
"i'm" : "I am",
"isn't" : "is not",
"it's" : "it is",
"it'll":"it will",
"i've" : "I have",
"let's" : "let us",
"mightn't" : "might not",
"mustn't" : "must not",
"shan't" : "shall not",
"she'd" : "she would",
"she'll" : "she will",
"she's" : "she is",
"shouldn't" : "should not",
"that's" : "that is",
"there's" : "there is",
"they'd" : "they would",
"they'll" : "they will",
"they're" : "they are",
"they've" : "they have",
"we'd" : "we would",
"we're" : "we are",
"weren't" : "were not",
"we've" : "we have",
"what'll" : "what will",
"what're" : "what are",
"what's" : "what is",
"what've" : "what have",
"where's" : "where is",
"who'd" : "who would",
"who'll" : "who will",
"who're" : "who are",
"who's" : "who is",
"who've" : "who have",
"won't" : "will not",
"wouldn't" : "would not",
"you'd" : "you would",
"you'll" : "you will",
"you're" : "you are",
"you've" : "you have",
"'re": " are",
"wasn't": "was not",
"we'll":" will",
"didn't": "did not",
"tryin'":"trying"}
def _get_mispell(mispell_dict):
mispell_re = re.compile('(%s)' % '|'.join(mispell_dict.keys()))
return mispell_dict, mispell_re
mispellings, mispellings_re = _get_mispell(mispell_dict)
def replace_typical_misspell(text):
def replace(match):
return mispellings[match.group(0)]
return mispellings_re.sub(replace, text)
def load_and_prec(use_local_test=True):
train_df = pd.read_csv(train_file)
test_df = pd.read_csv(test_file)
print("Train shape : ",train_df.shape)
print("Test shape : ",test_df.shape)
display(train_df.head())
display(test_df.head())
    # lowercase
train_df["question_text"] = train_df["question_text"].str.lower()
test_df["question_text"] = test_df["question_text"].str.lower()
    # mask digits
train_df["question_text"] = train_df["question_text"].apply(lambda x: clean_numbers(x))
test_df["question_text"] = test_df["question_text"].apply(lambda x: clean_numbers(x))
    # fix misspellings / expand contractions
train_df["question_text"] = train_df["question_text"].apply(lambda x: replace_typical_misspell(x))
test_df["question_text"] = test_df["question_text"].apply(lambda x: replace_typical_misspell(x))
    # pad punctuation with spaces
train_df["question_text"] = train_df["question_text"].apply(lambda x: clean_text(x))
test_df["question_text"] = test_df["question_text"].apply(lambda x: clean_text(x))
## fill up the missing values
train_X = train_df["question_text"].fillna("_##_").values
test_X = test_df["question_text"].fillna("_##_").values
## Tokenize the sentences
    # note: the Tokenizer lower-cases all text by default
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(list(train_X))
train_X = tokenizer.texts_to_sequences(train_X)
test_X = tokenizer.texts_to_sequences(test_X)
## Get the target values
train_Y = train_df['target'].values
print(np.sum(train_Y))
    # # drop the 30 most frequent tokens before padding
# train_cut = []
# test_cut = []
# for x in train_X:
# train_cut.append([i for i in x if i>30])
# for x in test_X:
# test_cut.append([i for i in x if i>30])
# train_X = train_cut
# test_X = test_cut
## Pad the sentences
train_X = pad_sequences(train_X, maxlen=max_len, padding="post", truncating="post")
test_X = pad_sequences(test_X, maxlen=max_len, padding="post", truncating="post")
    # # # drop the 40 most frequent words (set them to pad id 0)
# # train_X = np.where(train_X>=40, train_X, 0)
# # test_X = np.where(test_X>=40, test_X, 0)
#shuffling the data
np.random.seed(20190101)
trn_idx = np.random.permutation(len(train_X))
train_X = train_X[trn_idx]
train_Y = train_Y[trn_idx]
    # optionally hold out a local test set
if use_local_test:
train_X, local_test_X = (train_X[:-4*len(test_X)], train_X[-4*len(test_X):])
train_Y, local_test_Y = (train_Y[:-4*len(test_X)], train_Y[-4*len(test_X):])
else:
local_test_X = np.zeros(shape=[1,max_len], dtype=np.int32)
local_test_Y = np.zeros(shape=[1], dtype=np.int32)
print(train_X.shape)
print(local_test_X.shape)
print(test_X.shape)
print(len(tokenizer.word_index))
return train_X, test_X, train_Y, local_test_X, local_test_Y, tokenizer.word_index
# load_and_prec()
```
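A minimal illustration of the cleaning pipeline defined above, using a made-up sentence; the strings in the comments are what the functions above should produce at each step.
```
# hypothetical example sentence run through the same steps as load_and_prec
sample = "Isn't 12345 dollars enough?!"
sample = sample.lower()                     # "isn't 12345 dollars enough?!"
sample = clean_numbers(sample)              # "isn't ##### dollars enough?!"
sample = replace_typical_misspell(sample)   # "is not ##### dollars enough?!"
sample = clean_text(sample)                 # punctuation (including '#') padded with spaces
print(sample)
```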
# load embeddings
```
def load_glove(word_index):
EMBEDDING_FILE = '../input/embeddings/glove.840B.300d/glove.840B.300d.txt'
def get_coefs(word,*arr): return word, np.asarray(arr, dtype='float32')
embeddings_index = dict(get_coefs(*o.split(" ")) for o in open(EMBEDDING_FILE))
all_embs = np.stack(embeddings_index.values())
emb_mean,emb_std = all_embs.mean(), all_embs.std()
embed_size = all_embs.shape[1]
# word_index = tokenizer.word_index
nb_words = min(max_features, len(word_index))
embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size))
for word, i in word_index.items():
if i >= max_features: continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None: embedding_matrix[i] = embedding_vector
return embedding_matrix
def load_fasttext(word_index):
"""
这个加载词向量还没有细看
"""
EMBEDDING_FILE = '../input/embeddings/wiki-news-300d-1M/wiki-news-300d-1M.vec'
def get_coefs(word,*arr):
return word, np.asarray(arr, dtype='float32')
embeddings_index = dict(get_coefs(*o.split(" ")) for o in open(EMBEDDING_FILE) if len(o)>100)
all_embs = np.stack(embeddings_index.values())
emb_mean,emb_std = all_embs.mean(), all_embs.std()
embed_size = all_embs.shape[1]
# word_index = tokenizer.word_index
nb_words = min(max_features, len(word_index))
embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size))
for word, i in word_index.items():
if i >= max_features: continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None: embedding_matrix[i] = embedding_vector
return embedding_matrix
def load_para(word_index):
EMBEDDING_FILE = '../input/embeddings/paragram_300_sl999/paragram_300_sl999.txt'
def get_coefs(word,*arr): return word, np.asarray(arr, dtype='float32')
embeddings_index = dict(get_coefs(*o.split(" ")) for o in open(EMBEDDING_FILE, encoding="utf8", errors='ignore') if len(o)>100 and o.split(" ")[0] in word_index)
all_embs = np.stack(embeddings_index.values())
emb_mean,emb_std = all_embs.mean(), all_embs.std()
embed_size = all_embs.shape[1]
embedding_matrix = np.random.normal(emb_mean, emb_std, (max_features, embed_size))
for word, i in word_index.items():
if i >= max_features: continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None: embedding_matrix[i] = embedding_vector
return embedding_matrix
```
# Models
text_rnn(Bi-GRU)
```
# dense layer
def dense(inputs, hidden, use_bias=True, scope="dense"):
"""
全连接层
"""
with tf.variable_scope(scope):
shape = tf.shape(inputs)
dim = inputs.get_shape().as_list()[-1]
out_shape = [shape[idx] for idx in range(
len(inputs.get_shape().as_list()) - 1)] + [hidden]
        # reshape 3-D inputs to 2-D for the matmul
flat_inputs = tf.reshape(inputs, [-1, dim])
W = tf.get_variable("W", [dim, hidden], initializer=tf.contrib.layers.xavier_initializer())
res = tf.matmul(flat_inputs, W)
if use_bias:
b = tf.get_variable(
"b", [hidden], initializer=tf.constant_initializer(0.1))
res = tf.nn.bias_add(res, b)
        # out_shape is the input shape with the last dimension replaced by hidden
res = tf.reshape(res, out_shape)
return res
# dot-product attention
def dot_attention(inputs, memory, mask, hidden, keep_prob, scope="dot_attention"):
"""
门控attention层
"""
def softmax_mask(val, mask):
return -1e30 * (1 - tf.cast(mask, tf.float32)) + val
with tf.variable_scope(scope):
        JX = tf.shape(inputs)[1]  # dim 1 of inputs, i.e. the sequence length (c_maxlen)
with tf.variable_scope("attention"):
            # inputs_ has shape [batch_size, c_maxlen, hidden]
inputs_ = tf.nn.relu(
dense(inputs, hidden, use_bias=False, scope="inputs"))
memory_ = tf.nn.relu(
dense(memory, hidden, use_bias=False, scope="memory"))
            # batched matrix multiplication; the result has shape [batch_size, c_maxlen, q_maxlen]
outputs = tf.matmul(inputs_, tf.transpose(
memory_, [0, 2, 1])) / (hidden ** 0.5)
            # tile the mask to the same shape as outputs; a possible improvement is to mask both inputs and memory
mask = tf.tile(tf.expand_dims(mask, axis=1), [1, JX, 1])
logits = tf.nn.softmax(softmax_mask(outputs, mask))
outputs = tf.matmul(logits, memory)
            # res: [batch_size, c_maxlen, inputs_dim + memory_dim]
res = tf.concat([inputs, outputs], axis=2)
return res
# with tf.variable_scope("gate"):
# """
# attention * gate
# """
# dim = res.get_shape().as_list()[-1]
# d_res = dropout(res, keep_prob=keep_prob, is_train=is_train)
# gate = tf.nn.sigmoid(dense(d_res, dim, use_bias=False))
    # return res * gate  # element-wise product
# multi-layer bidirectional GRU class, accelerated with cuDNN
class cudnn_gru:
def __init__(self, num_layers, num_units, input_size, scope=None):
self.num_layers = num_layers
self.grus = []
self.inits = []
self.dropout_mask = []
self.scope = scope
for layer in range(num_layers):
input_size_ = input_size if layer == 0 else 2 * num_units
gru_fw = tf.contrib.cudnn_rnn.CudnnGRU(
1, num_units, name="f_cudnn_gru")
gru_bw = tf.contrib.cudnn_rnn.CudnnGRU(
1, num_units, name="b_cudnn_gru")
self.grus.append((gru_fw, gru_bw, ))
def __call__(self, inputs, seq_len, keep_prob, concat_layers=True):
        # cuDNN GRU expects time-major tensors, so swap to [time, batch, dim]
outputs = [tf.transpose(inputs, [1, 0, 2])]
out_states = []
with tf.variable_scope(self.scope):
for layer in range(self.num_layers):
gru_fw, gru_bw = self.grus[layer]
with tf.variable_scope("fw_{}".format(layer)):
out_fw, (fw_state,) = gru_fw(outputs[-1])
with tf.variable_scope("bw_{}".format(layer)):
inputs_bw = tf.reverse_sequence(outputs[-1], seq_lengths=seq_len, seq_dim=0, batch_dim=1)
out_bw, (bw_state,) = gru_bw(inputs_bw)
out_bw = tf.reverse_sequence(out_bw, seq_lengths=seq_len, seq_dim=0, batch_dim=1)
outputs.append(tf.concat([out_fw, out_bw], axis=2))
out_states.append(tf.concat([fw_state, bw_state], axis=-1))
if concat_layers:
res = tf.concat(outputs[1:], axis=2)
final_state = tf.squeeze(tf.transpose(tf.concat(out_states, axis=0), [1,0,2]), axis=1)
else:
res = outputs[-1]
final_state = tf.squeeze(out_states[-1], axis=0)
res = tf.transpose(res, [1, 0, 2])
return res, final_state
class model_text_rnn_attention(object):
"""
使用简单的双向GRU,并接一个attention层。
"""
def __init__(self, embedding_matrix, sequence_length=50, num_classes=1,
embedding_size=300, trainable=True):
# Placeholders for input, output and dropout
self.input_x = tf.placeholder(tf.int32, [None, sequence_length], name="input_x")
self.input_y = tf.placeholder(tf.int32, [None], name="input_y")
self.keep_prob = tf.placeholder(tf.float32, name="keep_prob")
# Some variables
self.embedding_matrix = tf.get_variable("embedding_matrix", initializer=tf.constant(
embedding_matrix, dtype=tf.float32), trainable=False)
self.global_step = tf.get_variable('global_step', shape=[], dtype=tf.int32,
initializer=tf.constant_initializer(0), trainable=False)
with tf.name_scope("process"):
self.seq_len = tf.reduce_sum(tf.cast(tf.cast(self.input_x, dtype=tf.bool), dtype=tf.int32), axis=1, name="seq_len")
self.mask = tf.cast(self.input_x, dtype=tf.bool)
# The structure of the model
self.layers(num_classes)
# optimizer
if trainable:
self.learning_rate = tf.train.exponential_decay(
learning_rate=0.001, global_step=self.global_step, decay_steps=1000, decay_rate=0.95)
self.opt = tf.train.AdamOptimizer(learning_rate=self.learning_rate, epsilon=1e-8)
self.train_op = self.opt.minimize(self.loss, global_step=self.global_step)
def layers(self, num_classes):
# Embedding layer
with tf.variable_scope("embedding"):
self.embedding_inputs = tf.nn.embedding_lookup(self.embedding_matrix, self.input_x)
self.embedding_inputs = tf.nn.dropout(self.embedding_inputs, self.keep_prob)
# Bi-GRU Encoder
with tf.variable_scope("Bi-GRU"):
bi_rnn = cudnn_gru(num_layers=1, num_units=128, input_size=self.embedding_inputs.get_shape().as_list()[-1], scope="encoder")
rnn_output, _ = bi_rnn(self.embedding_inputs, seq_len=self.seq_len, keep_prob=self.keep_prob)
# shape: [batch_size, 2*hidden]
self.rnn_out = tf.nn.dropout(rnn_output, keep_prob=self.keep_prob)
with tf.variable_scope("Attention_Layer"):
"""
将rnn的输出再做self-attention
"""
att = dot_attention(inputs=self.rnn_out, memory=self.rnn_out, mask=self.mask, hidden=128,
keep_prob=self.keep_prob)
# pooling
att_out_1 = tf.reduce_mean(att, axis=2) # shape: [batch_size, 50]
att_out_2 = tf.reduce_max(att, axis=2)
self.att_out = tf.concat([att_out_1, att_out_2], axis=1)
with tf.variable_scope("fully_connected"):
"""
全连接层
"""
# fc_W1 = tf.get_variable(
# shape=[self.rnn_out.get_shape().as_list()[1], 128],
# initializer=tf.contrib.layers.xavier_initializer(),
# name="fc_w1")
# fc_b1 = tf.get_variable(shape=[128], initializer=tf.constant_initializer(0.1), name="fc_b1")
# fc_1 = tf.nn.relu(tf.nn.bias_add(tf.matmul(self.rnn_out, fc_W1), fc_b1))
fc_1_drop = tf.nn.dropout(self.att_out, self.keep_prob)
fc_W2 = tf.get_variable(
shape=[self.att_out.get_shape().as_list()[1], num_classes],
initializer=tf.contrib.layers.variance_scaling_initializer(),
name="fc_w2")
fc_b2 = tf.get_variable(shape=[num_classes], initializer=tf.constant_initializer(0.1), name="fc_b2")
self.logits = tf.squeeze(tf.nn.bias_add(tf.matmul(fc_1_drop, fc_W2), fc_b2), name="logits")
with tf.variable_scope("sigmoid_and_loss"):
"""
用sigmoid函数加阈值代替softmax的多分类
"""
self.sigmoid = tf.nn.sigmoid(self.logits)
self.loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits=self.logits, labels=tf.cast(self.input_y, dtype=tf.float32)))
```
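As a rough reading aid (not part of the original notebook), the sketch below walks through the tensor shapes in `model_text_rnn_attention`, assuming the defaults above: `max_len = 50`, `embedding_size = 300`, and 128 GRU units per direction.
```
# approximate tensor shapes through the model (B = batch size)
# input_x          : [B, 50]        int32 token ids
# embedding_inputs : [B, 50, 300]   after embedding lookup and dropout
# rnn_out          : [B, 50, 256]   Bi-GRU outputs, forward and backward concatenated
# att              : [B, 50, 512]   rnn_out concatenated with the attended memory
# att_out          : [B, 100]       mean- and max-pooling over the feature axis, concatenated
# logits / sigmoid : [B]            single output unit, squeezed
```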
# Training Tools
```
# batch generator
def batch_generator(train_X, train_Y, batch_size, is_train=True):
"""
batch生成器:
在is_train为true的情况下,补充batch,并shuffle
"""
data_number = train_X.shape[0]
batch_count = 0
while True:
if batch_count * batch_size + batch_size > data_number:
            # handling of the last (possibly partial) batch
if is_train:
                # drop the leftover tail and start a new pass
                # shuffle
np.random.seed(2018)
trn_idx = np.random.permutation(data_number)
train_X = train_X[trn_idx]
train_Y = train_Y[trn_idx]
one_batch_X = train_X[0:batch_size]
one_batch_Y = train_Y[0:batch_size]
batch_count = 1
yield one_batch_X, one_batch_Y
else:
one_batch_X = train_X[batch_count * batch_size:data_number]
one_batch_Y = train_Y[batch_count * batch_size:data_number]
batch_count = 0
yield one_batch_X, one_batch_Y
else:
one_batch_X = train_X[batch_count * batch_size:batch_count * batch_size + batch_size]
one_batch_Y = train_Y[batch_count * batch_size:batch_count * batch_size + batch_size]
batch_count += 1
yield one_batch_X, one_batch_Y
# undersample the positive (majority) class and augment the negative (minority) class; augmentation is currently a random shuffle of word order
def data_augmentation(X, Y, under_sample=100000, aug_num=3):
"""
under_sample: 欠采样个数
aug: 数据增强倍数
"""
pos_X = []
neg_X = []
for i in range(X.shape[0]):
if Y[i] == 1:
neg_X.append(list(X[i]))
else:
pos_X.append(list(X[i]))
    # undersample the positive samples
random.shuffle(pos_X)
pos_X = pos_X[:-under_sample]
    # augment the positive samples by shuffling word order
pos_X_aug = []
for i in range(200000):
aug = []
for x in pos_X[i]:
if x != 0:
aug.append(x)
else:
break
random.shuffle(aug)
aug += [0] * (max_len-len(aug))
pos_X_aug.append(aug)
pos_X.extend(pos_X_aug)
print(len(pos_X))
    # augment the negative samples
neg_X_aug = []
for i in range(aug_num):
for neg in neg_X:
aug = []
for x in neg:
if x != 0:
aug.append(x)
else:
break
random.shuffle(aug)
aug += [0] * (max_len-len(aug))
neg_X_aug.append(aug)
neg_X.extend(neg_X_aug)
print(len(neg_X))
pos_Y = np.zeros(shape=[len(pos_X)], dtype=np.int32)
neg_Y = np.ones(shape=[len(neg_X)], dtype=np.int32)
pos_X.extend(neg_X)
X_out = np.array(pos_X, dtype=np.int32)
Y_out = np.append(pos_Y, neg_Y)
print(X_out.shape)
#shuffling the data
np.random.seed(2018)
trn_idx = np.random.permutation(len(X_out))
X_out = X_out[trn_idx]
Y_out = Y_out[trn_idx]
print(X_out.shape)
print(Y_out.shape)
return X_out, Y_out
# search for the best classification threshold
def bestThreshold(y,y_preds):
tmp = [0,0,0] # idx, cur, max
delta = 0
for tmp[0] in tqdm(np.arange(0.1, 0.501, 0.01)):
tmp[1] = metrics.f1_score(y, np.array(y_preds)>tmp[0])
if tmp[1] > tmp[2]:
delta = tmp[0]
tmp[2] = tmp[1]
print('best threshold is {:.4f} with F1 score: {:.4f}'.format(delta, tmp[2]))
return delta , tmp[2]
```
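A tiny usage example of `bestThreshold`; the labels and sigmoid scores below are hypothetical, purely to show the call.
```
# hypothetical labels and predicted probabilities
y_true = np.array([0, 0, 1, 1, 1])
y_prob = [0.05, 0.40, 0.35, 0.80, 0.90]
thr, f1 = bestThreshold(y_true, y_prob)
print(thr, f1)
```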
# Main part
```
# load the data and the word embeddings
train_X, test_X, train_Y, local_test_X, local_test_Y, word_index = load_and_prec(use_local_test)
# embedding_matrix_1 = load_glove(word_index)
embedding_matrix = load_fasttext(word_index)
# embedding_matrix = load_para(word_index)
# embedding_matrix = np.mean([embedding_matrix_1, embedding_matrix_3], axis = 0)
np.shape(embedding_matrix)
# embedding_matrix = np.zeros(shape=[100,300],dtype=np.float32)
# multi-fold training, cross-validation averaging, and testing
# build the cross-validation splits
DATA_SPLIT_SEED = 20190101
splits = list(StratifiedKFold(n_splits=5, shuffle=True, random_state=DATA_SPLIT_SEED).split(train_X, train_Y))
# test batch
test_batch = batch_generator(test_X, np.zeros(shape=[test_X.shape[0]], dtype=np.int32), batch_size, False)
local_test_batch = batch_generator(local_test_X, local_test_Y, batch_size, False)
# final outputs
train_preds = np.zeros(len(train_X), dtype=np.float32)
test_preds = np.zeros((len(test_X), len(splits)), dtype=np.float32)
test_preds_local = np.zeros((len(local_test_X), len(splits)), dtype=np.float32)
best_threshold = 0.33
# train across the folds
for i, (train_idx, valid_idx) in enumerate(splits):
print("fold:{}".format(i+1))
X_train = train_X[train_idx]
Y_train = train_Y[train_idx]
X_val = train_X[valid_idx]
Y_val = train_Y[valid_idx]
    # # data augmentation
# X_train, Y_train = data_augmentation(X_train, Y_train)
# print(Y_train[:100])
# print(Y_train[-100:])
    # training batch generators
train_batch = batch_generator(X_train, Y_train, batch_size, True)
val_batch = batch_generator(X_val, Y_val, batch_size, False)
    # keep the best result
best_val_f1 = 0.0
best_val_loss = 99999.99999
best_val_fold = []
best_test_fold = []
best_local_test_fold = []
# 训练 & 验证 & 测试
with tf.Graph().as_default():
sess_config = tf.ConfigProto(allow_soft_placement=True)
sess_config.gpu_options.allow_growth = True
with tf.Session(config=sess_config) as sess:
writer = tf.summary.FileWriter("./log/", sess.graph)
            # build the model
model = model_text_rnn_attention(embedding_matrix=embedding_matrix, sequence_length=max_len)
sess.run(tf.global_variables_initializer())
train_loss_sum = 0.0
start_time = time.time()
for go in range(20000):
steps = sess.run(model.global_step) + 1
                # training step
train_batch_X, train_batch_Y = next(train_batch)
feed = {model.input_x:train_batch_X, model.input_y:train_batch_Y, model.keep_prob:0.7}
loss, train_op = sess.run([model.loss, model.train_op], feed_dict=feed)
train_loss_sum += loss
# 验证 & 测试
if steps % 1000 == 0:
val_predictions = []
val_loss_sum = 0.0
for _ in range(X_val.shape[0] // batch_size + 1):
val_batch_X, val_batch_Y = next(val_batch)
feed_val = {model.input_x:val_batch_X, model.input_y:val_batch_Y, model.keep_prob:1.0}
val_loss, val_sigmoid = sess.run([model.loss, model.sigmoid], feed_dict=feed_val)
val_predictions.extend(val_sigmoid)
val_loss_sum += val_loss
# val_f1 = metrics.f1_score(Y_val, np.array(val_predictions))
# val_pre = metrics.precision_score(Y_val, np.array(val_predictions))
# val_recall = metrics.recall_score(Y_val, np.array(val_predictions))
val_loss_sum = val_loss_sum / (X_val.shape[0] // batch_size + 1)
# print("steps:{}, train_loss:{:.5f}, val_loss:{:.5f}, val_F1:{:.5f}, val_pre:{:.5f}, val_recall:{:.5f}".format(
# steps, float(train_loss_sum / 1000), float(val_loss_sum), float(val_f1), float(val_pre), float(val_recall)))
end_time = time.time()
print("steps:{}, train_loss:{:.5f}, val_loss:{:.5f}, time:{:.5f}".format(
steps, float(train_loss_sum / 1000), float(val_loss_sum), end_time-start_time))
start_time = time.time()
                    # write to TensorBoard
train_loss_write = tf.Summary(value=[tf.Summary.Value(tag="model/train_loss", \
simple_value=train_loss_sum / 1000), ])
writer.add_summary(train_loss_write, steps)
val_loss_write = tf.Summary(value=[tf.Summary.Value(tag="model/val_loss", simple_value=val_loss_sum), ])
writer.add_summary(val_loss_write, steps)
# val_f1_write = tf.Summary(value=[tf.Summary.Value(tag="index/val_f1", simple_value=val_f1), ])
# writer.add_summary(val_f1_write, steps)
# val_pre_write = tf.Summary(value=[tf.Summary.Value(tag="index/val_precision", simple_value=val_pre), ])
# writer.add_summary(val_pre_write, steps)
# val_recall_write = tf.Summary(value=[tf.Summary.Value(tag="index/val_recall", simple_value=val_recall), ])
# writer.add_summary(val_recall_write, steps)
writer.flush()
# train loss
train_loss_sum = 0.0
                    # # test, keeping the predictions from the step with the best validation F1
# if val_f1 > best_val_f1:
# best_val_f1 = val_f1
# best_test = []
# for _ in range(test_X.shape[0] // batch_size + 1):
# test_batch_X, _ = next(test_batch)
# feed_test = {model.input_x:test_batch_X, model.keep_prob:1.0}
# test_classes = sess.run(model.classes, feed_dict=feed_test)
# best_test.extend(test_classes)
# print("test done!")
                    # test, keeping the predictions from the step with the lowest validation loss
if val_loss_sum < best_val_loss and steps >= 10000:
best_val_loss = val_loss_sum
best_val_fold = val_predictions
best_test_fold = []
best_local_test_fold = []
                        # online (submission) test set
for _ in range(test_X.shape[0] // batch_size + 1):
test_batch_X, _ = next(test_batch)
feed_test = {model.input_x:test_batch_X, model.keep_prob:1.0}
test_sigmoid = sess.run(model.sigmoid, feed_dict=feed_test)
best_test_fold.extend(test_sigmoid)
                        # local (held-out) test set
if use_local_test:
for _ in range(local_test_X.shape[0] // batch_size + 1):
local_test_batch_X, _ = next(local_test_batch)
feed_local_test = {model.input_x:local_test_batch_X, model.keep_prob:1.0}
local_test_sigmoid = sess.run(model.sigmoid, feed_dict=feed_local_test)
best_local_test_fold.extend(local_test_sigmoid)
print("test done!")
    # update the final predictions
best_threshold, best_f1 = bestThreshold(Y_val, best_val_fold)
# train_preds[valid_idx] = np.array(best_val_fold)
test_preds[:, i] = np.array(best_test_fold)
if use_local_test:
test_preds_local[:, i] = np.array(best_local_test_fold)
# print("fold:{}, threshold:{}, F1_score:{:.5f}".format(i, best_threshold_fold, \
# metrics.f1_score(Y_val, (np.array(best_val_fold)>best_threshold_fold).astype(int)))))
    # single model: only train/test the first fold
break
# post-processing and submission
if use_local_test:
print("local_test_f1:{:.5f}".format(metrics.f1_score(local_test_Y, (test_preds_local.mean(axis=1) > best_threshold))))
sub = pd.read_csv('../input/sample_submission.csv')
sub["prediction"] = (test_preds.mean(axis=1)*5 > best_threshold).astype(int)
sub.to_csv("submission.csv", index=False)
pd.DataFrame(test_preds_local).corr()
```
|
github_jupyter
|
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
print(os.listdir("../input"))
# Any results you write to the current directory are saved as output.
import os
import time
import random
import re
from tqdm import tqdm
from IPython.display import display
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.metrics import f1_score, roc_auc_score
from collections import Counter
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
data_dir = "../input/"
train_file = os.path.join(data_dir, "train.csv")
test_file = os.path.join(data_dir, "test.csv")
embedding_size = 300
max_len = 50
max_features = 120000
batch_size = 512
use_local_test = False
# 将特殊字符单独挑出
puncts = [',', '.', '"', ':', ')', '(', '-', '!', '?', '|', ';', "'", '$', '&', '/', '[', ']', '>', '%', '=', '#', '*', '+', '\\', '•', '~', '@', '£',
'·', '_', '{', '}', '©', '^', '®', '`', '<', '→', '°', '€', '™', '›', '♥', '←', '×', '§', '″', '′', 'Â', '█', '½', 'à', '…',
'“', '★', '”', '–', '●', 'â', '►', '−', '¢', '²', '¬', '░', '¶', '↑', '±', '¿', '▾', '═', '¦', '║', '―', '¥', '▓', '—', '‹', '─',
'▒', ':', '¼', '⊕', '▼', '▪', '†', '■', '’', '▀', '¨', '▄', '♫', '☆', 'é', '¯', '♦', '¤', '▲', 'è', '¸', '¾', 'Ã', '⋅', '‘', '∞',
'∙', ')', '↓', '、', '│', '(', '»', ',', '♪', '╩', '╚', '³', '・', '╦', '╣', '╔', '╗', '▬', '❤', 'ï', 'Ø', '¹', '≤', '‡', '√', ]
def clean_text(x):
x = str(x)
for punct in puncts:
if punct in x:
# x = x.replace(punct, f' {punct} ') # 这是python3.6语法
x = x.replace(punct, ' '+punct+' ')
return x
# 清洗数字
def clean_numbers(x):
if bool(re.search(r'\d', x)):
x = re.sub('[0-9]{5,}', '#####', x)
x = re.sub('[0-9]{4}', '####', x)
x = re.sub('[0-9]{3}', '###', x)
x = re.sub('[0-9]{2}', '##', x)
return x
# 清洗拼写
mispell_dict = {"aren't" : "are not",
"can't" : "cannot",
"couldn't" : "could not",
"didn't" : "did not",
"doesn't" : "does not",
"don't" : "do not",
"hadn't" : "had not",
"hasn't" : "has not",
"haven't" : "have not",
"he'd" : "he would",
"he'll" : "he will",
"he's" : "he is",
"i'd" : "I would",
"i'd" : "I had",
"i'll" : "I will",
"i'm" : "I am",
"isn't" : "is not",
"it's" : "it is",
"it'll":"it will",
"i've" : "I have",
"let's" : "let us",
"mightn't" : "might not",
"mustn't" : "must not",
"shan't" : "shall not",
"she'd" : "she would",
"she'll" : "she will",
"she's" : "she is",
"shouldn't" : "should not",
"that's" : "that is",
"there's" : "there is",
"they'd" : "they would",
"they'll" : "they will",
"they're" : "they are",
"they've" : "they have",
"we'd" : "we would",
"we're" : "we are",
"weren't" : "were not",
"we've" : "we have",
"what'll" : "what will",
"what're" : "what are",
"what's" : "what is",
"what've" : "what have",
"where's" : "where is",
"who'd" : "who would",
"who'll" : "who will",
"who're" : "who are",
"who's" : "who is",
"who've" : "who have",
"won't" : "will not",
"wouldn't" : "would not",
"you'd" : "you would",
"you'll" : "you will",
"you're" : "you are",
"you've" : "you have",
"'re": " are",
"wasn't": "was not",
"we'll":" will",
"didn't": "did not",
"tryin'":"trying"}
def _get_mispell(mispell_dict):
mispell_re = re.compile('(%s)' % '|'.join(mispell_dict.keys()))
return mispell_dict, mispell_re
mispellings, mispellings_re = _get_mispell(mispell_dict)
def replace_typical_misspell(text):
def replace(match):
return mispellings[match.group(0)]
return mispellings_re.sub(replace, text)
def load_and_prec(use_local_test=True):
train_df = pd.read_csv(train_file)
test_df = pd.read_csv(test_file)
print("Train shape : ",train_df.shape)
print("Test shape : ",test_df.shape)
display(train_df.head())
display(test_df.head())
# 小写
train_df["question_text"] = train_df["question_text"].str.lower()
test_df["question_text"] = test_df["question_text"].str.lower()
# 数字清洗
train_df["question_text"] = train_df["question_text"].apply(lambda x: clean_numbers(x))
test_df["question_text"] = test_df["question_text"].apply(lambda x: clean_numbers(x))
# 清洗拼写
train_df["question_text"] = train_df["question_text"].apply(lambda x: replace_typical_misspell(x))
test_df["question_text"] = test_df["question_text"].apply(lambda x: replace_typical_misspell(x))
# 数据清洗
train_df["question_text"] = train_df["question_text"].apply(lambda x: clean_text(x))
test_df["question_text"] = test_df["question_text"].apply(lambda x: clean_text(x))
## fill up the missing values
train_X = train_df["question_text"].fillna("_##_").values
test_X = test_df["question_text"].fillna("_##_").values
## Tokenize the sentences
# 这个方法把所有字母都小写了
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(list(train_X))
train_X = tokenizer.texts_to_sequences(train_X)
test_X = tokenizer.texts_to_sequences(test_X)
## Get the target values
train_Y = train_df['target'].values
print(np.sum(train_Y))
# # 在pad之前把前30个词去掉
# train_cut = []
# test_cut = []
# for x in train_X:
# train_cut.append([i for i in x if i>30])
# for x in test_X:
# test_cut.append([i for i in x if i>30])
# train_X = train_cut
# test_X = test_cut
## Pad the sentences
train_X = pad_sequences(train_X, maxlen=max_len, padding="post", truncating="post")
test_X = pad_sequences(test_X, maxlen=max_len, padding="post", truncating="post")
# # # 把最常用的40个词去掉,pad为0
# # train_X = np.where(train_X>=40, train_X, 0)
# # test_X = np.where(test_X>=40, test_X, 0)
#shuffling the data
np.random.seed(20190101)
trn_idx = np.random.permutation(len(train_X))
train_X = train_X[trn_idx]
train_Y = train_Y[trn_idx]
    # optionally hold out a local test set
if use_local_test:
train_X, local_test_X = (train_X[:-4*len(test_X)], train_X[-4*len(test_X):])
train_Y, local_test_Y = (train_Y[:-4*len(test_X)], train_Y[-4*len(test_X):])
else:
local_test_X = np.zeros(shape=[1,max_len], dtype=np.int32)
local_test_Y = np.zeros(shape=[1], dtype=np.int32)
print(train_X.shape)
print(local_test_X.shape)
print(test_X.shape)
print(len(tokenizer.word_index))
return train_X, test_X, train_Y, local_test_X, local_test_Y, tokenizer.word_index
# load_and_prec()
def load_glove(word_index):
EMBEDDING_FILE = '../input/embeddings/glove.840B.300d/glove.840B.300d.txt'
def get_coefs(word,*arr): return word, np.asarray(arr, dtype='float32')
embeddings_index = dict(get_coefs(*o.split(" ")) for o in open(EMBEDDING_FILE))
all_embs = np.stack(embeddings_index.values())
emb_mean,emb_std = all_embs.mean(), all_embs.std()
embed_size = all_embs.shape[1]
# word_index = tokenizer.word_index
nb_words = min(max_features, len(word_index))
embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size))
for word, i in word_index.items():
if i >= max_features: continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None: embedding_matrix[i] = embedding_vector
return embedding_matrix
def load_fasttext(word_index):
"""
    This embedding loader has not been reviewed in detail yet.
"""
EMBEDDING_FILE = '../input/embeddings/wiki-news-300d-1M/wiki-news-300d-1M.vec'
def get_coefs(word,*arr):
return word, np.asarray(arr, dtype='float32')
embeddings_index = dict(get_coefs(*o.split(" ")) for o in open(EMBEDDING_FILE) if len(o)>100)
all_embs = np.stack(embeddings_index.values())
emb_mean,emb_std = all_embs.mean(), all_embs.std()
embed_size = all_embs.shape[1]
# word_index = tokenizer.word_index
nb_words = min(max_features, len(word_index))
embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size))
for word, i in word_index.items():
if i >= max_features: continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None: embedding_matrix[i] = embedding_vector
return embedding_matrix
def load_para(word_index):
EMBEDDING_FILE = '../input/embeddings/paragram_300_sl999/paragram_300_sl999.txt'
def get_coefs(word,*arr): return word, np.asarray(arr, dtype='float32')
embeddings_index = dict(get_coefs(*o.split(" ")) for o in open(EMBEDDING_FILE, encoding="utf8", errors='ignore') if len(o)>100 and o.split(" ")[0] in word_index)
all_embs = np.stack(embeddings_index.values())
emb_mean,emb_std = all_embs.mean(), all_embs.std()
embed_size = all_embs.shape[1]
embedding_matrix = np.random.normal(emb_mean, emb_std, (max_features, embed_size))
for word, i in word_index.items():
if i >= max_features: continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None: embedding_matrix[i] = embedding_vector
return embedding_matrix
# dense layer
def dense(inputs, hidden, use_bias=True, scope="dense"):
"""
    Fully connected layer.
"""
with tf.variable_scope(scope):
shape = tf.shape(inputs)
dim = inputs.get_shape().as_list()[-1]
out_shape = [shape[idx] for idx in range(
len(inputs.get_shape().as_list()) - 1)] + [hidden]
        # reshape the 3-D inputs to 2-D
flat_inputs = tf.reshape(inputs, [-1, dim])
W = tf.get_variable("W", [dim, hidden], initializer=tf.contrib.layers.xavier_initializer())
res = tf.matmul(flat_inputs, W)
if use_bias:
b = tf.get_variable(
"b", [hidden], initializer=tf.constant_initializer(0.1))
res = tf.nn.bias_add(res, b)
        # out_shape is the input shape with the last dimension replaced by hidden
res = tf.reshape(res, out_shape)
return res
# dot-product attention
def dot_attention(inputs, memory, mask, hidden, keep_prob, scope="dot_attention"):
"""
    Gated attention layer (the gating branch is currently commented out below).
"""
def softmax_mask(val, mask):
return -1e30 * (1 - tf.cast(mask, tf.float32)) + val
with tf.variable_scope(scope):
        JX = tf.shape(inputs)[1]  # dimension 1 of inputs, i.e. c_maxlen
with tf.variable_scope("attention"):
            # inputs_ shape: [batch_size, c_maxlen, hidden]
inputs_ = tf.nn.relu(
dense(inputs, hidden, use_bias=False, scope="inputs"))
memory_ = tf.nn.relu(
dense(memory, hidden, use_bias=False, scope="memory"))
            # batched matrix multiplication; result shape: [batch_size, c_maxlen, q_maxlen]
outputs = tf.matmul(inputs_, tf.transpose(
memory_, [0, 2, 1])) / (hidden ** 0.5)
            # tile the mask to the same shape as outputs; a possible improvement is to mask both inputs and memory
mask = tf.tile(tf.expand_dims(mask, axis=1), [1, JX, 1])
logits = tf.nn.softmax(softmax_mask(outputs, mask))
outputs = tf.matmul(logits, memory)
# res:[batch_size, c_maxlen, 12*hidden]
res = tf.concat([inputs, outputs], axis=2)
return res
# with tf.variable_scope("gate"):
# """
# attention * gate
# """
# dim = res.get_shape().as_list()[-1]
# d_res = dropout(res, keep_prob=keep_prob, is_train=is_train)
# gate = tf.nn.sigmoid(dense(d_res, dim, use_bias=False))
    #     return res * gate  # element-wise multiplication
# A multi-layer bidirectional GRU class, accelerated with cuDNN
class cudnn_gru:
def __init__(self, num_layers, num_units, input_size, scope=None):
self.num_layers = num_layers
self.grus = []
self.inits = []
self.dropout_mask = []
self.scope = scope
for layer in range(num_layers):
input_size_ = input_size if layer == 0 else 2 * num_units
gru_fw = tf.contrib.cudnn_rnn.CudnnGRU(
1, num_units, name="f_cudnn_gru")
gru_bw = tf.contrib.cudnn_rnn.CudnnGRU(
1, num_units, name="b_cudnn_gru")
self.grus.append((gru_fw, gru_bw, ))
def __call__(self, inputs, seq_len, keep_prob, concat_layers=True):
        # the cuDNN GRU expects time-major tensors, so swap the batch and time dimensions
outputs = [tf.transpose(inputs, [1, 0, 2])]
out_states = []
with tf.variable_scope(self.scope):
for layer in range(self.num_layers):
gru_fw, gru_bw = self.grus[layer]
with tf.variable_scope("fw_{}".format(layer)):
out_fw, (fw_state,) = gru_fw(outputs[-1])
with tf.variable_scope("bw_{}".format(layer)):
inputs_bw = tf.reverse_sequence(outputs[-1], seq_lengths=seq_len, seq_dim=0, batch_dim=1)
out_bw, (bw_state,) = gru_bw(inputs_bw)
out_bw = tf.reverse_sequence(out_bw, seq_lengths=seq_len, seq_dim=0, batch_dim=1)
outputs.append(tf.concat([out_fw, out_bw], axis=2))
out_states.append(tf.concat([fw_state, bw_state], axis=-1))
if concat_layers:
res = tf.concat(outputs[1:], axis=2)
final_state = tf.squeeze(tf.transpose(tf.concat(out_states, axis=0), [1,0,2]), axis=1)
else:
res = outputs[-1]
final_state = tf.squeeze(out_states[-1], axis=0)
res = tf.transpose(res, [1, 0, 2])
return res, final_state
class model_text_rnn_attention(object):
"""
    A simple bidirectional GRU followed by an attention layer.
"""
def __init__(self, embedding_matrix, sequence_length=50, num_classes=1,
embedding_size=300, trainable=True):
# Placeholders for input, output and dropout
self.input_x = tf.placeholder(tf.int32, [None, sequence_length], name="input_x")
self.input_y = tf.placeholder(tf.int32, [None], name="input_y")
self.keep_prob = tf.placeholder(tf.float32, name="keep_prob")
# Some variables
self.embedding_matrix = tf.get_variable("embedding_matrix", initializer=tf.constant(
embedding_matrix, dtype=tf.float32), trainable=False)
self.global_step = tf.get_variable('global_step', shape=[], dtype=tf.int32,
initializer=tf.constant_initializer(0), trainable=False)
with tf.name_scope("process"):
self.seq_len = tf.reduce_sum(tf.cast(tf.cast(self.input_x, dtype=tf.bool), dtype=tf.int32), axis=1, name="seq_len")
self.mask = tf.cast(self.input_x, dtype=tf.bool)
# The structure of the model
self.layers(num_classes)
# optimizer
if trainable:
self.learning_rate = tf.train.exponential_decay(
learning_rate=0.001, global_step=self.global_step, decay_steps=1000, decay_rate=0.95)
self.opt = tf.train.AdamOptimizer(learning_rate=self.learning_rate, epsilon=1e-8)
self.train_op = self.opt.minimize(self.loss, global_step=self.global_step)
def layers(self, num_classes):
# Embedding layer
with tf.variable_scope("embedding"):
self.embedding_inputs = tf.nn.embedding_lookup(self.embedding_matrix, self.input_x)
self.embedding_inputs = tf.nn.dropout(self.embedding_inputs, self.keep_prob)
# Bi-GRU Encoder
with tf.variable_scope("Bi-GRU"):
bi_rnn = cudnn_gru(num_layers=1, num_units=128, input_size=self.embedding_inputs.get_shape().as_list()[-1], scope="encoder")
rnn_output, _ = bi_rnn(self.embedding_inputs, seq_len=self.seq_len, keep_prob=self.keep_prob)
# shape: [batch_size, 2*hidden]
self.rnn_out = tf.nn.dropout(rnn_output, keep_prob=self.keep_prob)
with tf.variable_scope("Attention_Layer"):
"""
            Apply self-attention over the RNN outputs.
"""
att = dot_attention(inputs=self.rnn_out, memory=self.rnn_out, mask=self.mask, hidden=128,
keep_prob=self.keep_prob)
# pooling
att_out_1 = tf.reduce_mean(att, axis=2) # shape: [batch_size, 50]
att_out_2 = tf.reduce_max(att, axis=2)
self.att_out = tf.concat([att_out_1, att_out_2], axis=1)
with tf.variable_scope("fully_connected"):
"""
            Fully connected layer.
"""
# fc_W1 = tf.get_variable(
# shape=[self.rnn_out.get_shape().as_list()[1], 128],
# initializer=tf.contrib.layers.xavier_initializer(),
# name="fc_w1")
# fc_b1 = tf.get_variable(shape=[128], initializer=tf.constant_initializer(0.1), name="fc_b1")
# fc_1 = tf.nn.relu(tf.nn.bias_add(tf.matmul(self.rnn_out, fc_W1), fc_b1))
fc_1_drop = tf.nn.dropout(self.att_out, self.keep_prob)
fc_W2 = tf.get_variable(
shape=[self.att_out.get_shape().as_list()[1], num_classes],
initializer=tf.contrib.layers.variance_scaling_initializer(),
name="fc_w2")
fc_b2 = tf.get_variable(shape=[num_classes], initializer=tf.constant_initializer(0.1), name="fc_b2")
self.logits = tf.squeeze(tf.nn.bias_add(tf.matmul(fc_1_drop, fc_W2), fc_b2), name="logits")
with tf.variable_scope("sigmoid_and_loss"):
"""
            Use a sigmoid with a threshold instead of a softmax multi-class output.
"""
self.sigmoid = tf.nn.sigmoid(self.logits)
self.loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits=self.logits, labels=tf.cast(self.input_y, dtype=tf.float32)))
# batch generator
def batch_generator(train_X, train_Y, batch_size, is_train=True):
"""
    Batch generator:
    when is_train is True, the leftover samples are dropped, the data is reshuffled, and batching restarts.
"""
data_number = train_X.shape[0]
batch_count = 0
while True:
if batch_count * batch_size + batch_size > data_number:
            # handling of the final (incomplete) batch
if is_train:
                # drop the remainder, reshuffle, and start over
# shuffle
np.random.seed(2018)
trn_idx = np.random.permutation(data_number)
train_X = train_X[trn_idx]
train_Y = train_Y[trn_idx]
one_batch_X = train_X[0:batch_size]
one_batch_Y = train_Y[0:batch_size]
batch_count = 1
yield one_batch_X, one_batch_Y
else:
one_batch_X = train_X[batch_count * batch_size:data_number]
one_batch_Y = train_Y[batch_count * batch_size:data_number]
batch_count = 0
yield one_batch_X, one_batch_Y
else:
one_batch_X = train_X[batch_count * batch_size:batch_count * batch_size + batch_size]
one_batch_Y = train_Y[batch_count * batch_size:batch_count * batch_size + batch_size]
batch_count += 1
yield one_batch_X, one_batch_Y
# Undersample the positive (majority) class and augment the negative (minority) class; augmentation is currently random word-order shuffling.
def data_augmentation(X, Y, under_sample=100000, aug_num=3):
"""
    under_sample: number of samples dropped by undersampling
    aug_num: augmentation multiplier
"""
pos_X = []
neg_X = []
for i in range(X.shape[0]):
if Y[i] == 1:
neg_X.append(list(X[i]))
else:
pos_X.append(list(X[i]))
    # undersample the positive samples
random.shuffle(pos_X)
pos_X = pos_X[:-under_sample]
    # augment the positive samples
pos_X_aug = []
for i in range(200000):
aug = []
for x in pos_X[i]:
if x != 0:
aug.append(x)
else:
break
random.shuffle(aug)
aug += [0] * (max_len-len(aug))
pos_X_aug.append(aug)
pos_X.extend(pos_X_aug)
print(len(pos_X))
    # augment the negative samples
neg_X_aug = []
for i in range(aug_num):
for neg in neg_X:
aug = []
for x in neg:
if x != 0:
aug.append(x)
else:
break
random.shuffle(aug)
aug += [0] * (max_len-len(aug))
neg_X_aug.append(aug)
neg_X.extend(neg_X_aug)
print(len(neg_X))
pos_Y = np.zeros(shape=[len(pos_X)], dtype=np.int32)
neg_Y = np.ones(shape=[len(neg_X)], dtype=np.int32)
pos_X.extend(neg_X)
X_out = np.array(pos_X, dtype=np.int32)
Y_out = np.append(pos_Y, neg_Y)
print(X_out.shape)
#shuffling the data
np.random.seed(2018)
trn_idx = np.random.permutation(len(X_out))
X_out = X_out[trn_idx]
Y_out = Y_out[trn_idx]
print(X_out.shape)
print(Y_out.shape)
return X_out, Y_out
# search for the best classification threshold
def bestThreshold(y,y_preds):
tmp = [0,0,0] # idx, cur, max
delta = 0
for tmp[0] in tqdm(np.arange(0.1, 0.501, 0.01)):
tmp[1] = metrics.f1_score(y, np.array(y_preds)>tmp[0])
if tmp[1] > tmp[2]:
delta = tmp[0]
tmp[2] = tmp[1]
print('best threshold is {:.4f} with F1 score: {:.4f}'.format(delta, tmp[2]))
return delta , tmp[2]
# load the data and word embeddings (several embeddings can optionally be averaged)
train_X, test_X, train_Y, local_test_X, local_test_Y, word_index = load_and_prec(use_local_test)
# embedding_matrix_1 = load_glove(word_index)
embedding_matrix = load_fasttext(word_index)
# embedding_matrix = load_para(word_index)
# embedding_matrix = np.mean([embedding_matrix_1, embedding_matrix_3], axis = 0)
np.shape(embedding_matrix)
# embedding_matrix = np.zeros(shape=[100,300],dtype=np.float32)
# multi-fold training, cross-validation averaging, and testing
# split the data into cross-validation folds
DATA_SPLIT_SEED = 20190101
splits = list(StratifiedKFold(n_splits=5, shuffle=True, random_state=DATA_SPLIT_SEED).split(train_X, train_Y))
# test batch
test_batch = batch_generator(test_X, np.zeros(shape=[test_X.shape[0]], dtype=np.int32), batch_size, False)
local_test_batch = batch_generator(local_test_X, local_test_Y, batch_size, False)
# final outputs
train_preds = np.zeros(len(train_X), dtype=np.float32)
test_preds = np.zeros((len(test_X), len(splits)), dtype=np.float32)
test_preds_local = np.zeros((len(local_test_X), len(splits)), dtype=np.float32)
best_threshold = 0.33
# train across the folds
for i, (train_idx, valid_idx) in enumerate(splits):
print("fold:{}".format(i+1))
X_train = train_X[train_idx]
Y_train = train_Y[train_idx]
X_val = train_X[valid_idx]
Y_val = train_Y[valid_idx]
    # # data augmentation
# X_train, Y_train = data_augmentation(X_train, Y_train)
# print(Y_train[:100])
# print(Y_train[-100:])
    # batch generators for training and validation
train_batch = batch_generator(X_train, Y_train, batch_size, True)
val_batch = batch_generator(X_val, Y_val, batch_size, False)
    # keep the best result
best_val_f1 = 0.0
best_val_loss = 99999.99999
best_val_fold = []
best_test_fold = []
best_local_test_fold = []
    # train & validate & test
with tf.Graph().as_default():
sess_config = tf.ConfigProto(allow_soft_placement=True)
sess_config.gpu_options.allow_growth = True
with tf.Session(config=sess_config) as sess:
writer = tf.summary.FileWriter("./log/", sess.graph)
            # model
model = model_text_rnn_attention(embedding_matrix=embedding_matrix, sequence_length=max_len)
sess.run(tf.global_variables_initializer())
train_loss_sum = 0.0
start_time = time.time()
for go in range(20000):
steps = sess.run(model.global_step) + 1
                # training step
train_batch_X, train_batch_Y = next(train_batch)
feed = {model.input_x:train_batch_X, model.input_y:train_batch_Y, model.keep_prob:0.7}
loss, train_op = sess.run([model.loss, model.train_op], feed_dict=feed)
train_loss_sum += loss
                # validate & test
if steps % 1000 == 0:
val_predictions = []
val_loss_sum = 0.0
for _ in range(X_val.shape[0] // batch_size + 1):
val_batch_X, val_batch_Y = next(val_batch)
feed_val = {model.input_x:val_batch_X, model.input_y:val_batch_Y, model.keep_prob:1.0}
val_loss, val_sigmoid = sess.run([model.loss, model.sigmoid], feed_dict=feed_val)
val_predictions.extend(val_sigmoid)
val_loss_sum += val_loss
# val_f1 = metrics.f1_score(Y_val, np.array(val_predictions))
# val_pre = metrics.precision_score(Y_val, np.array(val_predictions))
# val_recall = metrics.recall_score(Y_val, np.array(val_predictions))
val_loss_sum = val_loss_sum / (X_val.shape[0] // batch_size + 1)
# print("steps:{}, train_loss:{:.5f}, val_loss:{:.5f}, val_F1:{:.5f}, val_pre:{:.5f}, val_recall:{:.5f}".format(
# steps, float(train_loss_sum / 1000), float(val_loss_sum), float(val_f1), float(val_pre), float(val_recall)))
end_time = time.time()
print("steps:{}, train_loss:{:.5f}, val_loss:{:.5f}, time:{:.5f}".format(
steps, float(train_loss_sum / 1000), float(val_loss_sum), end_time-start_time))
start_time = time.time()
                    # write to TensorBoard
train_loss_write = tf.Summary(value=[tf.Summary.Value(tag="model/train_loss", \
simple_value=train_loss_sum / 1000), ])
writer.add_summary(train_loss_write, steps)
val_loss_write = tf.Summary(value=[tf.Summary.Value(tag="model/val_loss", simple_value=val_loss_sum), ])
writer.add_summary(val_loss_write, steps)
# val_f1_write = tf.Summary(value=[tf.Summary.Value(tag="index/val_f1", simple_value=val_f1), ])
# writer.add_summary(val_f1_write, steps)
# val_pre_write = tf.Summary(value=[tf.Summary.Value(tag="index/val_precision", simple_value=val_pre), ])
# writer.add_summary(val_pre_write, steps)
# val_recall_write = tf.Summary(value=[tf.Summary.Value(tag="index/val_recall", simple_value=val_recall), ])
# writer.add_summary(val_recall_write, steps)
writer.flush()
# train loss
train_loss_sum = 0.0
                    # # test, keeping the predictions from the step with the best validation F1 as the final result
# if val_f1 > best_val_f1:
# best_val_f1 = val_f1
# best_test = []
# for _ in range(test_X.shape[0] // batch_size + 1):
# test_batch_X, _ = next(test_batch)
# feed_test = {model.input_x:test_batch_X, model.keep_prob:1.0}
# test_classes = sess.run(model.classes, feed_dict=feed_test)
# best_test.extend(test_classes)
# print("test done!")
                    # test, keeping the predictions from the step with the lowest validation loss as the final result
if val_loss_sum < best_val_loss and steps >= 10000:
best_val_loss = val_loss_sum
best_val_fold = val_predictions
best_test_fold = []
best_local_test_fold = []
                        # predictions for the online test set
for _ in range(test_X.shape[0] // batch_size + 1):
test_batch_X, _ = next(test_batch)
feed_test = {model.input_x:test_batch_X, model.keep_prob:1.0}
test_sigmoid = sess.run(model.sigmoid, feed_dict=feed_test)
best_test_fold.extend(test_sigmoid)
                        # predictions for the local (offline) test set
if use_local_test:
for _ in range(local_test_X.shape[0] // batch_size + 1):
local_test_batch_X, _ = next(local_test_batch)
feed_local_test = {model.input_x:local_test_batch_X, model.keep_prob:1.0}
local_test_sigmoid = sess.run(model.sigmoid, feed_dict=feed_local_test)
best_local_test_fold.extend(local_test_sigmoid)
print("test done!")
    # update the stored predictions
best_threshold, best_f1 = bestThreshold(Y_val, best_val_fold)
# train_preds[valid_idx] = np.array(best_val_fold)
test_preds[:, i] = np.array(best_test_fold)
if use_local_test:
test_preds_local[:, i] = np.array(best_local_test_fold)
# print("fold:{}, threshold:{}, F1_score:{:.5f}".format(i, best_threshold_fold, \
# metrics.f1_score(Y_val, (np.array(best_val_fold)>best_threshold_fold).astype(int)))))
    # single model: only run one fold
break
# post-process and write the submission
if use_local_test:
print("local_test_f1:{:.5f}".format(metrics.f1_score(local_test_Y, (test_preds_local.mean(axis=1) > best_threshold))))
sub = pd.read_csv('../input/sample_submission.csv')
sub["prediction"] = (test_preds.mean(axis=1)*5 > best_threshold).astype(int)
sub.to_csv("submission.csv", index=False)
pd.DataFrame(test_preds_local).corr()
# Auto Read File
```
import azureml.dataprep as dprep
```
Data Prep can load many kinds of text files. The `auto_read_file` entry point takes any text-based file (including Excel, JSON, and Parquet) and auto-detects how to parse it. It will also attempt to auto-detect the type of each column and apply type transformations to the columns it detects.
The result is a Dataflow object with all the steps required to read the given file(s) and convert their columns to the predicted types. No parameters are required beyond the file path or `FileDataSource` object.
```
dflow_auto = dprep.auto_read_file('../data/crime_multiple_separators.csv')
dflow_auto.head(5)
dflow_auto1 = dprep.auto_read_file('../data/crime.xlsx')
dflow_auto1.head(5)
dflow_auto2 = dprep.auto_read_file('../data/crime.parquet')
dflow_auto2.head(5)
```
Looking at the data, we can see that there are two empty columns on either side of the 'Completed' column.
If we compare the dataframe to a few rows from the original file:
```
ID |CaseNumber| |Completed|
10140490 |HY329907| |Y|
10139776 |HY329265| |Y|
```
We can see that the `|`'s have disappeared in the dataframe. This is because `|` is a very common separator character in CSV files, so `auto_read_file` guessed that it was the column separator. For this data we actually want the `|`'s to remain and to use space as the column separator instead.
To achieve this we can use `detect_file_format`. It takes a file path or datasource object and gives back a `FileFormatBuilder` which has learnt some information about the supplied data.
This is what `auto_read_file` is using behind the scenes to 'learn' the contents of the given file and determine how to parse it. With the `FileFormatBuilder` we can take advantage of the intelligent learning aspect of `auto_read_file` but have the chance to modify some of the learnt information.
```
ffb = dprep.detect_file_format('../data/crime_multiple_separators.csv')
ffb_2 = dprep.detect_file_format('../data/crime.xlsx')
ffb_3 = dprep.detect_file_format('../data/crime_fixed_width_file.txt')
ffb_4 = dprep.detect_file_format('../data/json.json')
print(ffb.file_format)
print(ffb_2.file_format)
print(ffb_3.file_format)
print(type(ffb_4.file_format))
```
After calling `detect_file_format` we get a `FileFormatBuilder` that has already had `learn` called on it. This means the `file_format` attribute is populated with a `<Parse|Read><type>Properties` object, which contains all the information that was learnt about the file. As we can see above, each file type has a corresponding `file_format` detected.
Continuing with our delimited example we can change any of these values and then call `ffb.to_dataflow()` to create a `Dataflow` that has the steps required to parse the datasource.
```
ffb.file_format.separator = ' '
dflow = ffb.to_dataflow()
df = dflow.to_pandas_dataframe()
df
```
The result is our desired dataframe with `|`'s included.
If we refer back to the original data output by `auto_read_file`, the 'ID' column was also detected as numeric and converted to a number data type instead of remaining a string like in the data above.
We can perform type inference on our new dataflow using the `dataflow.builders` property. This property exposes different builders that can `learn` from a dataflow and `apply` the learning to produce a new dataflow, very similar to the pattern we used above for the `FileFormatBuilder`.
```
ctb = dflow.builders.set_column_types()
ctb.learn()
ctb.conversion_candidates
```
After learning, `ctb.conversion_candidates` has been populated with the inferred types for each column. There can be multiple candidate types per column; in this example there is only one candidate for each column.
The candidates look correct: we only want to convert `ID` to an integer column, so applying this `ColumnTypesBuilder` should result in a Dataflow with our columns converted to their respective types.
```
dflow_converted = ctb.to_dataflow()
df_converted = dflow_converted.to_pandas_dataframe()
df_converted
```
```
import itertools
import math
import struct
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics, preprocessing
pd.options.display.max_rows = 2000
pd.options.display.max_columns = 1000
pd.options.display.width = 1500
def read_idx(filename):
with open(filename, "rb") as f:
zero, data_type, dims = struct.unpack(">HBB", f.read(4))
shape = tuple(struct.unpack(">I", f.read(4))[0] for d in range(dims))
return np.frombuffer(f.read(), dtype=np.uint8).reshape(shape)
train_data = read_idx("./train-images.idx3-ubyte")
train_label = read_idx("./train-labels.idx1-ubyte")
test_data = read_idx("./t10k-images.idx3-ubyte")
test_label = read_idx("./t10k-labels.idx1-ubyte")
```
## Naive Bayes - Discrete mode
---
### Formulation of the discrete mode:
- The decision rule is $$\hat{j} = \underset{j}{\operatorname{argmax}}\; \Pr(\theta_j \mid x) = \underset{j}{\operatorname{argmax}} \frac{\prod_{i=0}^{783} \Pr(x_i \mid \theta_j) \times \pi_j}{\Pr(x)}, \qquad \Pr(x_i \mid \theta_j) = \frac{n_{j,i,b(x_i)}}{N_j}$$ where $b(x_i)$ is the bin (0-31) that pixel $i$ falls into, $n_{j,i,b}$ is the number of training images of digit $j$ whose pixel $i$ falls in bin $b$, and $N_j$ is the total number of training images of digit $j$.
- $\pi_j$ is the prior $= \Pr[y=j], \hspace{1cm} 0 \leq j \leq 9$
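- The code below evaluates this in log space, adding a small pseudo-count to every bin so that no term becomes $\log 0$: $$\log \Pr(\theta_j \mid x) \;\propto\; \log \pi_j + \sum_{i=0}^{783} \log \frac{n_{j,i,b(x_i)}}{N_j}$$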
```
class NaiveBayes_Dis():
def __init__(self, train_data, train_label):
self.train_label = train_label
# How many numbers of digit 0~9
self.class_count = [0 for _ in range(10)]
# pixel_data[digit][pixel][bins] with pseudo count
self.pixel_data = [[[10**-5 for _ in range(32)]
for _ in range(28 * 28)] for _ in range(10)]
self.__buildTrainingData(train_data)
def mapping(self, value):
"""
For discrete mode: map 0~255 to 0~31
"""
if value == 255:
return 31
else:
return math.floor(value / 255 * 32)
def flatten(self, data):
flatten_data = []
for i in range(len(data)):
flatten_data.append(
[item for sublist in data[i] for item in sublist])
return flatten_data
def __buildTrainingData(self, data):
"""
For discrete mode: parse training data and save to dis_pixel_data[digit][pixel][bin]
"""
vec = np.vectorize(self.mapping)
data = vec(data)
# flatten_data
flatten_data = self.flatten(data)
# Build class_count and pixel_data
# class_count[digit]
# pixel_data[digit][pixel][bin]
for i, image in enumerate(flatten_data, 0):
self.class_count[self.train_label[i]] += 1
for pixel in range(len(image)):
self.pixel_data[self.train_label[i]][pixel][image[pixel]] += 1
def scale(self, posterior):
posterior = preprocessing.minmax_scale(posterior)
posterior = [i / posterior.sum() for i in posterior]
return posterior
def predict(self, test_data, test_label):
vec = np.vectorize(self.mapping)
test_data = vec(test_data)
flatten_test_data = self.flatten(test_data)
posterior = [np.zeros(10) for _ in range(len(flatten_test_data))]
predict = []
for i, img in enumerate(flatten_test_data, 0):
for num in range(10):
for pixel in range(784):
posterior[i][num] += np.log(
self.pixel_data[num][pixel][img[pixel]] /
self.class_count[num])
posterior[i][num] += np.log(self.class_count[num] / 60000)
predict.append(np.argmax(posterior[i]))
scaled_posterior = []
for post in posterior:
scaled_posterior.append(self.scale(post))
return scaled_posterior, predict
```
### Discrete mode: accuracy evaluation and guessed digit images
```
dis_result = NaiveBayes_Dis(train_data, train_label)
dis_post, dis_pred = dis_result.predict(test_data, test_label)
def discrete_output():
# test cases
for index in [0, 123]:
print("Posterior (in log scale):")
for i in range(10):
print("%d: %f" % (i, dis_post[index][i]))
print("Prediction: %d, Ans: %d" % (dis_pred[index], test_label[index]))
print("")
acc = metrics.accuracy_score(test_label, dis_pred)
print("Accuracy: %f\nError rate: %f" % (acc, 1 - acc))
discrete_output()
def discrete_guess():
guess = []
    for pixel_bin in dis_result.pixel_data:
        digit = []
for bins in pixel_bin:
digit.append(int(sum(bins[0:16]) <= sum(bins[16:])))
digit = np.reshape(digit, (28, 28))
guess.append(digit)
return guess
discrete_guess = discrete_guess()
guess_num = 4
print("Guess %d\n" % guess_num, pd.DataFrame(discrete_guess[guess_num]))
fig = plt.figure(figsize=(8, 8))
for num in range(10):
sub = fig.add_subplot(2, 5, num + 1)
sub.imshow(discrete_guess[num], cmap="Greys")
plt.tight_layout()
plt.show()
```
## Naive Bayes - Continuous mode
---
- $$\text{Posterior} = \frac{\Pr(x \mid c) \times \text{Prior}}{\text{marginal}}$$
- The likelihood $\Pr(x \mid c)$ is modeled with an independent Gaussian for each pixel.
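- Concretely, the per-pixel term computed by `log_gaussian` below is the log-density of a univariate Gaussian with per-class mean $\mu$ and variance $\sigma^2$ estimated from the training data (the degenerate cases $\sigma^2 = 0$ and $x = \mu$ are special-cased in the code): $$\log \mathcal{N}(x \mid \mu, \sigma^2) = -\tfrac{1}{2}\log(2\pi) - \tfrac{1}{2}\log \sigma^2 - \frac{(x-\mu)^2}{2\sigma^2}$$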
```
class NaiveBayes_Con():
def __init__(self, train_data, train_label):
self.train_label = train_label
self.__buildTrainingData(train_data)
def log_gaussian(self, x, mean, sigma):
if sigma == 0:
return 0
elif x == mean:
return 1
return -1 / 2 * np.log(2 * math.pi) - 1 / 2 * np.log(sigma) - 1 / 2 * (
x - mean)**2 / sigma
def __buildTrainingData(self, train_data):
self.class_count = [0 for _ in range(10)]
flatten_data = self.flatten(train_data)
self.pixel_data = [[[] for _ in range(784)] for _ in range(10)]
for i, img in enumerate(flatten_data, 0):
self.class_count[self.train_label[i]] += 1
for pixel in range(784):
self.pixel_data[self.train_label[i]][pixel].append(img[pixel])
        self.mean = [[] for _ in range(10)]
        self.variance = [[] for _ in range(10)]
for num in range(10):
for pixel in range(784):
self.mean[num].append(np.mean(self.pixel_data[num][pixel]))
self.variance[num].append(np.var(self.pixel_data[num][pixel]))
def predict(self, test_data, test_label):
flatten_data = self.flatten(test_data)
posterior = [[] for _ in range(len(flatten_data))]
predict = []
for i, img in enumerate(flatten_data, 0):
for num in range(10):
post = 0
for pixel in range(784):
likelihood = self.log_gaussian(img[pixel],
self.mean[num][pixel],
self.variance[num][pixel])
post += likelihood
post += np.log(self.class_count[num] / 60000)
posterior[i].append(post)
for i in posterior:
predict.append(np.argmax(i))
return posterior, predict
def flatten(self, data):
flatten_data = []
for i in range(len(data)):
flatten_data.append(
[item for sublist in data[i] for item in sublist])
return flatten_data
result_con = NaiveBayes_Con(train_data, train_label)
con_post, con_pred = result_con.predict(test_data, test_label)
acc = metrics.accuracy_score(test_label, con_pred)
print("Accuracy: %f\nError rate: %f" % (acc, 1 - acc))
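# A note on the beta-binomial updates below: each line of test.txt contributes m ones out of
# n = m + (number of zeros) trials. The binomial likelihood is evaluated at the MLE p = m/n,
# and the Beta(a, b) prior is updated conjugately to Beta(a + m, b + (n - m)).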
def Gamma(x):
return math.factorial(x - 1)
def Beta(x, y):
    return Gamma(x) * Gamma(y) / Gamma(x + y)
def betaDistribution(x, alpha, beta):
return 1 / Beta(alpha, beta) * x**(alpha - 1) * (1 - x)**(beta - 1)
def Bin_Likelihood(p, n, m):
return (math.factorial(n) / (math.factorial(m) * math.factorial(n - m))
) * p**m * (1 - p)**(n - m)
data = []
original_input = []
with open("test.txt", "r") as file:
for lines in file.readlines():
if lines[-1] == "\n":
original_input.append(lines[:-1])
else:
original_input.append(lines)
a = 0
b = 0
for i in range(len(lines)):
if lines[i] == "0":
b += 1
elif lines[i] == "1":
a += 1
data.append([a, b])
# data[line][a, b]
# case 1: initial a = 0, b = 0
prior_a = 0
prior_b = 0
post_a = 0
post_b = 0
for i in range(len(original_input)):
m = data[i][0]
n = data[i][0] + data[i][1]
p = m / n
likelihood = Bin_Likelihood(p, n, m)
print("case %d: %s" % (i + 1, original_input[i]))
print("Likelihood: %f" % likelihood)
print("Beta Prior: \ta = %d, b = %d" % (prior_a, prior_b))
prior_a += data[i][0]
prior_b += data[i][1]
post_a = prior_a
post_b = prior_b
print("Beta Posterior: a = %d, b = %d" % (post_a, post_b))
print("\n")
# case 2: initial a = 10, b = 1
prior_a = 10
prior_b = 1
post_a = 0
post_b = 0
for i in range(len(original_input)):
m = data[i][0]
n = data[i][0] + data[i][1]
p = m / n
likelihood = Bin_Likelihood(p, n, m)
print("case %d: %s" % (i + 1, original_input[i]))
print("Likelihood: %f" % likelihood)
print("Beta Prior: \ta = %d, b = %d" % (prior_a, prior_b))
prior_a += data[i][0]
prior_b += data[i][1]
post_a = prior_a
post_b = prior_b
print("Beta Posterior: a = %d, b = %d" % (post_a, post_b))
print("\n")
```
# WorkFlow
## Classes
## Load the data
## Test Modelling
## Modelling
**<hr>**
## Classes
```
import os
import cv2
import torch
import numpy as np
def load_data(img_size=112):
data = []
index = -1
labels = {}
for directory in os.listdir('./data/'):
index += 1
labels[f'./data/{directory}/'] = [index,-1]
print(len(labels))
for label in labels:
for file in os.listdir(label):
filepath = label + file
img = cv2.imread(filepath,cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img,(img_size,img_size))
img = img / 255.0
data.append([
np.array(img),
labels[label][0]
])
labels[label][1] += 1
for _ in range(12):
np.random.shuffle(data)
print(len(data))
np.save('./data.npy',data)
return data
import torch
def other_loading_data_proccess(data):
X = []
y = []
print('going through the data..')
for d in data:
X.append(d[0])
y.append(d[1])
print('splitting the data')
VAL_SPLIT = 0.25
VAL_SPLIT = len(X)*VAL_SPLIT
VAL_SPLIT = int(VAL_SPLIT)
X_train = X[:-VAL_SPLIT]
y_train = y[:-VAL_SPLIT]
X_test = X[-VAL_SPLIT:]
y_test = y[-VAL_SPLIT:]
print('turning data to tensors')
X_train = torch.from_numpy(np.array(X_train))
y_train = torch.from_numpy(np.array(y_train))
X_test = torch.from_numpy(np.array(X_test))
y_test = torch.from_numpy(np.array(y_test))
return [X_train,X_test,y_train,y_test]
```
**<hr>**
## Load the data
```
REBUILD_DATA = True
if REBUILD_DATA:
data = load_data()
np.random.shuffle(data)
X_train,X_test,y_train,y_test = other_loading_data_proccess(data)
```
## Test Modelling
```
import torch
import torch.nn as nn
import torch.nn.functional as F
class Test_Model(nn.Module):
def __init__(self,output:int=36):
super().__init__()
self.conv1 = nn.Conv2d(1,32,3)
self.conv2 = nn.Conv2d(32,64,3)
self.conv3 = nn.Conv2d(64,128,3)
self.conv4 = nn.Conv2d(128,256,3)
self.conv5 = nn.Conv2d(256,384,3)
self.relu = nn.ReLU()
self.max_pool2d = F.max_pool2d
self.fc1 = nn.Linear(384*1*1,32)
self.fc2 = nn.Linear(32,64)
self.fc3 = nn.Linear(64,128)
self.fc4 = nn.Linear(128,256)
self.fc5 = nn.Linear(256,512)
self.fc6 = nn.Linear(512,output)
def forward(self,X):
preds = self.conv1(X)
preds = self.relu(preds)
preds = self.max_pool2d(preds,(2,2))
preds = self.conv2(preds)
preds = self.relu(preds)
preds = self.max_pool2d(preds,(2,2))
preds = self.conv3(preds)
preds = self.relu(preds)
preds = self.max_pool2d(preds,(2,2))
preds = self.conv4(preds)
preds = self.relu(preds)
preds = self.max_pool2d(preds,(2,2))
preds = self.conv5(preds)
preds = self.relu(preds)
preds = self.max_pool2d(preds,(2,2))
preds = preds.view(-1,384*1*1)
preds = self.fc1(preds)
preds = self.relu(preds)
preds = self.fc2(preds)
preds = self.relu(preds)
preds = self.fc3(preds)
preds = self.relu(preds)
preds = self.fc4(preds)
preds = self.relu(preds)
preds = self.fc5(preds)
preds = self.relu(preds)
preds = self.fc6(preds)
# preds = self.relu(preds)
# return F.softmax(preds,dim=1)
return preds
device = torch.device('cuda')
model = Test_Model().to(device)
# preds = model(X_test.reshape(-1,1,112,112).float())
# preds[0]
optimizer = torch.optim.Adam(model.parameters(),lr=0.1)
criterion = nn.CrossEntropyLoss()
BATCH_SIZE = 32
EPOCHS = 5
loss_logs = []
from tqdm import tqdm
PROJECT_NAME = "Sign-Language-Recognition"
def test(net,X,y):
correct = 0
total = 0
net.eval()
with torch.no_grad():
for i in range(len(X)):
            real_class = y[i].to(device)  # labels are integer class indices, not one-hot vectors
            net_out = net(X[i].view(-1,1,112,112).to(device).float())
            net_out = net_out[0]
            predicted_class = torch.argmax(net_out)
            if predicted_class == real_class:
correct += 1
total += 1
return round(correct/total,3)
import wandb
len(os.listdir('./data/'))
import random
index = random.randint(0,29)
print(index)
wandb.init(project=PROJECT_NAME,name='test-CrossEntropyLoss-Adam-0.1')
for _ in tqdm(range(EPOCHS)):
for i in range(0,len(X_train),BATCH_SIZE):
X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
y_batch = y_train[i:i+BATCH_SIZE].to(device)
model.to(device)
preds = model(X_batch.float())
        loss = criterion(preds,y_batch.long())  # y_batch is already a tensor; just cast to long
optimizer.zero_grad()
loss.backward()
optimizer.step()
        loss_logs.append(loss.item())
        wandb.log({'loss':loss.item(),'accuracy':test(model,X_train,y_train)*100,'val_accuracy':test(model,X_test,y_test)*100,'pred':torch.argmax(preds[index]),'real':y_batch[index]})
wandb.finish()
import matplotlib.pyplot as plt
import pandas as pd
df = pd.Series(loss_logs)
df.plot.line(figsize=(12,6))
test(model,X_test,y_test)
test(model,X_train,y_train)
preds
y_batch
```
# Control tutorial
```
import pystablemotifs as sm
import networkx as nx
from timeit import default_timer
```
## Load network and find attractors
See the Basic Usage Tutorial for further details.
```
primes = sm.format.import_primes('../models/TLGL_fixed-inputs.txt',remove_constants=True)
sm.format.pretty_print_prime_rules(primes)
ar = sm.AttractorRepertoire.from_primes(primes)
ar.summary()
```
## Define a control target
Select a set of node values that we wish to drive the system toward. In this example, we specify a set of nodes (of size 1), namely `Apoptosis=1`, that uniquely identifies an attractor. This is not necessary in general (however, the succession-based methods require that the target is consistent with at least one attractor).
```
target = {'Apoptosis':1}
```
## Search for knockins/knockouts that achieve the target
### Brute-force
The `max_drivers` parameter limits our search to a maximum number of concurrent interventions.
Note that the brute-force approach scales poorly with the size of the network and, unless a value is specified for every variable, it does not guarantee convergence to an attractor. Therefore, the intervention must be permanent in general.
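As a rough illustration of that scaling (a simple counting argument about the search space, not a statement about the internals of `knock_to_partial_state`): with $n$ variables and at most $d$ concurrent interventions there are $\sum_{k=1}^{d}\binom{n}{k}2^{k}$ ways to fix up to $d$ variables to Boolean values, so the number of candidate interventions grows combinatorially in both $n$ and `max_drivers`.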
```
start=default_timer()
interventions = sm.drivers.knock_to_partial_state(target,primes,max_drivers=2)
end=default_timer()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions:
print({k:v for k,v in sorted(x.items())})
```
### Grasp search
Here we use a heuristic approach to search for drivers that achieve the target. The `GRASP_iterations` parameter controls how many heuristic searches are performed.
```
GRASP_iterations=2000
start=default_timer()
interventions = sm.drivers.GRASP(target,ar.primes,GRASP_iterations)
end=default_timer()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions:
print({k:v for k,v in sorted(x.items())})
```
### Internal history
In this method, all succession diagram pathways that are consistent with the target are identified. At each branch point in the succession diagram, the desired target stable motif is searched for internal driver node sets that drive the system into a narrower trap space containing the target. All possible paths are considered. All interventions can be permanent or temporary. Convergence to a consistent attractor (if it exists) is guaranteed.
```
start=default_timer()
interventions = ar.reprogram_to_trap_spaces(target,
target_method='history',
driver_method='internal')
end=default_timer()
print()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions: print({k:v for k,v in sorted(x.items())})
```
### Minimal history
This method also selects drivers for target stable motifs at each succession diagram branch point. It differs from the previous method in that it does not require these drivers to all be internal to each stable motif. This allows the method to uncover more parsimonious interventions at the cost of increased computational burden. It may identify interventions that are inconsistent with the target; such interventions *must* be temporary (e.g., temporary administration of a drug).
```
start=default_timer()
interventions = ar.reprogram_to_trap_spaces(target,
target_method='history',
driver_method='minimal')
end=default_timer()
print()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions:
print("---")
print("One temporary intervention from each list, in order.")
print("("+str(len(x))+" interventions in total)")
for y in x: print(y,"\n")
```
### GRASP history
This method is like the two above, but the driver search is conducted using a heuristic approach. This is most useful in extremely large networks. The benefit of the GRASP method is that it does not consider all possible variable combinations, and can therefore consider larger driver sets with comparatively little additional computational burden.
```
start=default_timer()
interventions = ar.reprogram_to_trap_spaces(target,
target_method='history',
driver_method='GRASP',
GRASP_iterations=500)
end=default_timer()
print()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions:
print("---")
print("One temporary intervention from each list, in order.")
print("("+str(len(x))+" interventions in total)")
for y in x: print(y,"\n")
```
### Minimal merge
In this method, all minimal trap spaces containing only attractors consistent with the target are found, and a brute force search is conducted to identify interventions of minimal size that drive the system into these trap spaces. Unlike the brute-force method, it does not require that the intervention be permanent. Interventions that are inconsistent with the target *must* be temporary. Generally, this method is slower than others, but also finds the most parsimonious interventions. The worst-case computation time grows rapidly with the `max_drivers` parameter, as all possible sets of variables up to this size can be considered.
```
start=default_timer()
interventions = ar.reprogram_to_trap_spaces(target,
target_method='merge',
driver_method='minimal',
max_drivers=4)
end=default_timer()
print()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions: print(x)
```
### Internal merge
This method is like the one above, but it only considers interventions that are internal to the stable motifs that constitute the trap spaces under consideration. Typically, this is faster, but it has the potential to overlook some interventions. Interventions can be temporary or permanent. As with the previous method, the worst-case computation time grows rapidly with the `max_drivers` parameter; however, rather than considering combinations of all variables, this method considers only combinations of variables that belong to the stable motifs that make up the target trap space. Therefore, the scaling is better than that of the previous method.
```
start=default_timer()
interventions = ar.reprogram_to_trap_spaces(target,
target_method='merge',
driver_method='internal',
max_drivers=4)
end=default_timer()
print()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions: print(x)
```
### GRASP merge
This method is like the two above, but the driver search is conducted using a heuristic approach. This is most useful in extremely large networks when it is anticipated that only large intervention sets will drive the system to its desired target. This is because the method's computational cost scales polynomially with the size of the considered intervention set (whereas the minimal merge method scales combinatorially).
```
start=default_timer()
interventions = ar.reprogram_to_trap_spaces(target,
target_method='merge',
driver_method='GRASP',
GRASP_iterations=500)
end=default_timer()
print()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions: print(x)
```
## Another example: EMT
In this example, we will consider driving the system so that it avoids the EMT transition. This example is more computationally expensive than the previous one.
```
primes = sm.format.import_primes("../models/EMT.txt",remove_constants=True)
sm.format.pretty_print_prime_rules(primes)
ar = sm.AttractorRepertoire.from_primes(primes)
ar.summary()
target = {'EMT':0}
```
### Brute-force
```
start=default_timer()
interventions = sm.drivers.knock_to_partial_state(target,primes,max_drivers=2)
end=default_timer()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions:
print({k:v for k,v in sorted(x.items())})
```
### Grasp search
```
GRASP_iterations=2000
start=default_timer()
interventions = sm.drivers.GRASP(target,ar.primes,GRASP_iterations)
end=default_timer()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions:
print({k:v for k,v in sorted(x.items())})
```
### Internal history
```
start=default_timer()
interventions = ar.reprogram_to_trap_spaces(target,
target_method='history',
driver_method='internal',
max_drivers=4)
end=default_timer()
print()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions: print({k:v for k,v in sorted(x.items())})
```
### Minimal history
```
start=default_timer()
interventions = ar.reprogram_to_trap_spaces(target,
target_method='history',
driver_method='minimal',
max_drivers=4)
end=default_timer()
print()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions:
print("---")
print("One temporary intervention from each list, in order.")
print("("+str(len(x))+" interventions in total)")
for y in x: print(y,"\n")
```
### GRASP history
```
start=default_timer()
interventions = ar.reprogram_to_trap_spaces(target,
target_method='history',
driver_method='GRASP',
GRASP_iterations=50000)
end=default_timer()
print()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions:
print("---")
print("One temporary intervention from each list, in order.")
print("("+str(len(x))+" interventions in total)")
for y in x: print(y,"\n")
```
### Minimal merge
```
start=default_timer()
interventions = ar.reprogram_to_trap_spaces(target,
target_method='merge',
driver_method='minimal',
max_drivers=4)
end=default_timer()
print()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions: print(x)
```
### Internal merge
```
start=default_timer()
interventions = ar.reprogram_to_trap_spaces(target,
target_method='merge',
driver_method='internal',
max_drivers=4)
end=default_timer()
print()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions: print(x)
```
### GRASP merge
```
start=default_timer()
interventions = ar.reprogram_to_trap_spaces(target,
target_method='merge',
driver_method='GRASP',
GRASP_iterations=50000)
end=default_timer()
print()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions: print(x)
```
start=default_timer()
interventions = ar.reprogram_to_trap_spaces(target,
target_method='history',
driver_method='minimal',
max_drivers=4)
end=default_timer()
print()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions:
print("---")
print("One temporary intervention from each list, in order.")
print("("+str(len(x))+" interventions in total)")
for y in x: print(y,"\n")
start=default_timer()
interventions = ar.reprogram_to_trap_spaces(target,
target_method='history',
driver_method='GRASP',
GRASP_iterations=50000)
end=default_timer()
print()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions:
print("---")
print("One temporary intervention from each list, in order.")
print("("+str(len(x))+" interventions in total)")
for y in x: print(y,"\n")
start=default_timer()
interventions = ar.reprogram_to_trap_spaces(target,
target_method='merge',
driver_method='minimal',
max_drivers=4)
end=default_timer()
print()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions: print(x)
start=default_timer()
interventions = ar.reprogram_to_trap_spaces(target,
target_method='merge',
driver_method='internal',
max_drivers=4)
end=default_timer()
print()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions: print(x)
start=default_timer()
interventions = ar.reprogram_to_trap_spaces(target,
target_method='merge',
driver_method='GRASP',
GRASP_iterations=50000)
end=default_timer()
print()
print("Time running method:",end-start)
print("Sets found:")
for x in interventions: print(x)
| 0.156717 | 0.974018 |
# Random Forest Classification
A random forest is a bagging ensemble of sub-trees, each built in the same way as a single decision tree but on a bootstrap sample of the training data; the trees then vote on the predicted class.
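For intuition only, here is a hand-rolled sketch of the bagging part of that idea: fit several decision trees on bootstrap resamples and let them vote. This is an illustrative addition (the helper name `simple_bagging` is made up for this sketch); a real random forest additionally samples a random subset of features at each split, which scikit-learn's `RandomForestClassifier` used below handles for us.
```
# Illustrative bagging sketch (not used in the rest of this notebook)
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

def simple_bagging(x_train, y_train, x_query, n_estimators=10, random_state=0):
    rng = np.random.RandomState(random_state)
    votes = []
    for _ in range(n_estimators):
        # bootstrap resample: draw training rows with replacement
        xb, yb = resample(x_train, y_train, random_state=rng.randint(10**6))
        tree = DecisionTreeClassifier(criterion='entropy', random_state=0)
        tree.fit(xb, yb)
        votes.append(tree.predict(x_query))
    votes = np.stack(votes)  # shape: (n_estimators, n_query_samples)
    # majority vote across the sub-trees (assumes integer class labels)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```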
#### preprocessing
```
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('../data_files/Social_Network_Ads.csv')
x = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, 4].values
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_test)
```
#### Model
```
# Fitting Random Forest classifier to the Training set
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators=10, criterion='entropy', random_state=0)
# n_estimators: the number of sub-trees in the forest; criterion='entropy' uses information gain to choose splits
classifier.fit(x_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(x_test)
print(y_pred)
# Making the confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
pd.DataFrame(cm)
# Visualizing the Training set results
from matplotlib.colors import ListedColormap
x_set, y_set = x_train, y_train
x1, x2 = np.meshgrid(np.arange(start=x_set[:, 0].min() - 1, stop=x_set[:, 0].max() + 1, step=0.01),
np.arange(start=x_set[:, 1].min() - 1, stop=x_set[:, 1].max() + 1, step=0.01))
plt.contourf(x1, x2, classifier.predict(np.array([x1.ravel(), x2.ravel()]).T).reshape(x1.shape),
alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(x1.min(), x1.max())
plt.ylim(x2.min(), x2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(x_set[y_set == j, 0], x_set[y_set == j, 1],
c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('Random Forest classification (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
# Visualizing the Test set results
from matplotlib.colors import ListedColormap
X_set, y_set = x_test, y_test
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('Random Forest classification (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```
|
github_jupyter
|
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('../data_files/Social_Network_Ads.csv')
x = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, 4].values
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_test)
# Fitting Random Forest classifier to the Training set
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators=10, criterion='entropy', random_state=0)
# n_estimators: the number of sub-trees in the forest; criterion='entropy' uses information gain to choose splits
classifier.fit(x_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(x_test)
print(y_pred)
# Making the confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
pd.DataFrame(cm)
# Visualizing the Training set results
from matplotlib.colors import ListedColormap
x_set, y_set = x_train, y_train
x1, x2 = np.meshgrid(np.arange(start=x_set[:, 0].min() - 1, stop=x_set[:, 0].max() + 1, step=0.01),
np.arange(start=x_set[:, 1].min() - 1, stop=x_set[:, 1].max() + 1, step=0.01))
plt.contourf(x1, x2, classifier.predict(np.array([x1.ravel(), x2.ravel()]).T).reshape(x1.shape),
alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(x1.min(), x1.max())
plt.ylim(x2.min(), x2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(x_set[y_set == j, 0], x_set[y_set == j, 1],
c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('Random Forest classification (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
# Visualizing the Test set results
from matplotlib.colors import ListedColormap
X_set, y_set = x_test, y_test
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('Random Forest classification (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
| 0.761006 | 0.944689 |
<div class="alert alert-block alert-info">
<b><h1>ENGR 1330 Computational Thinking with Data Science </h1></b>
</div>
Copyright © 2021 Theodore G. Cleveland and Farhang Forghanparast
Last GitHub Commit Date:
# 21: Testing Hypothesis - Introductions
- Comparing two (or more) collections of observations (graphically)
A procedure to systematically decide if two data collections are similar or substantially different.
## Objectives
- To apply fundamental concepts involved in probability estimation modeling and descriptive statistics;
- Concept of a hypothesis
- Hypothesis components
- Null hypothesis and alternative hypothesis
- Normal distribution model
- One-tail, two-tail tests
- Attained significance
- Decision Error
- Type-1, Type-2
## Computational Thinking Concepts
The CT concepts include:
- Abstraction => Represent data behavior with a function
- Pattern Recognition => Patterns in data models to make decision
In engineering, when we wish to start asking questions about the data and interpret the results, we use statistical methods that provide a confidence or likelihood about the answers. In general, this class of methods is called statistical hypothesis testing, or significance tests. The material for today's lecture is inspired by and gathered from several resources including:
- Hypothesis testing in Machine learning using Python by Yogesh Agrawal available at https://towardsdatascience.com/hypothesis-testing-in-machine-learning-using-python-a0dc89e169ce
- Demystifying hypothesis testing with simple Python examples by Tirthajyoti Sarkar available at https://towardsdatascience.com/demystifying-hypothesis-testing-with-simple-python-examples-4997ad3c5294
- A Gentle Introduction to Statistical Hypothesis Testing by Jason Brownlee available at https://machinelearningmastery.com/statistical-hypothesis-tests/
### Fundamental Concepts
#### <font color=crimson>What is hypothesis testing ?</font><br>
Hypothesis testing is a statistical method that is used in making statistical decisions (about population) using experimental data (samples). Hypothesis Testing is basically an assumption that we make about the population parameter.<br>
Example : You state "on average, students in the class are taller than 5 ft and 4 inches" or "an average boy is taller than an average girl" or "a specific treatment is effective in treating COVID-19 patients". <br>
We need some mathematical way to support that whatever we are stating is true.
We validate these hypotheses, basing our conclusion on random samples and empirical distributions.
#### <font color=crimson>Why do we use it ?</font><br>
Hypothesis testing is an essential procedure in experimentation. A hypothesis test evaluates two mutually exclusive statements about a population to determine which statement is supported by the sample data. When we say that a finding is **statistically significant**, it’s thanks to a hypothesis test.
### Comparing Two Collections
Let's first examine a typical data question; you will need an empty notebook to follow along, as only the code is supplied!
<font color=crimson>Do construction activities impact stormwater solids metrics?</font><br>
The webroot for the subsequent examples/exercises is [http://54.243.252.9/engr-1330-webroot/9-MyJupyterNotebooks/41A-HypothesisTests/](http://54.243.252.9/engr-1330-webroot/9-MyJupyterNotebooks/41A-HypothesisTests/)
### Background
The Clean Water Act (CWA) prohibits storm water discharge from construction sites
that disturb 5 or more acres, unless authorized by a National Pollutant Discharge
Elimination System (NPDES) permit. Permittees must provide a site description,
identify sources of contaminants that will affect storm water, identify appropriate
measures to reduce pollutants in stormwater discharges, and implement these measures.
The appropriate measures are further divided into four classes: erosion and
sediment control, stabilization practices, structural practices, and storm water management.
Collectively the site description and accompanying measures are known as
the facility’s Storm Water Pollution Prevention Plan (SW3P).
The permit contains no specific performance measures for construction activities,
but states that ”EPA anticipates that storm water management will be able to
provide for the removal of at least 80% of the total suspended solids (TSS).” The
rules also note ”TSS can be used as an indicator parameter to characterize the
control of other pollutants, including heavy metals, oxygen demanding pollutants,
and nutrients commonly found in stormwater discharges”; therefore, solids control is
critical to the success of any SW3P.
Although the NPDES permit requires SW3Ps to be in-place, it does not require
any performance measures as to the effectiveness of the controls with respect to
construction activities. The reason for the exclusion was to reduce costs associated
with monitoring storm water discharges, but unfortunately the exclusion also makes
it difficult for a permittee to assess the effectiveness of the controls implemented at
their site. Assessing the effectiveness of controls will aid the permittee concerned
with selecting the most cost effective SW3P.<br>
### Problem Statement <br>
The files precon.CSV and durcon.CSV contain observations of cumulative
rainfall, total solids, and total suspended solids collected from a construction
site on Nasa Road 1 in Harris County. <br>
The data in the file precon.CSV was collected `before` construction began. The data in the file durcon.CSV were collected `during` the construction activity.<br>
The first column is the date that the observation was made, the second column is the total solids (by standard methods), the third column is the total suspended solids (also by standard methods), and the last column is the cumulative rainfall for that storm.<br>
These data are not time series (there was sufficient time between site visits that you can safely assume each storm was independent).
__Our task is to analyze these two data sets and decide if construction activities impact stormwater quality in terms of solids measures.__
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
Let's introduce a script to automatically get the files from the named resource, in this case a web server!
```{note}
You would need to insert this script into your notebook, and run it to replicate the example here.
```
```
import requests # Module to process http/https requests
remote_url="http://54.243.252.9/engr-1330-webroot/9-MyJupyterNotebooks/41A-HypothesisTests/precon.csv" # set the url
rget = requests.get(remote_url, allow_redirects=True) # get the remote resource, follow imbedded links
open('precon.csv','wb').write(rget.content) # extract from the remote the contents, assign to a local file same name
remote_url="http://54.243.252.9/engr-1330-webroot/9-MyJupyterNotebooks/41A-HypothesisTests/durcon.csv" # set the url
rget = requests.get(remote_url, allow_redirects=True) # get the remote resource, follow imbedded links
open('durcon.csv','wb').write(rget.content) # extract from the remote the contents, assign to a local file same name
```
Read and examine the files, see if we can understand their structure
```
precon = pd.read_csv("precon.csv")
durcon = pd.read_csv("durcon.csv")
precon.head()
durcon.head()
precon.describe()
durcon.describe()
```
Let's make some exploratory histograms to guide our investigation
- Is the rainfall different before construction?
```
precon['RAIN.PRE'].hist(alpha=0.4,color='red',density=True)
durcon['RAIN.DUR'].hist(alpha=0.4,color='blue',density=True)
```
This will show that as "distributions" they look pretty similar, although the during construction data has a few larger events.
Now
- Is the total solids (TS) different before construction?
```
precon['TS.PRE'].hist(alpha=0.4,color='red',density=True)
durcon['TS.DUR'].hist(alpha=0.4,color='blue',density=True)
```
Here it is hard to tell, but the preconstruction values are all to the left while the during construction phase has some large values.
Let's compare means and standard deviations
$ \mu TS_{pre} = 463 $<br>
$ \sigma TS_{pre} = 361 $<br>
$ \mu TS_{dur} = 3495 $<br>
$ \sigma TS_{dur} = 7104 $<br>
Certainly different: the mean during construction is about 8 pre-construction standard deviations larger, which supports a claim that there is a difference. However, the standard deviation of the during-construction phase is huge, easily encompassing the pre-construction mean, so is there really a difference? We could resort to simulation to try to answer the question.
```
pre_s = np.random.normal(np.array(precon['TS.PRE']).mean(), np.array(precon['TS.PRE']).std(), 10000) # random sample from a normal distribution function
dur_s = np.random.normal(np.array(durcon['TS.DUR']).mean(), np.array(durcon['TS.DUR']).std(), 10000) # random sample from a normal distribution function
myfakedata_d = pd.DataFrame({'PreSim':pre_s,'DurSim':dur_s}) # make into a dataframe _d == derived
fig, ax = plt.subplots()
myfakedata_d.plot.hist(density=False, ax=ax, title='Histogram: Pre samples vs. Dur samples', bins=40)
ax.set_ylabel('Count')
ax.grid(axis='y')
```
Here we learn that the standard deviations matter a lot, and that the normal distribution is probably not the best model (negative solids don't make physical sense). However, it does point to the important issue: how do we quantify sameness or difference?
That's the goal of hypothesis testing methods.
In lab you will use similar explorations, and next time we will get into the actual methods.
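As a quick preview of those methods (this snippet is an addition, not part of the original lecture script), a nonparametric two-sample test such as the Mann-Whitney U test puts a number on how plausible it is that the pre- and during-construction TS samples come from the same distribution:
```
from scipy.stats import mannwhitneyu  # nonparametric two-sample test

# compare pre- vs during-construction total solids (drop any missing values first)
pre_ts = precon['TS.PRE'].dropna()
dur_ts = durcon['TS.DUR'].dropna()
stat, p_value = mannwhitneyu(pre_ts, dur_ts, alternative='two-sided')
print("Mann-Whitney U statistic:", stat)
print("p-value:", p_value)  # a small p-value suggests the two samples differ
```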
## References
<hr>
## Laboratory 21
**Examine** (click) Laboratory 21 as a webpage at [Laboratory 21.html](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab21/Lab21.html)
**Download** (right-click, save target as ...) Laboratory 21 as a jupyterlab notebook from [Laboratory 21.ipynb](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab21/Lab21.ipynb)
<hr><hr>
## Exercise Set 21
**Examine** (click) Exercise Set 21 as a webpage at [Exercise 21](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab21/Lab21-TH.html)
**Download** (right-click, save target as ...) Exercise Set 21 as a jupyterlab notebook at [Exercise Set 21](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab21/Lab21-TH.ipynb)
|
github_jupyter
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Read and examine the files, see if we can understand their structure
Lets make some exploratory histograms to guide our investigation
- Is the rainfall different before construction?
This will show that as "distributions" they look pretty similar, although the during construction data has a few larger events.
Now
- Is the total solids (TS) different before construction?
Here it is hard to tell, but the preconstruction values are all to the left while the during construction phase has some large values.
Lets compare means and standard deviations
$ \mu TS_{pre} = 463 $<br>
$ \sigma TS_{pre} = 361 $<br>
$ \mu TS_{dur} = 3495 $<br>
$ \sigma TS_{dur} = 7104 $<br>
Certainly different, and the mean during construction is 8 pre-construction standard deviations larger, hence supportive of a claim that there is a difference, however the standard deviation of the during phase is huge, easily encompassing the preconstruction mean, so is there really a difference? We could resort to simulation to try to answer the question.
| 0.379263 | 0.966632 |
<a href="https://colab.research.google.com/github/Data-Science-and-Data-Analytics-Courses/UCSanDiegoX---Machine-Learning-Fundamentals-03-Jan-2019-audit/blob/master/Packages/Git.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import os
from google.colab import drive, files
from urllib.parse import urlsplit
from pathlib import Path
import json
```
# Configure
```
def config(confile="", options="--global"):
"""
Configure Git
confile: path to .json configuration file, containing text: {"email": "<email>", "name": "<display name>", ...}
options: supported by git-config (https://git-scm.com/docs/git-config)
"""
# Configurations
confile = confile or input("Enter path to .json configuration file: ")
with open(confile) as f:
configs = json.load(f)
# Configure
!git config {options} user.email "{configs['email']}"
!git config {options} user.name "{configs['name']}"
if __name__ == "__main__":
config()
```
# Clone
```
def clone(url, dest=".", name="", options="--single-branch -b master", reloc=True):
"""
Clone url into dest
name: if provided, rename repository
options: supported by git-clone (https://git-scm.com/docs/git-clone)
reloc: if True, relocate to repository
"""
rurl = urlsplit(url)
dest = Path(dest).resolve()
repo = dest / (name or Path(rurl.path).name)
# Nested repositories not allowed
out = !git -C "{dest}" rev-parse
if not out: # inside repository
raise ValueError("Can't clone into existing repository")
# Clone
!git clone {options} "{rurl.geturl()}" "{repo}"
# Relocate
if reloc:
os.chdir(repo)
return repo
if __name__ == "__main__":
url = "https://github.com/Data-Science-and-Data-Analytics-Courses/UCSanDiegoX---Machine-Learning-Fundamentals-03-Jan-2019-audit"
clone(url)
```
# Push
```
def push(url, branch="HEAD", logfile=""):
"""
Push branch to url
branch: default, current branch; if not provided, all branches
logfile: path to .json log-in file, containing text: {"username": "<username>", "password": "<password>"}
"""
rurl = urlsplit(url)
# Log-in information
logfile = logfile or input("Enter path to .json log-in file: ")
with open(logfile) as f:
login = json.load(f)
# Construct authenticated remote
rauth = rurl._replace(netloc=f"{login['username']}:{login['password']}@{rurl.hostname}") # add username and password
# Push
if branch:
!git push "{rauth.geturl()}" "{branch}"
else:
!git push --all "{rauth.geturl()}"
if __name__ == "__main__":
url = "https://github.com/Data-Science-and-Data-Analytics-Courses/UCSanDiegoX---Machine-Learning-Fundamentals-03-Jan-2019-audit"
push(url)
```
|
github_jupyter
|
import os
from google.colab import drive, files
from urllib.parse import urlsplit
from pathlib import Path
import json
def config(confile="", options="--global"):
"""
Configure Git
confile: path to .json configuration file, containing text: {"email": "<email>", "name": "<display name>", ...}
options: supported by git-config (https://git-scm.com/docs/git-config)
"""
# Configurations
confile = confile or input("Enter path to .json configuration file: ")
with open(confile) as f:
configs = json.load(f)
# Configure
!git config {options} user.email "{configs['email']}"
!git config {options} user.name "{configs['name']}"
if __name__ == "__main__":
config()
def clone(url, dest=".", name="", options="--single-branch -b master", reloc=True):
"""
Clone url into dest
name: if provided, rename repository
options: supported by git-clone (https://git-scm.com/docs/git-clone)
reloc: if True, relocate to repository
"""
rurl = urlsplit(url)
dest = Path(dest).resolve()
repo = dest / (name or Path(rurl.path).name)
# Nested repositories not allowed
out = !git -C "{dest}" rev-parse
if not out: # inside repository
raise ValueError("Can't clone into existing repository")
# Clone
!git clone {options} "{rurl.geturl()}" "{repo}"
# Relocate
if reloc:
os.chdir(repo)
return repo
if __name__ == "__main__":
url = "https://github.com/Data-Science-and-Data-Analytics-Courses/UCSanDiegoX---Machine-Learning-Fundamentals-03-Jan-2019-audit"
clone(url)
def push(url, branch="HEAD", logfile=""):
"""
Push branch to url
branch: default, current branch; if not provided, all branches
logfile: path to .json log-in file, containing text: {"username": "<username>", "password": "<password>"}
"""
rurl = urlsplit(url)
# Log-in information
logfile = logfile or input("Enter path to .json log-in file: ")
with open(logfile) as f:
login = json.load(f)
# Construct authenticated remote
rauth = rurl._replace(netloc=f"{login['username']}:{login['password']}@{rurl.hostname}") # add username and password
# Push
if branch:
!git push "{rauth.geturl()}" "{branch}"
else:
!git push --all "{rauth.geturl()}"
if __name__ == "__main__":
url = "https://github.com/Data-Science-and-Data-Analytics-Courses/UCSanDiegoX---Machine-Learning-Fundamentals-03-Jan-2019-audit"
push(url)
| 0.326916 | 0.726013 |
# Inference and Validation
Now that you have a trained network, you can use it for making predictions. This is typically called **inference**, a term borrowed from statistics. However, neural networks have a tendency to perform *too well* on the training data and aren't able to generalize to data that hasn't been seen before. This is called **overfitting** and it impairs inference performance. To test for overfitting while training, we measure the performance on data not in the training set called the **validation** set. We avoid overfitting through regularization such as dropout while monitoring the validation performance during training. In this notebook, I'll show you how to do this in PyTorch.
As usual, let's start by loading the dataset through torchvision. You'll learn more about torchvision and loading data in a later part. This time we'll be taking advantage of the test set which you can get by setting `train=False` here:
```python
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
```
The test set contains images just like the training set. Typically you'll see 10-20% of the original dataset held out for testing and validation with the rest being used for training.
```
import torch
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```
Here I'll create a model like normal, using the same one from my solution for part 4.
```
from torch import nn, optim
import torch.nn.functional as F
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.log_softmax(self.fc4(x), dim=1)
return x
```
The goal of validation is to measure the model's performance on data that isn't part of the training set. Performance here is up to the developer to define though. Typically this is just accuracy, the percentage of classes the network predicted correctly. Other options are [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context)) and top-5 error rate. We'll focus on accuracy here. First I'll do a forward pass with one batch from the test set.
```
model = Classifier()
images, labels = next(iter(testloader))
# Get the class probabilities
ps = torch.exp(model(images))
# Make sure the shape is appropriate, we should get 10 class probabilities for 64 examples
print(ps.shape)
```
With the probabilities, we can get the most likely class using the `ps.topk` method. This returns the $k$ highest values. Since we just want the most likely class, we can use `ps.topk(1)`. This returns a tuple of the top-$k$ values and the top-$k$ indices. If the highest value is the fifth element, we'll get back 4 as the index.
```
top_p, top_class = ps.topk(1, dim=1)
# Look at the most likely classes for the first 10 examples
print(top_class[:10,:])
print(top_p[:10,:])
```
Now we can check if the predicted classes match the labels. This is simple to do by equating `top_class` and `labels`, but we have to be careful of the shapes. Here `top_class` is a 2D tensor with shape `(64, 1)` while `labels` is 1D with shape `(64)`. To get the equality to work out the way we want, `top_class` and `labels` must have the same shape.
If we do
```python
equals = top_class == labels
```
`equals` will have shape `(64, 64)`, try it yourself. What it's doing is comparing the one element in each row of `top_class` with each element in `labels` which returns 64 True/False boolean values for each row.
```
equals = top_class == labels.view(*top_class.shape)
```
Now we need to calculate the percentage of correct predictions. `equals` has binary values, either 0 or 1. This means that if we just sum up all the values and divide by the number of values, we get the percentage of correct predictions. This is the same operation as taking the mean, so we can get the accuracy with a call to `torch.mean`. If only it was that simple. If you try `torch.mean(equals)`, you'll get an error
```
RuntimeError: mean is not implemented for type torch.ByteTensor
```
This happens because `equals` has type `torch.ByteTensor` but `torch.mean` isn't implemented for tensors with that type. So we'll need to convert `equals` to a float tensor. Note that when we take `torch.mean` it returns a scalar tensor, to get the actual value as a float we'll need to do `accuracy.item()`.
```
accuracy = torch.mean(equals.type(torch.FloatTensor))
print(f'Accuracy: {accuracy.item()*100}%')
```
The network is untrained so it's making random guesses and we should see an accuracy around 10%. Now let's train our network and include our validation pass so we can measure how well the network is performing on the test set. Since we're not updating our parameters in the validation pass, we can speed up our code by turning off gradients using `torch.no_grad()`:
```python
# turn off gradients
with torch.no_grad():
# validation pass here
for images, labels in testloader:
...
```
>**Exercise:** Implement the validation loop below and print out the total accuracy after the loop. You can largely copy and paste the code from above, but I suggest typing it in because writing it out yourself is essential for building the skill. In general you'll always learn more by typing it rather than copy-pasting. You should be able to get an accuracy above 80%.
```
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
epochs = 15
steps = 0
train_losses, test_losses = [], []
for e in range(epochs):
running_loss = 0
running_acc = 0
for images, labels in trainloader:
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
        running_loss += loss.item()
        # recompute accuracy terms for this training batch (otherwise `equals` is stale here)
        ps = torch.exp(log_ps)
        top_p, top_class = ps.topk(1, dim=1)
        equals = top_class == labels.view(*top_class.shape)
        running_acc += torch.mean(equals.type(torch.FloatTensor))
else:
test_loss = 0
accuracy = 0
# Turn off gradients for validation, saves memory and computations
with torch.no_grad():
for images, labels in testloader:
log_ps = model(images)
test_loss += criterion(log_ps, labels)
ps = torch.exp(log_ps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor))
train_losses.append(running_loss/len(trainloader))
test_losses.append(test_loss/len(testloader))
print("Epoch: {}/{}.. ".format(e+1, epochs),
"Train Accuracy: {:.3f}".format(running_acc/len(trainloader)),
"Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)),
"Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
"Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
```
## Overfitting
If we look at the training and validation losses as we train the network, we can see a phenomenon known as overfitting.
<img src='assets/overfitting.png' width=450px>
The network learns the training set better and better, resulting in lower training losses. However, it starts having problems generalizing to data outside the training set leading to the validation loss increasing. The ultimate goal of any deep learning model is to make predictions on new data, so we should strive to get the lowest validation loss possible. One option is to use the version of the model with the lowest validation loss, here the one around 8-10 training epochs. This strategy is called *early-stopping*. In practice, you'd save the model frequently as you're training then later choose the model with the lowest validation loss.
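A rough sketch of that checkpointing idea is below; it is an illustrative addition (the helper name and filename are arbitrary, not part of the exercise), though the `state_dict`/`torch.save` calls are standard PyTorch:
```python
# Illustrative early-stopping checkpoint helper (assumed names)
best_val_loss = float('inf')

def checkpoint_if_improved(model, val_loss, path='best_classifier.pth'):
    """Save the model weights whenever the validation loss improves."""
    global best_val_loss
    if val_loss < best_val_loss:
        best_val_loss = val_loss
        torch.save(model.state_dict(), path)  # keep the best-so-far weights
        return True
    return False

# later, reload the weights with the lowest validation loss:
# model.load_state_dict(torch.load('best_classifier.pth'))
```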
The most common method to reduce overfitting (outside of early-stopping) is *dropout*, where we randomly drop input units. This forces the network to share information between weights, increasing its ability to generalize to new data. Adding dropout in PyTorch is straightforward using the [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout) module.
```python
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
# Dropout module with 0.2 drop probability
self.dropout = nn.Dropout(p=0.2)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
# Now with dropout
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.dropout(F.relu(self.fc3(x)))
# output so no dropout here
x = F.log_softmax(self.fc4(x), dim=1)
return x
```
During training we want to use dropout to prevent overfitting, but during inference we want to use the entire network. So, we need to turn off dropout during validation, testing, and whenever we're using the network to make predictions. To do this, you use `model.eval()`. This sets the model to evaluation mode where the dropout probability is 0. You can turn dropout back on by setting the model to train mode with `model.train()`. In general, the pattern for the validation loop will look like this, where you turn off gradients, set the model to evaluation mode, calculate the validation loss and metric, then set the model back to train mode.
```python
# turn off gradients
with torch.no_grad():
# set model to evaluation mode
model.eval()
# validation pass here
for images, labels in testloader:
...
# set model back to train mode
model.train()
```
> **Exercise:** Add dropout to your model and train it on Fashion-MNIST again. See if you can get a lower validation loss or higher accuracy.
```
## TODO: Define your model with dropout added
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
# Dropout module with 0.2 drop probability
self.dropout = nn.Dropout(p=0.2)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
# Now with dropout
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.dropout(F.relu(self.fc3(x)))
# output so no dropout here
x = F.log_softmax(self.fc4(x), dim=1)
return x
## TODO: Train your model with dropout, and monitor the training progress with the validation loss and accuracy
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
epochs = 10
steps = 0
train_losses, test_losses = [], []
for e in range(epochs):
running_loss = 0
running_acc = 0
for images, labels in trainloader:
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
        running_loss += loss.item()
        # recompute accuracy terms for this training batch (otherwise `equals` is stale here)
        ps = torch.exp(log_ps)
        top_p, top_class = ps.topk(1, dim=1)
        equals = top_class == labels.view(*top_class.shape)
        running_acc += torch.mean(equals.type(torch.FloatTensor))
else:
test_loss = 0
accuracy = 0
# Turn off gradients for validation, saves memory and computations
with torch.no_grad():
for images, labels in testloader:
log_ps = model(images)
test_loss += criterion(log_ps, labels)
ps = torch.exp(log_ps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor))
train_losses.append(running_loss/len(trainloader))
test_losses.append(test_loss/len(testloader))
print("Epoch: {}/{}.. ".format(e+1, epochs),
"Train Accuracy: {:.3f}".format(running_acc/len(trainloader)),
"Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)),
"Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
"Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
```
## Inference
Now that the model is trained, we can use it for inference. We've done this before, but now we need to remember to set the model in inference mode with `model.eval()`. You'll also want to turn off autograd with the `torch.no_grad()` context.
```
# Import helper module (should be in the repo)
import helper
# Test out your network!
model.eval()
dataiter = iter(testloader)
images, labels = dataiter.next()
img = images[0]
# Convert 2D image to 1D vector
img = img.view(1, 784)
# Calculate the class probabilities (softmax) for img
with torch.no_grad():
output = model.forward(img)
ps = torch.exp(output)
# Plot the image and probabilities
helper.view_classify(img.view(1, 28, 28), ps, version='Fashion')
```
## Next Up!
In the next part, I'll show you how to save your trained models. In general, you won't want to train a model every time you need it. Instead, you'll train once, save it, then load the model when you want to train more or use it for inference.
|
github_jupyter
|
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
import torch
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
from torch import nn, optim
import torch.nn.functional as F
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.log_softmax(self.fc4(x), dim=1)
return x
model = Classifier()
images, labels = next(iter(testloader))
# Get the class probabilities
ps = torch.exp(model(images))
# Make sure the shape is appropriate, we should get 10 class probabilities for 64 examples
print(ps.shape)
top_p, top_class = ps.topk(1, dim=1)
# Look at the most likely classes for the first 10 examples
print(top_class[:10,:])
print(top_p[:10,:])
equals = top_class == labels
equals = top_class == labels.view(*top_class.shape)
RuntimeError: mean is not implemented for type torch.ByteTensor
accuracy = torch.mean(equals.type(torch.FloatTensor))
print(f'Accuracy: {accuracy.item()*100}%')
# turn off gradients
with torch.no_grad():
# validation pass here
for images, labels in testloader:
...
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
epochs = 15
steps = 0
train_losses, test_losses = [], []
for e in range(epochs):
running_loss = 0
running_acc = 0
for images, labels in trainloader:
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
        running_loss += loss.item()
        # recompute accuracy terms for this training batch (otherwise `equals` is stale here)
        ps = torch.exp(log_ps)
        top_p, top_class = ps.topk(1, dim=1)
        equals = top_class == labels.view(*top_class.shape)
        running_acc += torch.mean(equals.type(torch.FloatTensor))
else:
test_loss = 0
accuracy = 0
# Turn off gradients for validation, saves memory and computations
with torch.no_grad():
for images, labels in testloader:
log_ps = model(images)
test_loss += criterion(log_ps, labels)
ps = torch.exp(log_ps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor))
train_losses.append(running_loss/len(trainloader))
test_losses.append(test_loss/len(testloader))
print("Epoch: {}/{}.. ".format(e+1, epochs),
"Train Accuracy: {:.3f}".format(running_acc/len(trainloader)),
"Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)),
"Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
"Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
# Dropout module with 0.2 drop probability
self.dropout = nn.Dropout(p=0.2)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
# Now with dropout
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.dropout(F.relu(self.fc3(x)))
# output so no dropout here
x = F.log_softmax(self.fc4(x), dim=1)
return x
# turn off gradients
with torch.no_grad():
# set model to evaluation mode
model.eval()
# validation pass here
for images, labels in testloader:
...
# set model back to train mode
model.train()
## TODO: Define your model with dropout added
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
# Dropout module with 0.2 drop probability
self.dropout = nn.Dropout(p=0.2)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
# Now with dropout
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.dropout(F.relu(self.fc3(x)))
# output so no dropout here
x = F.log_softmax(self.fc4(x), dim=1)
return x
## TODO: Train your model with dropout, and monitor the training progress with the validation loss and accuracy
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
epochs = 10
steps = 0
train_losses, test_losses = [], []
for e in range(epochs):
running_loss = 0
running_acc = 0
for images, labels in trainloader:
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
        running_loss += loss.item()
        # recompute accuracy terms for this training batch (otherwise `equals` is stale here)
        ps = torch.exp(log_ps)
        top_p, top_class = ps.topk(1, dim=1)
        equals = top_class == labels.view(*top_class.shape)
        running_acc += torch.mean(equals.type(torch.FloatTensor))
else:
test_loss = 0
accuracy = 0
# Turn off gradients for validation, saves memory and computations
with torch.no_grad():
for images, labels in testloader:
log_ps = model(images)
test_loss += criterion(log_ps, labels)
ps = torch.exp(log_ps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor))
train_losses.append(running_loss/len(trainloader))
test_losses.append(test_loss/len(testloader))
print("Epoch: {}/{}.. ".format(e+1, epochs),
"Train Accuracy: {:.3f}".format(running_acc/len(trainloader)),
"Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)),
"Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
"Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
# Import helper module (should be in the repo)
import helper
# Test out your network!
model.eval()
dataiter = iter(testloader)
images, labels = dataiter.next()
img = images[0]
# Convert 2D image to 1D vector
img = img.view(1, 784)
# Calculate the class probabilities (softmax) for img
with torch.no_grad():
output = model.forward(img)
ps = torch.exp(output)
# Plot the image and probabilities
helper.view_classify(img.view(1, 28, 28), ps, version='Fashion')
| 0.928198 | 0.992184 |
<h1>lgbm baseline</h1>
<h1>DATA LOADING</h1>
```
import pandas as pd
import numpy as np
import os
os.listdir('../../data')
assert 'out_breed.csv' in os.listdir('../../data') # this assert breaks if the data is configured incorrectly
breeds = pd.read_csv('../../data/out_breed.csv')
colors = pd.read_csv('../../data/out_color.csv')
states = pd.read_csv('../../data/out_state.csv')
train = pd.read_csv('../../data/out_train.csv')
test = pd.read_csv('../../data/out_test.csv')
sub = pd.read_csv('../../data/out_submission.csv')
```
<h1>MODEL</h1>
```
from lgbmModel import PredictiveModel
```
<h1>EXAMPLE USAGE</h1>
```
"""
This is really primitive data cleaning to make the model work: we drop the following
- AdoptionSpeed: the target
- Unnamed: 0, dataset_type: useless
- Name, RescuerID, Description, PhotoAmt, VideoAmt, PetID: identifiers, free text, and media counts this baseline doesn't use
"""
X = train.drop(["AdoptionSpeed", "Unnamed: 0", "dataset_type", "Name", "RescuerID", "Description", "PhotoAmt","VideoAmt","PetID"], axis=1)
X_test = test.drop(["Unnamed: 0", "dataset_type", "Name", "RescuerID", "Description", "PhotoAmt","VideoAmt","PetID"], axis=1)
"""
Y is our target value, Adoption Speed can be a value [1,2,3,4]
"""
Y = train['AdoptionSpeed']
assert X.shape[0] == Y.shape[0]
model = PredictiveModel("example_usage_model")
model.train(X, Y)
predictions = model.predict(X_test)
assert len(predictions)
```
<h1>VALIDATION</h1>
```
"""
This is really primitive data cleaning to make the model work: we drop the following
- AdoptionSpeed: the target
- Unnamed: 0, dataset_type: useless
- Name, RescuerID, Description, PetID: identifiers and free text the model can't use directly (PhotoAmt and VideoAmt are kept here)
"""
X = train.drop(["AdoptionSpeed", "Unnamed: 0", "dataset_type", "Name", "RescuerID", "Description", "PetID"], axis=1)
X_test = test.drop(["Unnamed: 0", "dataset_type", "Name", "RescuerID", "Description","PetID"], axis=1)
"""
Y is our target value, Adoption Speed can be a value [1,2,3,4]
"""
Y = train['AdoptionSpeed']
assert X.shape[0] == Y.shape[0]
model = PredictiveModel("validation_model_lgbm_baseline")
model.params
model.params['num_leaves'] = 50
model.validation(X, Y, method=2, verbose=True)
model.validation(X, Y, n_folds=1, verbose=True)
%matplotlib inline
model.visualize()
```
<h1>Exploration</h1>
```
dogs = train[train['Type'] == 1].drop('Type',axis=1)
cats = train[train['Type'] == 2].drop('Type',axis=1)
"""
This is really primitive data cleaning to make the model work: we drop the following
- AdoptionSpeed: the target
- Unnamed: 0, dataset_type: useless
- Name, RescuerID, Description, PetID: identifiers and free text the model can't use directly (PhotoAmt and VideoAmt are kept here)
"""
X = cats.drop(["AdoptionSpeed", "Unnamed: 0", "dataset_type", "Name", "RescuerID", "Description", "PetID"], axis=1)
"""
Y is our target value, Adoption Speed can be a value [1,2,3,4]
"""
Y = cats['AdoptionSpeed']
assert X.shape[0] == Y.shape[0]
X = X.reset_index().drop('index',axis=1)
Y = Y.reset_index().drop('index',axis=1)['AdoptionSpeed']
model = PredictiveModel("validation_model_lgbm_baseline_dogs")
model.params
model.validation(X, Y, method=2, verbose=True)
```
DOGS: 0.1749853610297417<br>
CATS: 0.12471785924455214
<h1>How to use the lightgbm library</h1>
```
import lightgbm as lgb
len(X)
lgb_train = lgb.Dataset(X[:-1000], Y[:-1000])
lgb_validation = lgb.Dataset(X.iloc[-1000:], Y.iloc[-1000:])
params_1 = {
'objective': 'multiclass',
'verbose': 1,
'num_class': 5,
'num_rounds':50
}
lgb.cv(params_1, lgb_train)
train_results = {}
model = lgb.train(params_1, lgb_train, evals_result = train_results, valid_sets = [lgb_train, lgb_validation], valid_names=('train','valid'), verbose_eval=1)
train_results
preds = model.predict(X)
print(preds)
from sklearn.metrics import accuracy_score as ac
ac(Y, np.argmax(preds, axis=1))
```
|
github_jupyter
|
import pandas as pd
import numpy as np
import os
os.listdir('../../data')
assert 'out_breed.csv' in os.listdir('../../data') # this assert breaks if the data is configured uncorrectly
breeds = pd.read_csv('../../data/out_breed.csv')
colors = pd.read_csv('../../data/out_color.csv')
states = pd.read_csv('../../data/out_state.csv')
train = pd.read_csv('../../data/out_train.csv')
test = pd.read_csv('../../data/out_test.csv')
sub = pd.read_csv('../../data/out_submission.csv')
from lgbmModel import PredictiveModel
"""
this is a really primitive data cleaning to make KNN works: we drop the followings
- AdoptionSpeed, is target
- Unnamed:0, dataset_type, is useless
- Name, RescuerId, Description, PhotoAmt, VideoAmt, PetID: this are all strings valued not able to be processed by KNN
"""
X = train.drop(["AdoptionSpeed", "Unnamed: 0", "dataset_type", "Name", "RescuerID", "Description", "PhotoAmt","VideoAmt","PetID"], axis=1)
X_test = test.drop(["Unnamed: 0", "dataset_type", "Name", "RescuerID", "Description", "PhotoAmt","VideoAmt","PetID"], axis=1)
"""
Y is our target value, Adoption Speed can be a value [1,2,3,4]
"""
Y = train['AdoptionSpeed']
assert X.shape[0] == Y.shape[0]
model = PredictiveModel("example_usage_model")
model.train(X, Y)
predictions = model.predict(X_test)
assert len(predictions)
"""
this is a really primitive data cleaning to make KNN works: we drop the followings
- AdoptionSpeed, is target
- Unnamed:0, dataset_type, is useless
- Name, RescuerId, Description, PhotoAmt, VideoAmt, PetID: this are all strings valued not able to be processed by KNN
"""
X = train.drop(["AdoptionSpeed", "Unnamed: 0", "dataset_type", "Name", "RescuerID", "Description", "PetID"], axis=1)
X_test = test.drop(["Unnamed: 0", "dataset_type", "Name", "RescuerID", "Description","PetID"], axis=1)
"""
Y is our target value, Adoption Speed can be a value [1,2,3,4]
"""
Y = train['AdoptionSpeed']
assert X.shape[0] == Y.shape[0]
model = PredictiveModel("validation_model_lgbm_baseline")
model.params
model.params['num_leaves'] = 50
model.validation(X, Y, method=2, verbose=True)
model.validation(X, Y, n_folds=1, verbose=True)
%matplotlib inline
model.visualize()
dogs = train[train['Type'] == 1].drop('Type',axis=1)
cats = train[train['Type'] == 2].drop('Type',axis=1)
"""
this is a really primitive data cleaning to make KNN works: we drop the followings
- AdoptionSpeed, is target
- Unnamed:0, dataset_type, is useless
- Name, RescuerId, Description, PhotoAmt, VideoAmt, PetID: this are all strings valued not able to be processed by KNN
"""
X = cats.drop(["AdoptionSpeed", "Unnamed: 0", "dataset_type", "Name", "RescuerID", "Description", "PetID"], axis=1)
"""
Y is our target value, Adoption Speed can be a value [1,2,3,4]
"""
Y = cats['AdoptionSpeed']
assert X.shape[0] == Y.shape[0]
X = X.reset_index().drop('index',axis=1)
Y = Y.reset_index().drop('index',axis=1)['AdoptionSpeed']
model = PredictiveModel("validation_model_lgbm_baseline_dogs")
model.params
model.validation(X, Y, method=2, verbose=True)
import lightgbm as lgb
len(X)
lgb_train = lgb.Dataset(X[:-1000], Y[:-1000])
lgb_validation = lgb.Dataset(X.iloc[-1000:], Y.iloc[-1000:])
params_1 = {
'objective': 'multiclass',
'verbose': 1,
'num_class': 5,
'num_rounds':50
}
lgb.cv(params_1, lgb_train)
train_results = {}
model = lgb.train(params_1, lgb_train, evals_result = train_results, valid_sets = [lgb_train, lgb_validation], valid_names=('train','valid'), verbose_eval=1)
train_results
preds = model.predict(X)
print(preds)
from sklearn.metrics import accuracy_score as ac
ac(Y, np.argmax(preds, axis=1))
| 0.640861 | 0.631722 |
# Inference using PSSR EM model
```
from fastai import *
from fastai.vision import *
from fastai.callbacks import *
from torchvision.models import vgg16_bn
import PIL
import imageio
import libtiff
import skimage
import skimage.filters
from utils.utils import FeatureLoss
from scipy.ndimage.interpolation import zoom as npzoom
from skimage.util import img_as_float32, img_as_ubyte
def tif_predict_movie_blend_slices(learn, tif_in, orig_out='orig.tif', pred_out='pred.tif', size=128):
data = libtiff.TiffFile(tif_in)
data = data.get_tiff_array()
depths = data.shape[0]
img_max = None
for depth in progress_bar(list(range(depths))):
img = data[depth].astype(np.float32)
if img_max is None: img_max = img.max() * 1.0
img /= img_max
img = img[np.newaxis, :]
out_img = unet_image_from_tiles_blend(learn, img, tile_sz=size)
pred = (out_img[None]*65535).astype(np.uint16)
pred_img_out = pred_out+f'_slice{depth}.tif'
skimage.io.imsave(pred_img_out,pred)
# takes a float image (optionally with mi/ma/img_max normalization info) and returns a blended prediction scaled to 0-1.0
def unet_image_from_tiles_blend(learn, in_img, tile_sz=256, scale=4, overlap_pct=5.0, img_info=None):
n_frames = in_img.shape[0]
if img_info:
mi, ma, imax = [img_info[fld] for fld in ['mi','ma','img_max']]
in_img = ((in_img - mi) / (ma - mi + 1e-20)).clip(0.,1.)
else:
mi, ma = 0., 1.
in_img = np.stack([npzoom(in_img[i], scale, order=1) for i in range(n_frames)])
overlap = int(tile_sz*(overlap_pct/100.) // 2 * 2)
step_sz = tile_sz - overlap
h,w = in_img.shape[1:3]
assembled = np.zeros((h,w))
x_seams = set()
y_seams = set()
for x_tile in range(0,math.ceil(w/step_sz)):
for y_tile in range(0,math.ceil(h/step_sz)):
x_start = x_tile*step_sz
x_end = min(x_start + tile_sz, w)
y_start = y_tile*step_sz
y_end = min(y_start + tile_sz, h)
src_tile = in_img[:,y_start:y_end,x_start:x_end]
in_tile = torch.zeros((tile_sz, tile_sz, n_frames))
in_x_size = x_end - x_start
in_y_size = y_end - y_start
if (in_y_size, in_x_size) != src_tile.shape[1:3]: set_trace()
in_tile[0:in_y_size, 0:in_x_size, :] = tensor(src_tile).permute(1,2,0)
if n_frames > 1:
img_in = MultiImage([Image(in_tile[:,:,i][None]) for i in range(n_frames)])
else:
img_in = Image(in_tile[:,:,0][None])
pred, _, _ = learn.predict(img_in)
out_tile = pred.data.numpy()[0]
half_overlap = overlap // 2
left_adj = half_overlap if x_start != 0 else 0
right_adj = half_overlap if x_end != w else 0
top_adj = half_overlap if y_start != 0 else 0
bot_adj = half_overlap if y_end != h else 0
trim_y_start = y_start + top_adj
trim_x_start = x_start + left_adj
trim_y_end = y_end - bot_adj
trim_x_end = x_end - right_adj
out_x_start = left_adj
out_y_start = top_adj
out_x_end = in_x_size - right_adj
out_y_end = in_y_size - bot_adj
assembled[trim_y_start:trim_y_end, trim_x_start:trim_x_end] = out_tile[out_y_start:out_y_end, out_x_start:out_x_end]
if trim_x_start != 0: x_seams.add(trim_x_start)
if trim_y_start != 0: y_seams.add(trim_y_start)
blur_rects = []
blur_size = 5
for x_seam in x_seams:
left = x_seam - blur_size
right = x_seam + blur_size
top, bottom = 0, h
blur_rects.append((slice(top, bottom), slice(left, right)))
for y_seam in y_seams:
top = y_seam - blur_size
bottom = y_seam + blur_size
left, right = 0, w
blur_rects.append((slice(top, bottom), slice(left, right)))
for xs,ys in blur_rects:
assembled[xs,ys] = skimage.filters.gaussian(assembled[xs,ys], sigma=1.0)
if assembled.min() < 0: assembled -= assembled.min()
return assembled.astype(np.float32)
```
## Set path for test sets
```
# Modify accordingly
testset_path = Path('stats')
testset_name = 'real-world_SEM'
lr_path = testset_path/f'LR/{testset_name}'
results = testset_path/f'LR-PSSR/{testset_name}'
test_files = list(lr_path.glob('*.tif'))
if results.exists(): shutil.rmtree(results)
results.mkdir(parents=True, mode=0o775, exist_ok=True)
print('Processing '+str(len(test_files))+' files...')
```
## Load PSSR model
```
model_name = 'PSSR_for_EM_1024'
learn = load_learner('models/pkl_files', f'{model_name}.pkl')
size = int(model_name.split('_')[-1])
print(f'{model_name} model is being used.')
```
## Inference
```
for fn in test_files:
print(f'Processing:{fn.stem}')
pred_name = str(results/f'{fn.stem}_pred')
orig_name = results/f'{fn.stem}_orig.tif'
tif_predict_movie_blend_slices(learn, fn, size=size, orig_out=orig_name, pred_out=pred_name )
print('All done!')
```
|
github_jupyter
|
from fastai import *
from fastai.vision import *
from fastai.callbacks import *
from torchvision.models import vgg16_bn
import PIL
import imageio
import libtiff
import skimage
import skimage.filters
from utils.utils import FeatureLoss
from scipy.ndimage.interpolation import zoom as npzoom
from skimage.util import img_as_float32, img_as_ubyte
def tif_predict_movie_blend_slices(learn, tif_in, orig_out='orig.tif', pred_out='pred.tif', size=128):
data = libtiff.TiffFile(tif_in)
data = data.get_tiff_array()
depths = data.shape[0]
img_max = None
for depth in progress_bar(list(range(depths))):
img = data[depth].astype(np.float32)
if img_max is None: img_max = img.max() * 1.0
img /= img_max
img = img[np.newaxis, :]
out_img = unet_image_from_tiles_blend(learn, img, tile_sz=size)
pred = (out_img[None]*65535).astype(np.uint16)
pred_img_out = pred_out+f'_slice{depth}.tif'
skimage.io.imsave(pred_img_out,pred)
# take float in with info about mi,ma,max in and spits out (0-1.0)
def unet_image_from_tiles_blend(learn, in_img, tile_sz=256, scale=4, overlap_pct=5.0, img_info=None):
n_frames = in_img.shape[0]
if img_info:
mi, ma, imax = [img_info[fld] for fld in ['mi','ma','img_max']]
in_img = ((in_img - mi) / (ma - mi + 1e-20)).clip(0.,1.)
else:
mi, ma = 0., 1.
in_img = np.stack([npzoom(in_img[i], scale, order=1) for i in range(n_frames)])
overlap = int(tile_sz*(overlap_pct/100.) // 2 * 2)
step_sz = tile_sz - overlap
h,w = in_img.shape[1:3]
assembled = np.zeros((h,w))
x_seams = set()
y_seams = set()
for x_tile in range(0,math.ceil(w/step_sz)):
for y_tile in range(0,math.ceil(h/step_sz)):
x_start = x_tile*step_sz
x_end = min(x_start + tile_sz, w)
y_start = y_tile*step_sz
y_end = min(y_start + tile_sz, h)
src_tile = in_img[:,y_start:y_end,x_start:x_end]
in_tile = torch.zeros((tile_sz, tile_sz, n_frames))
in_x_size = x_end - x_start
in_y_size = y_end - y_start
if (in_y_size, in_x_size) != src_tile.shape[1:3]: set_trace()
in_tile[0:in_y_size, 0:in_x_size, :] = tensor(src_tile).permute(1,2,0)
if n_frames > 1:
img_in = MultiImage([Image(in_tile[:,:,i][None]) for i in range(n_frames)])
else:
img_in = Image(in_tile[:,:,0][None])
pred, _, _ = learn.predict(img_in)
out_tile = pred.data.numpy()[0]
half_overlap = overlap // 2
left_adj = half_overlap if x_start != 0 else 0
right_adj = half_overlap if x_end != w else 0
top_adj = half_overlap if y_start != 0 else 0
bot_adj = half_overlap if y_end != h else 0
trim_y_start = y_start + top_adj
trim_x_start = x_start + left_adj
trim_y_end = y_end - bot_adj
trim_x_end = x_end - right_adj
out_x_start = left_adj
out_y_start = top_adj
out_x_end = in_x_size - right_adj
out_y_end = in_y_size - bot_adj
assembled[trim_y_start:trim_y_end, trim_x_start:trim_x_end] = out_tile[out_y_start:out_y_end, out_x_start:out_x_end]
if trim_x_start != 0: x_seams.add(trim_x_start)
if trim_y_start != 0: y_seams.add(trim_y_start)
blur_rects = []
blur_size = 5
for x_seam in x_seams:
left = x_seam - blur_size
right = x_seam + blur_size
top, bottom = 0, h
blur_rects.append((slice(top, bottom), slice(left, right)))
for y_seam in y_seams:
top = y_seam - blur_size
bottom = y_seam + blur_size
left, right = 0, w
blur_rects.append((slice(top, bottom), slice(left, right)))
for xs,ys in blur_rects:
assembled[xs,ys] = skimage.filters.gaussian(assembled[xs,ys], sigma=1.0)
if assembled.min() < 0: assembled -= assembled.min()
return assembled.astype(np.float32)
# Modify accordingly
testset_path = Path('stats')
testset_name = 'real-world_SEM'
lr_path = testset_path/f'LR/{testset_name}'
results = testset_path/f'LR-PSSR/{testset_name}'
test_files = list(lr_path.glob('*.tif'))
if results.exists(): shutil.rmtree(results)
results.mkdir(parents=True, mode=0o775, exist_ok=True)
print('Processing '+str(len(test_files))+' files...')
model_name = 'PSSR_for_EM_1024'
learn = load_learner('models/pkl_files', f'{model_name}.pkl')
size = int(model_name.split('_')[-1])
print(f'{model_name} model is being used.')
for fn in test_files:
print(f'Processing:{fn.stem}')
pred_name = str(results/f'{fn.stem}_pred')
orig_name = results/f'{fn.stem}_orig.tif'
tif_predict_movie_blend_slices(learn, fn, size=size, orig_out=orig_name, pred_out=pred_name )
print('All done!')
| 0.388966 | 0.688926 |
## Guiding Layout with Edge Weights
We can use edge attributes to guide the layout by having how much the nodes of an edge get attracted to one another be influenced by that attribute. This is useful in several scenarios:
* An edge has a natural property, such as `affinity`
* An edge represents multiple edges and thus represents a non-uniform weight such as `count`
* Algorithms provide edge properties, such as `relevance`
By binding such an edge column to **edge_weight** and optionally tuning how much to factor in that column with the **edgeInfluence** control, we can guide the clustering to factor in the edge weight.
1. By default, every edge contributes a weight of `1` on how much to pull its nodes together.
* Multiple edges between the same 2 nodes will thus cause those nodes to be closer together
2. Activate edge weights in `api=3` (2.0): Edges with high edge weights bring their nodes closer together; edges with low weight allow their nodes to move further apart
* Set via binding `edge_weight` (`.bind(edge_weight='my_col')`)
* Edge weight values automatically normalize between 0 and 1 starting with v2.30.25
3. The edge influence control guides whether to ignore the edge weight (`0`) or rely on it primarily (`7+`)
* Set via the UI (`Layout Controls` -> `Edge Influence`) or via url parameter `edgeInfluence` (`.settings(url_params={'edgeInfluence': 2})`)
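A minimal sketch of the binding pattern described above (not part of the original demo), assuming a hypothetical edge table `df` with source column `s`, destination column `d`, and a weight column `my_col`:
```
import pandas as pd, graphistry
# hypothetical edge list with a per-edge weight column
df = pd.DataFrame({'s': ['a', 'a', 'b'], 'd': ['b', 'c', 'c'], 'my_col': [1.0, 0.1, 0.5]})
g = (graphistry.edges(df)
     .bind(source='s', destination='d', edge_weight='my_col')  # bind the weight column
     .settings(url_params={'edgeInfluence': 2}))               # tune how much the weight matters
# g.plot()  # requires graphistry.register(...) first
```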
```
import pandas as pd, graphistry
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
```
### Demo: Strongly connected graph of 20 nodes
* No edge weight: Appears as a regular mesh
* Same edge weights: Appears as a regular mesh
* Edge weight `1` for edges (`i`, `i+1`), defining a chain, and the other edges set to weight `0`:
* `'edgeInfluence': 0`: Appears as a regular mesh
* `'edgeInfluence': 1`: Still a mesh, but start to see a chain interleaved
* `'edgeInfluence': 2`: The chain starts to form a circle around the mesh
* `'edgeInfluence': 7`: The chain starts to become a straight line; the other edges have little relative impact (no more mesh)
* Edge weight `100` instead of `1` for the chain: same as edge weight `1` due to normalization
* Edge weight `1` for the chain's edges and `-1` for the rest: Same due to normalization
```
edges = []
n = 20
k = 2
edges = pd.DataFrame({
's': [i for i in range(0,n) for j in range(0,n) if i != j],
'd': [j for i in range(0,n) for j in range(0,n) if i != j]
})
edges['1_if_neighbor'] = edges.apply(
lambda r: \
1 \
if (r['s'] == r['d'] - 1) \
or (r['s'] == r['d'] + 1) \
else 0,
axis=1).astype('float32')
edges['100_if_neighbor'] = (edges['1_if_neighbor'] * 100).astype('int64')
edges['ec'] = edges['1_if_neighbor'].apply(lambda v: round(v) * 0xFF000000)
edges.head(20)
URL_PARAMS = {'play': 5000, 'edgeCurvature': 0.1, 'precisionVsSpeed': -3}
g = graphistry.edges(edges).bind(source='s', destination='d', edge_color='ec').settings(url_params=URL_PARAMS)
```
### Edge Influence 0: No weights -- a mesh
```
g.bind(edge_weight='1_if_neighbor').settings(url_params={**URL_PARAMS, 'edgeInfluence': 0}).plot(render=True)
```
### Edge influence 1: Some weight -- chain interleaved into the mesh
```
g.bind(edge_weight='1_if_neighbor').settings(url_params={**URL_PARAMS, 'edgeInfluence': 1}).plot(render=True)
```
### Edge influence 2: Strong weight -- chain becomes circumference of mesh
```
g.bind(edge_weight='1_if_neighbor').settings(url_params={**URL_PARAMS, 'edgeInfluence': 2}).plot(render=True)
```
### Edge influence 7: Non-chain edges lose relative influence -- chain becomes a straight line (no more mesh)
```
g.bind(edge_weight='1_if_neighbor').settings(url_params={**URL_PARAMS, 'edgeInfluence': 7}).plot(render=True)
```
### Edge weights -1 to 1, and 0 to 100: Same as if edge weights were between 0 and 1
Graphistry automatically normalizes edge weights in version 2.30.25+
```
g.edges(g._edges.assign(with_negative=\
g._edges['1_if_neighbor'].apply(lambda v: \
-1 if v == 0 else 1 )))\
.bind(edge_weight='1_if_neighbor').settings(url_params={**URL_PARAMS, 'edgeInfluence': 1}).plot(render=True)
g.bind(edge_weight='100_if_neighbor').settings(url_params={**URL_PARAMS, 'edgeInfluence': 2}).plot(render=True)
```
|
github_jupyter
|
import pandas as pd, graphistry
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
edges = []
n = 20
k = 2
edges = pd.DataFrame({
's': [i for i in range(0,n) for j in range(0,n) if i != j],
'd': [j for i in range(0,n) for j in range(0,n) if i != j]
})
edges['1_if_neighbor'] = edges.apply(
lambda r: \
1 \
if (r['s'] == r['d'] - 1) \
or (r['s'] == r['d'] + 1) \
else 0,
axis=1).astype('float32')
edges['100_if_neighbor'] = (edges['1_if_neighbor'] * 100).astype('int64')
edges['ec'] = edges['1_if_neighbor'].apply(lambda v: round(v) * 0xFF000000)
edges.head(20)
URL_PARAMS = {'play': 5000, 'edgeCurvature': 0.1, 'precisionVsSpeed': -3}
g = graphistry.edges(edges).bind(source='s', destination='d', edge_color='ec').settings(url_params=URL_PARAMS)
g.bind(edge_weight='1_if_neighbor').settings(url_params={**URL_PARAMS, 'edgeInfluence': 0}).plot(render=True)
g.bind(edge_weight='1_if_neighbor').settings(url_params={**URL_PARAMS, 'edgeInfluence': 1}).plot(render=True)
g.bind(edge_weight='1_if_neighbor').settings(url_params={**URL_PARAMS, 'edgeInfluence': 2}).plot(render=True)
g.bind(edge_weight='1_if_neighbor').settings(url_params={**URL_PARAMS, 'edgeInfluence': 7}).plot(render=True)
g.edges(g._edges.assign(with_negative=\
g._edges['1_if_neighbor'].apply(lambda v: \
-1 if v == 0 else 1 )))\
.bind(edge_weight='1_if_neighbor').settings(url_params={**URL_PARAMS, 'edgeInfluence': 1}).plot(render=True)
g.bind(edge_weight='100_if_neighbor').settings(url_params={**URL_PARAMS, 'edgeInfluence': 2}).plot(render=True)
| 0.386879 | 0.985635 |
## Kubeflow UI Setup and Notebook Creation
The notebook below adds explanations to the original notebook.
Please follow along using the notebook as a guide.
- While running it, check that your output matches the "result values" of the cells shown here.
- If an error occurs, investigate it and run the cell again.
- https://github.com/data-science-on-aws/workshop/blob/master/12_kubeflow/00_05_Launch_Kubeflow_Jupyter_Notebook.ipynb
# Enable the Public Kubeflow UI
This deploys `istio-ingress` in the `istio-system` namespace and creates a publicly-available `LoadBalancer` endpoint.
THIS IS A PUBLIC ENDPOINT.
```
%%bash
source ~/.bash_profile
###################################################################
# UNCOMMENT THIS OUT - PUBLIC ENDPOINT
###################################################################
cd ${KF_DIR}/.cache/manifests/manifests-1.0.2/
kubectl apply -k aws/istio-ingress/base --namespace istio-system
cd ${KF_DIR}
```
# Run the Next Cell Until You See a Valid URL
Notes:
* If you see an empty `http://` endpoint, then you need to uncomment the code above, but be careful because the endpoint is public and you may be hacked!
* The endpoint will look something like this: `http://[some-long-subdomain-name].[aws-region].elb.amazonaws.com`
* Navigate to the Kubeflow Dashboard at this URL. THIS WILL TAKE A FEW MINUTES AS DNS IS EVENTUALLY CONSISTENT AND TAKES A FEW MINUTES TO PROPAGATE ACROSS THE WORLD.
* If you see a 404 in your browser, please be patient. This will take a few minutes as mentioned above.
**Once the ELB (Elastic Load Balancer) is created as shown below, the URL will appear**
- EC2 Console --> click Load Balancer on the left

```
%%bash
echo " THIS LINK MAY TAKE A FEW MINUTES TO SHOW UP. PATIENCE "
echo ""
echo "********************************************************"
echo " CLICK THE FOLLOWING LINK WHEN IT APPEARS "
echo ""
echo http://$(kubectl get ingress -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')
echo ""
echo "^^^^^^ COPY/PASTE THIS URL INTO A NEW BROWSER TAB ^^^^^^"
echo "********************************************************"
echo ""
echo "=====> FOLLOW THE INSTRUCTIONS IN NEW BROWSER TAB <====="
echo ""
echo "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^"
```

Click on `Start Setup`.
**Note: You must use the default namespace `anonymous`.**
Click `Finish` to view the dashboard.
# You should continue when you see the following Kubeflow Dashboard.

# Launch Kubeflow Notebook Server
Kubeflow Notebooks are based on Jupyter Notebooks. They are open-source web applications that allow you to create and share documents that contain live code, equations, visualizations and narrative text. They are often used for data cleaning, transformations, analysis, and visualizations. Additionally, Kubeflow and Jupyter Notebooks are used for numerical simulations, statistical modeling, machine learning, and artificial intelligence.
# Create a New Kubeflow Notebook Server

# Select the `anonymous` Namespace

This pre-populates the namespace field on the dashboard. Specify a name **notebook** for the notebook:

# Check the `Custom Image` Checkbox
# Specify Our Optimized Notebook Image:
*********************************************
```
pipelineai/kubeflow-notebook-cpu-1.13.1:2.0.0
```
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
*********************************************

*********************************************
```
pipelineai/kubeflow-notebook-cpu-1.13.1:2.0.0
```
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
*********************************************
# Change the CPU to **1.0** (Not More, Not Less)

Scroll to the bottom, accept all other defaults, and click on **LAUNCH**.

It takes a couple minutes for the first Jupyter notebook to come online. Click on **CONNECT**

This connects to the notebook and opens the notebook interface in a new browser tab.

# Launch a New Terminal in the Notebook
Click on **New**, select **Terminal**

# Clone our Repo in the Terminal
*********************************************
```
git clone https://github.com/data-science-on-aws/workshop
```
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
*********************************************

*********************************************
```
git clone https://github.com/data-science-on-aws/workshop
```
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
*********************************************
# Run the Kubeflow Notebooks
**Note:** Make sure you are in the Kubeflow Jupyter Notebook (not SageMaker Jupyter Notebook.)
Navigate to the `workshop/12_kubeflow/` directory and start running the notebooks from `01_*`.

```
%%javascript
Jupyter.notebook.save_checkpoint();
Jupyter.notebook.session.delete();
```
|
github_jupyter
|
%%bash
source ~/.bash_profile
###################################################################
# UNCOMMENT THIS OUT - PUBLIC ENDPOINT
###################################################################
cd ${KF_DIR}/.cache/manifests/manifests-1.0.2/
kubectl apply -k aws/istio-ingress/base --namespace istio-system
cd ${KF_DIR}
%%bash
echo " THIS LINK MAY TAKE A FEW MINUTES TO SHOW UP. PATIENCE "
echo ""
echo "********************************************************"
echo " CLICK THE FOLLOWING LINK WHEN IT APPEARS "
echo ""
echo http://$(kubectl get ingress -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')
echo ""
echo "^^^^^^ COPY/PASTE THIS URL INTO A NEW BROWSER TAB ^^^^^^"
echo "********************************************************"
echo ""
echo "=====> FOLLOW THE INSTRUCTIONS IN NEW BROWSER TAB <====="
echo ""
echo "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^"
pipelineai/kubeflow-notebook-cpu-1.13.1:2.0.0
pipelineai/kubeflow-notebook-cpu-1.13.1:2.0.0
git clone https://github.com/data-science-on-aws/workshop
git clone https://github.com/data-science-on-aws/workshop
%%javascript
Jupyter.notebook.save_checkpoint();
Jupyter.notebook.session.delete();
| 0.236252 | 0.712895 |
## Postprocess SWAT Simulations (2) - Runoff Change
As mentioned in the tutorial of [A Toolchain for Training Hydrological Modeling under Climate Change based on SWAT](https://www.linkedin.com/pulse/toolchain-training-hydrological-modeling-under-climate-chonghua-yin/), we know that the tool of [SWAT Output Viewer](https://swatviewer.com/) is excellent at managing multiple SWAT simulation scenarios through storing all data in SQLite databases.
Since we already know how to post-process SWAT simulation by SQLite and Pandas in this [tutorial](https://www.linkedin.com/pulse/postprocess-swat-simulations-sqlite-pandas-1-runoff-chonghua-yin/), it is very easy to calculate seasonal mean runoff changes between different SWAT scenarios, which is a common approach to assess the impacts of climate change on hydrological processes.
In this notebook, we put some code from the [previous one](https://www.linkedin.com/pulse/postprocess-swat-simulations-sqlite-pandas-1-runoff-chonghua-yin/) into functions to simplify querying SQLite databases and calculating seasonal statistics. Even so, it is still worth going through the previous notebook, which shows the basic postprocessing procedure step by step.
*It is worth noting that all data in this series are fake data and only are used to show how to post-process SWAT simulations through open source tools.*
## 1. Load all needed libraries
```
import pandas as pd
import sqlite3
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
```
## 2. Calculate seasonal mean runoff changes
To simplify reading, we put the code into a function. Moreover, we only used the RCH table.
```
def read_rch(db_name):
con = sqlite3.connect(db_name)
cursor = con.cursor()
df = pd.read_sql_query("SELECT RCH, YR, MO, FLOW_OUTcms from rch", con)
df = df.set_index(['MO'])
con.close()
return df
```
In addition, we only care about seasonal changes, so we have to convert the monthly data into seasonal means.
```
def calculate_ssnmean(df):
quarters = {1: 'DJF', 2: 'DJF', 3: 'MAM', 4: 'MAM', 5: 'MAM', 6: 'JJA',
7: 'JJA', 8: 'JJA', 9: 'SON', 10: 'SON', 11: 'SON', 12: 'DJF'}
ssndf = df.groupby(['RCH',quarters])['FLOW_OUTcms'].mean()
ssndf = ssndf.reset_index()
ssndf.set_index(['RCH'])
ssndf = ssndf.rename(index=str, columns={"level_1":"SSN"})
pivoted = ssndf.pivot(index='RCH', columns='SSN', values='FLOW_OUTcms')
return pivoted
```
### 2.1 Read Baseline runoff
```
db_name = 'data\\baseline\\result_664_monthly.db3'
df_bsl = read_rch(db_name)
df_bsl.head()
```
### 2.2 Read runoff in future
```
db_name = 'data\\future\\result_664_monthly.db3'
df_fut = read_rch(db_name)
df_fut.head()
```
### 2.3 Calculate seasonal mean runoff
```
pivoted_bsl = calculate_ssnmean(df_bsl)
pivoted_fut = calculate_ssnmean(df_fut)
print(pivoted_fut.head())
print(pivoted_bsl.head())
```
### 2.4 Calculate seasonal changes
```
pivoted_ch = (pivoted_fut - pivoted_bsl)/pivoted_bsl*100.0
pivoted_ch.head()
```
## 3. Visualize
Set some parameters to make figure pretty
```
# Plot size to 15" x 7"
matplotlib.rc('figure', figsize = (15, 7))
# Font size to 14
matplotlib.rc('font', size = 14)
# Display top and right frame lines
matplotlib.rc('axes.spines', top = True, right = True)
# Remove grid lines
matplotlib.rc('axes', grid = False)
# Set background color to white
matplotlib.rc('axes', facecolor = 'white')
ax = pivoted_ch.plot(kind='bar',
title='Seasonal Mean Runoff Change between Baseline and Future Periods')
ax.axhline(y=0, xmin=-1, xmax=1, color='k', lw=2)
ax.set_ylabel('Runoff Change (%)')
```
## References
Fernando Pérez and Brian E. Granger. IPython: A System for Interactive Scientific Computing, Computing in Science & Engineering, 9, 21-29 (2007), DOI:10.1109/MCSE.2007.53
John D. Hunter. Matplotlib: A 2D Graphics Environment, Computing in Science & Engineering, 9, 90-95 (2007), DOI:10.1109/MCSE.2007.55
Wes McKinney. Data Structures for Statistical Computing in Python, Proceedings of the 9th Python in Science Conference, 51-56 (2010)
https://www.sqlite.org/lang.html
|
github_jupyter
|
import pandas as pd
import sqlite3
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
def read_rch(db_name):
con = sqlite3.connect(db_name)
cursor = con.cursor()
df = pd.read_sql_query("SELECT RCH, YR, MO, FLOW_OUTcms from rch", con)
df = df.set_index(['MO'])
con.close()
return df
def calculate_ssnmean(df):
quarters = {1: 'DJF', 2: 'DJF', 3: 'MAM', 4: 'MAM', 5: 'MAM', 6: 'JJA',
7: 'JJA', 8: 'JJA', 9: 'SON', 10: 'SON', 11: 'SON', 12: 'DJF'}
ssndf = df.groupby(['RCH',quarters])['FLOW_OUTcms'].mean()
ssndf = ssndf.reset_index()
ssndf.set_index(['RCH'])
ssndf = ssndf.rename(index=str, columns={"level_1":"SSN"})
pivoted = ssndf.pivot(index='RCH', columns='SSN', values='FLOW_OUTcms')
return pivoted
db_name = 'data\\baseline\\result_664_monthly.db3'
df_bsl = read_rch(db_name)
df_bsl.head()
db_name = 'data\\future\\result_664_monthly.db3'
df_fut = read_rch(db_name)
df_fut.head()
pivoted_bsl = calculate_ssnmean(df_bsl)
pivoted_fut = calculate_ssnmean(df_fut)
print(pivoted_fut.head())
print(pivoted_bsl.head())
pivoted_ch = (pivoted_fut - pivoted_bsl)/pivoted_bsl*100.0
pivoted_ch.head()
# Plot size to 15" x 7"
matplotlib.rc('figure', figsize = (15, 7))
# Font size to 14
matplotlib.rc('font', size = 14)
# Display top and right frame lines
matplotlib.rc('axes.spines', top = True, right = True)
# Remove grid lines
matplotlib.rc('axes', grid = False)
# Set background color to white
matplotlib.rc('axes', facecolor = 'white')
ax = pivoted_ch.plot(kind='bar',
title='Seasonal Mean Runoff Change between Baseline and Future Periods')
ax.axhline(y=0, xmin=-1, xmax=1, color='k', lw=2)
ax.set_ylabel('Runoff Change (%)')
| 0.495606 | 0.954858 |
# Object Orientation in Python
We will cover Object Orientation based on three main topics:
* Object Orientation basics
* Implementing OO in Python
* Inheritance
## Object Orientation basics
Object-Oriented (OO) is a programming paradigm in which different "components" of your software are modeled based on real-world objects. An object is anything that has some characteristics and can perform a function.
OO basically relies on the concept of abstraction, by defining "**classes**" of **objects** at the software level, which are represented by the object's data (attributes) and "code" (the actions they can perform)
A **class** can be defined as a "blueprint" of objects
Used to make the code easier to maintain, by modularizing and creating better representations of real things.
Dummy example used to explain OO:
 
Another example of a class would be **Person**, which would have attributes like birth_date, gender, height, weight, city_of_birth, and hair_color. A **Person** can also perform some actions, like exercise (which may change their weight), sleep, work, etc.
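To make this concrete, here is a minimal sketch of such a `Person` class (the attribute and method names are just the ones listed above; the details are illustrative):
```
class Person:
    def __init__(self, birth_date, gender, height, weight, city_of_birth, hair_color):
        self.birth_date = birth_date
        self.gender = gender
        self.height = height
        self.weight = weight
        self.city_of_birth = city_of_birth
        self.hair_color = hair_color
    def exercise(self, hours):
        # exercising may change the person's weight
        self.weight -= 0.1 * hours
    def sleep(self, hours):
        pass  # placeholder action
    def work(self, hours):
        pass  # placeholder action
```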
Let's brainstorm a bit, and think about a real-world thing that we all may know...
## OO in Python
Now we will explore how to deal with Object Orientation in Python.
As a first step, we will use a well-known example when learning OO:
* designing a class to define a **point** in a plane
* what are the clear attributes that a point would have?
See below how to create a class that abstract points, with attributes representing its position
```
class Point:
x = 0
y = 0
```
**Great!**
We've just created a class point! But this is just our "blueprint" or our "template". To actually create points we need to **instantiate** objects out of this **class**
```
#let's pretend this is our main program
point1 = Point()
point2 = Point()
print(type(point1))
```
Now we have TWO points, created based on our class Point. When we do ```point1 = Point()```, we are creating a variable called point1, which is a variable typed as a Point (as per the print statement).
#### Attributes
So far, our class template only provides attributes to the point. So, what we can do is change the attributes of each point... (although, I anticipate that this is not a good practice, because we are breaking the **encapsulation** of the objects)
```
point1.x = 10.4
point1.y = 2.2
point2.x = 0
point2.y = -4.3
print("Point 1 position is: %0.2f, %0.2f. Point 2 position is: %0.2f, %0.2f"%(point1.x, point1.y, point2.x, point2.y))
```
#### Methods
Methods are the actions that map the behavior of our objects. In Python, the way to declare these actions is through `def`, similarly to the way we define functions (methods are functions that apply to the object)
See the example of adding a `translate` method to our Point class. This method translates our points by a given `x, y` units in the plane
```
class Point:
__x = 0
y = 0
def translate(self, dx, dy):
self.__x += dx # self.x = self.x + dx
self.y += dy
point1 = Point()
point1.__x= 2
print("Point 1 position is: %0.2f, %0.2f."%(point1.__x, point1.y))
point1.translate(2,2)
print("Point 1 position is: %0.2f, %0.2f."%(point1.__x, point1.y))
```
Let's see what we've done there:
* Created our method, which receives *three* parameters: ``self``, ``dx``, and ``dy``. While `dx` and `dy` are the units by which I want to translate the point, `self` is an *implicit parameter* representing the object we are dealing with. We do not need to provide any value for this parameter; it is used internally. **self must be the first parameter of any method**
* then, we change the `x` and `y` attributes that belong to the `self` object (which is a self-reference, saying: "I want to change MY values of x and y")
#### Let's code together
1. We will write the method `set_location(self, x, y)`, which is responsible for setting the position of the point without accessing the attributes directly
2. You will code a method `distance_from_zero(self)`, which returns the distance from the point to the origin of the plane (position 0,0)
3. You will code a method `distance (self, other)`, which calculates the distance from the object to `other` point received via parameter
```
###space for our exercise
import math
class Point:
x = 0
y = 0
def translate(self, dx, dy):
self.x += dx # self.x = self.x + dx
self.y += dy
def set_location(self, new_x, new_y):
self.x = new_x
self.y = new_y
def distance(self, pointB):
return math.sqrt((self.x-pointB.x)**2 + (self.y-pointB.y)**2)
def distance_from_zero(self):
return math.sqrt(self.x**2 + self.y**2)
point1 = Point()
point1.set_location(3, 4)
point2 = Point()
point2.set_location(2, 4)
print(point1.distance_from_zero())
print(point1.distance(point1))
```
#### Dummy master to test our methods
```
point1 = Point()
point1.set_location(3, 4)
print(point1.distance_from_zero())
point2 = Point()
point2.set_location(-2, -5)
print(point1.distance(point2))
print(point2.distance(point1))
print(point2.distance_from_zero())
```
#### Constructors
Nice concept in OO languages.
When you build an object, a specific method called constructor is called.
In previous examples, whenever we called `point1 = Point()`, a constructor with no parameters was called (which is the default, and does not change any attributes). We can define which kind of constructor we want to make available, with or without parameters.
In Python, this method is named `__init__`
For example, if we want to instantiate our points providing its position, we would define a `__init__` method like this:
```
class Point:
def __init__(self, x=0, y=0):
self.x = x
self.y = y
point1 = Point(3,4)
#point1= Point()
print("Point 1 position is: %0.2f, %0.2f. "%(point1.x, point1.y))
## And now I've created a point at position x=3, y=4
```
Can we define different *signatures* for the constructor, i.e., several `__init__` methods that take different parameters? Not in Python: if we define `__init__` more than once, only the last definition is kept, so we will need a different approach (default values).
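A quick illustration of this point (added here as a sketch, not part of the original notebook): if a class defines `__init__` twice, Python silently keeps only the last definition.
```
class Demo:
    def __init__(self):          # this definition is thrown away...
        self.x = 0
    def __init__(self, x):       # ...because this one replaces it
        self.x = x
try:
    Demo()                       # fails: the surviving __init__ requires x
except TypeError as err:
    print(err)
print(Demo(5).x)                 # prints 5
```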
For example, let's try to create a constructor that does not require any parameters and sets the position to 0,0.
**Let's do together**
```
###sandbox for the constructor
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
```
**Solving the question about overriding a method**
The way to allow a method to be called with a variable number of parameters in Python is to use default values (even for constructors).
So, if we want to allow both
`point1 = Point()` AND `point2 = Point(2,3)` to work, the solution is:
```
###sandbox for the constructor
class Point:
# when you add an assignment statement in the parameters it means:
# assign this value if you do not receive anything
def __init__(self, x=0, y=0):
self.x = x
self.y = y
point1 = Point()
point2 = Point (2,3)
print(point2.x)
```
#### Defining the way the object is print
We can also define what would be an output when someone wants to print our object, like (`print(point1)`). What would happen now??
```
point1 = Point(2,4)
print(point1)
```
*Not a good expression of what this point actually is, right?*
We can make it better. Defining a method called `__str__` is the way to go
```
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
# FOCUS HERE:
def __str__(self):
return("(" + str(self.x) + ", " + str(self.y) + ")")
# let's see how it goes now
point1 = Point(2,4)
print(point1)
```
#### Our complete code should come here
```
###space for our exercise
import math
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
def __str__(self):
return(str(self.x) + ", " + str(self.y))
def translate(self, dx, dy):
self.x += dx # self.x = self.x + dx
self.y += dy
def set_location(self, x, y):
self.x = x
self.y = y
    def distance_from_zero(self):
        distance = math.sqrt(self.x**2 + self.y**2)
        return distance
    def distance(self, other):
        return math.sqrt((self.x-other.x)**2 + (self.y-other.y)**2)
```
### One more example for your delight
In the following example, we have 2 classes:
* `Contact`: an abstraction of the contacts in our Contact List. We represent this class of objects as something with the attributes `name` and `email_address`, because these are important in the context of our problem scope.
* `Email`: representing the email itself, which contains recipients, sender, body, subject, a flag identifying whether the email was sent or not. It also maps a set of actions that we may use to interact with our email objects.
```
class Contact:
def __init__(self, name="", email_address=""):
self.name = name
self.email_address = email_address
def set_email_address(self, new_address):
self.email_address = new_address
def set_name(self, new_name):
self.name = new_name
def __str__ (self):
return (self.name + " <" + self.email_address + ">")
class Email:
def __init__(self):
self.is_sent = False
self.subject = ""
self.from_ = ""
self.to = []
self.cc = []
def send_email(self):
if (not self.is_sent):
#do the magic of sending the email and:
self.is_sent = True
def set_sender(self, from_):
self.from_ = from_
def add_recipient(self, contact, where="to"):
if (where == "to"):
self.to.append(contact.email_address)
elif (where == "cc"):
self.cc.append(contact.email_address)
else:
print ("please provide a valid field (cc or to)")
def set_body (self, body):
self.body = body
def set_subject (self, subject):
self.subject = subject
```
## Inheritance
Inheritance enables us to define a class that takes all the characteristics and functions from a parent class. We can then extend the parent or change specific ways that an action is performed.
Look at this simple example built upon our `Point` class
```
class Point3D(Point):
z = 0
def __init__(self, x, y, z):
self.x = x
self.y = y
self.z = z
def translate(self, dx, dy, dz):
Point.translate(self, dx, dy)
self.z += dz
point1 = Point3D(3,4,5)
print(point1.distance_from_zero())
```
Well... what we have is:
* we define the class `class Point3D (Point)`: this means that Point is the *parent* class for Point3D. Everything inside Point is part of Point3D
* so, we can change any attribute and call any function of Point when we create a Point3D object.
* we can call any parent function from the child class by mentioning the name of the parent class: `Point.translate(...)`
* Here we see the value of the `self` parameter: we can pass the object of the child class to the parent
One issue to deal with:
* if some action behaves differently in the child class (e.g.: `distance_from_zero(self)`), we need to **override** the function:
```
class Point3D(Point):
z = 0
def __init__(self, x, y, z):
Point.__init__(self, x, y)
self.z = z
def translate(self, dx, dy, dz):
Point.translate(self, dx, dy)
self.z += dz
def distance_from_zero(self):
distance = math.sqrt(self.x**2 + self.y**2 + self.z**2)
return distance
point1 = Point3D(3,4,5)
print(point1.distance_from_zero())
```
### Another example (inheritance):
```
import math
class Polygon:
lengths_of_sides = list()
def __init__(self, sides):
self.sides = sides
def print_num_sides(self):
print('This polygon has %d sides'%(self.sides))
def perimeter(self):
perimeter = sum(self.lengths_of_sides)
return perimeter
#This is a Rectangle, child of Polygon!
class Rectangle(Polygon):
#lengths of sides is a list of size 2
def __init__(self, lengths_of_sides):
Polygon.__init__(self, 4)
double_sides = []+lengths_of_sides
self.lengths_of_sides = lengths_of_sides+double_sides
print(self.lengths_of_sides)
def area(self):
#multiple assignment.
# side_1 receives lenghts_of_sides[0]
# side_2 receives lenghts_of_sides[1]
side_1 = self.lengths_of_sides[0]
side_2 = self.lengths_of_sides[1]
return side_1 * side_2
#And class Polygon has another child!!!
class Triangle(Polygon):
#lengths of sides is a list of size 3
def __init__(self, lengths_of_sides):
Polygon.__init__(self, 3)
self.lengths_of_sides = lengths_of_sides
def area(self):
a, b, c = self.lengths_of_sides
# calculate the semi-perimeter
semiperimeter = Polygon.perimeter(self) / 2
return math.sqrt((semiperimeter*(semiperimeter-a)*(semiperimeter-b)*(semiperimeter-c)))
triangle = Triangle([3, 4, 5])
print(triangle.area())
print(triangle.perimeter())
rectangle = Rectangle([2,4])
print(rectangle.area())
print(rectangle.perimeter())
print(type(rectangle))
list1 = [1,2,3]
list2 = []+list1
list1[1]=1000
print(list2)
```
### Hands on!
1. Create a class Square (inheriting...)
2. Create a way to print (`__str__`) the information about the Polygon that applies to all types of Polygons
```
###CODE HERE
```
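One possible sketch for this hands-on (added here for reference; it assumes the `Polygon` and `Rectangle` classes defined above are in scope):
```
# 1. A Square is just a Rectangle whose two side lengths are equal
class Square(Rectangle):
    def __init__(self, side):
        Rectangle.__init__(self, [side, side])
# 2. A __str__ that works for every Polygon could be added to the Polygon class itself:
#    def __str__(self):
#        return 'Polygon with %d sides of lengths %s' % (self.sides, self.lengths_of_sides)
square = Square(3)
print(square.area())       # 9
print(square.perimeter())  # 12
```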
### In Class Assignment!!! ###
**How to Turn In:** Create a Python Notebook on your repository named `InClassSept29.ipynb`
**Deadline:** Oct-01
We want to manage our movie collection. To do so, we need to write a program that helps us. You need to use OO design and follow the constraints below:
1. A `Movie` needs to have a title, genres (there may be more than one), a year, my review, a list of actors, a watch counter (how many times I watched it), borrowed (a flag - True/False - that says if this movie is currently with someone), and a borrower name.
2. We can interact with a movie by:
- watching the movie (increase the counter),
- writing a review about the movie,
- set any of the fields (except flag, borrower, and counter, which are changed by different actions)
- borrowing the movie (set the borrower and change the flag)
    - returning the movie (set borrower to "" and flag to False)
- list the details of the movie when printing it in the following format:
```
Movie: The Godfather Year: 1972
Genre: Crime, Drama
List of Actors:
Marlon Brando
Al Pacino
Robert Duvall
```
3. The list of actors, need to have objects of type `Actor`, which are composed of name, date_of_birth, and nationality. You should be able to:
- set the fields name, date_of_birth, and nationality.
***CHALLENGE:*** *change your classes to make it possible to list (from an object actor) all the movies that the actor participated.*
```
```
|
github_jupyter
|
class Point:
x = 0
y = 0
#let's pretend this is our main program
point1 = Point()
point2 = Point()
print(type(point1))
point1.x = 10.4
point1.y = 2.2
point2.x = 0
point2.y = -4.3
print("Point 1 position is: %0.2f, %0.2f. Point 2 position is: %0.2f, %0.2f"%(point1.x, point1.y, point2.x, point2.y))
class Point:
__x = 0
y = 0
def translate(self, dx, dy):
self.__x += dx # self.x = self.x + dx
self.y += dy
point1 = Point()
point1.__x= 2
print("Point 1 position is: %0.2f, %0.2f."%(point1.__x, point1.y))
point1.translate(2,2)
print("Point 1 position is: %0.2f, %0.2f."%(point1.__x, point1.y))
###space for our exercise
import math
class Point:
x = 0
y = 0
def translate(self, dx, dy):
self.x += dx # self.x = self.x + dx
self.y += dy
def set_location(self, new_x, new_y):
self.x = new_x
self.y = new_y
def distance(self, pointB):
return math.sqrt((self.x-pointB.x)**2 + (self.y-pointB.y)**2)
def distance_from_zero(self):
return math.sqrt(self.x**2 + self.y**2)
point1 = Point()
point1.set_location(3, 4)
point2 = Point()
point2.set_location(2, 4)
print(point1.distance_from_zero())
print(point1.distance(point1))
point1 = Point()
point1.set_location(3, 4)
print(point1.distance_from_zero())
point2 = Point()
point2.set_location(-2, -5)
print(point1.distance(point2))
print(point2.distance(point1))
print(point2.distance_from_zero())
class Point:
def __init__(self, x=0, y=0):
self.x = x
self.y = y
point1 = Point(3,4)
#point1= Point()
print("Point 1 position is: %0.2f, %0.2f. "%(point1.x, point1.y))
## And now I've created a point at position x=3, y=4
###sandbox for the constructor
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
###sandbox for the constructor
class Point:
# when you add an assignment statement in the parameters it means:
# assign this value if you do not receive anything
def __init__(self, x=0, y=0):
self.x = x
self.y = y
point1 = Point()
point2 = Point (2,3)
print(point2.x)
point1 = Point(2,4)
print(point1)
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
# FOCUS HERE:
def __str__(self):
return("(" + str(self.x) + ", " + str(self.y) + ")")
# let's see how it goes now
point1 = Point(2,4)
print(point1)
###space for our exercise
import math
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
def __str__(self):
return(str(self.x) + ", " + str(self.y))
def translate(self, dx, dy):
self.x += dx # self.x = self.x + dx
self.y += dy
def set_location(self, x, y):
self.x = x
self.y = y
    def distance_from_zero(self):
        distance = math.sqrt(self.x**2 + self.y**2)
        return distance
    def distance(self, other):
        return math.sqrt((self.x-other.x)**2 + (self.y-other.y)**2)
class Contact:
def __init__(self, name="", email_address=""):
self.name = name
self.email_address = email_address
def set_email_address(self, new_address):
self.email_address = new_address
def set_name(self, new_name):
self.name = new_name
def __str__ (self):
return (self.name + " <" + self.email_address + ">")
class Email:
def __init__(self):
self.is_sent = False
self.subject = ""
self.from_ = ""
self.to = []
self.cc = []
def send_email(self):
if (not self.is_sent):
#do the magic of sending the email and:
self.is_sent = True
def set_sender(self, from_):
self.from_ = from_
def add_recipient(self, contact, where="to"):
if (where == "to"):
self.to.append(contact.email_address)
elif (where == "cc"):
self.cc.append(contact.email_address)
else:
print ("please provide a valid field (cc or to)")
def set_body (self, body):
self.body = body
def set_subject (self, subject):
self.subject = subject
class Point3D(Point):
z = 0
def __init__(self, x, y, z):
self.x = x
self.y = y
self.z = z
def translate(self, dx, dy, dz):
Point.translate(self, dx, dy)
self.z += dz
point1 = Point3D(3,4,5)
print(point1.distance_from_zero())
class Point3D(Point):
z = 0
def __init__(self, x, y, z):
Point.__init__(self, x, y)
self.z = z
def translate(self, dx, dy, dz):
Point.translate(self, dx, dy)
self.z += dz
def distance_from_zero(self):
distance = math.sqrt(self.x**2 + self.y**2 + self.z**2)
return distance
point1 = Point3D(3,4,5)
print(point1.distance_from_zero())
import math
class Polygon:
lengths_of_sides = list()
def __init__(self, sides):
self.sides = sides
def print_num_sides(self):
print('This polygon has %d sides'%(self.sides))
def perimeter(self):
perimeter = sum(self.lengths_of_sides)
return perimeter
#This is a Rectangle, child of Polygon!
class Rectangle(Polygon):
#lengths of sides is a list of size 2
def __init__(self, lengths_of_sides):
Polygon.__init__(self, 4)
double_sides = []+lengths_of_sides
self.lengths_of_sides = lengths_of_sides+double_sides
print(self.lengths_of_sides)
def area(self):
#multiple assignment.
# side_1 receives lenghts_of_sides[0]
# side_2 receives lenghts_of_sides[1]
side_1 = self.lengths_of_sides[0]
side_2 = self.lengths_of_sides[1]
return side_1 * side_2
#And class Polygon has another child!!!
class Triangle(Polygon):
#lengths of sides is a list of size 3
def __init__(self, lengths_of_sides):
Polygon.__init__(self, 3)
self.lengths_of_sides = lengths_of_sides
def area(self):
a, b, c = self.lengths_of_sides
# calculate the semi-perimeter
semiperimeter = Polygon.perimeter(self) / 2
return math.sqrt((semiperimeter*(semiperimeter-a)*(semiperimeter-b)*(semiperimeter-c)))
triangle = Triangle([3, 4, 5])
print(triangle.area())
print(triangle.perimeter())
rectangle = Rectangle([2,4])
print(rectangle.area())
print(rectangle.perimeter())
print(type(rectangle))
list1 = [1,2,3]
list2 = []+list1
list1[1]=1000
print(list2)
###CODE HERE
Movie: The Godfather Year: 1972
Genre: Crime, Drama
List of Actors:
Marlon Brando
Al Pacino
Robert Duvall
```
3. The list of actors, need to have objects of type `Actor`, which are composed of name, date_of_birth, and nationality. You should be able to:
- set the fields name, date_of_birth, and nationality.
***CHALLENGE:*** *change your classes to make it possible to list (from an object actor) all the movies that the actor participated.*
| 0.510008 | 0.990533 |
<img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# Qonto - Get statement barline
<a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Qonto/Qonto_Get_statement_barline.ipynb" target="_parent"><img src="https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg"/></a>
**Tags:** #qonto #bank #statement #plotly #barline #naas_drivers
**Author:** [Florent Ravenel](https://www.linkedin.com/in/florent-ravenel/)
## Input
### Import library
```
from naas_drivers import qonto
```
### Get your Qonto credentials
<a href='https://www.notion.so/naas-official/Qonto-driver-Get-your-credentials-0cc97828b4e7467c8bfbcf704a77e5f4'>How to get your credentials?</a>
```
QONTO_USER_ID = 'YOUR_USER_ID'
QONTO_SECRET_KEY = 'YOUR_SECRET_KEY'
```
### Parameters
```
# Date to start extraction, format: "AAAA-MM-JJ", example: "2021-01-01"
date_from = None
# Date to end extraction, format: "AAAA-MM-JJ", example: "2021-01-01", default = now
date_to = None
# Title of the graph, default = "Evolution du {date_from} au {date_to}"
title = f"💵<b> Qonto - Suivi des encaissements / Décaissements</b><br>"
# Name of line displayed in legend
line_name = "Solde"
# Line color
line_color = "#1ea1f1"
# Name of cash in bar displayed in legend
cashin_name = "Encaissements"
# Cash in bar color
cashin_color = "#47dd82"
# Name of cash out bar displayed in legend
cashout_name = "Décaissements"
# Cash out bar color
cashout_color = "#ea484f"
```
## Model
### Create barline chart
```
barline = qonto.connect(QONTO_USER_ID, QONTO_SECRET_KEY).statements.barline(date_from=date_from,
date_to=date_to,
title=title,
line_name=line_name,
line_color=line_color,
cashin_name=cashin_name,
cashin_color=cashin_color,
cashout_name=cashout_name,
cashout_color=cashout_color)
```
## Output
### Display chart
```
barline
```
|
github_jupyter
|
from naas_drivers import qonto
QONTO_USER_ID = 'YOUR_USER_ID'
QONTO_SECRET_KEY = 'YOUR_SECRET_KEY'
# Date to start extraction, format: "AAAA-MM-JJ", example: "2021-01-01"
date_from = None
# Date to end extraction, format: "AAAA-MM-JJ", example: "2021-01-01", default = now
date_to = None
# Title of the graph, default = "Evolution du {date_from} au {date_to}"
title = f"💵<b> Qonto - Suivi des encaissements / Décaissements</b><br>"
# Name of line displayed in legend
line_name = "Solde"
# Line color
line_color = "#1ea1f1"
# Name of cash in bar displayed in legend
cashin_name = "Encaissements"
# Cash in bar color
cashin_color = "#47dd82"
# Name of cash out bar displayed in legend
cashout_name = "Décaissements"
# Cash out bar color
cashout_color = "#ea484f"
barline = qonto.connect(QONTO_USER_ID, QONTO_SECRET_KEY).statements.barline(date_from=date_from,
date_to=date_to,
title=title,
line_name=line_name,
line_color=line_color,
cashin_name=cashin_name,
cashin_color=cashin_color,
cashout_name=cashout_name,
cashout_color=cashout_color)
barline
| 0.390243 | 0.792665 |
## Assigning plate ids to features
To reconstruct any feature geometries, each feature must have a plate id assigned. If they don't have one already, the pygplates 'PlatePartitioner' functionality performs this task (analogous to the 'assign plate ids' menu option in the GPlates GUI).
In the first example, we partition magnetic anomaly picks from the Global Seafloor Fabric and Magnetic Lineation Database (or GSFML): http://www.soest.hawaii.edu/PT/GSFML/
Magnetic picks from this database can be downloaded in GPlates-friendly data formats - however, none of the points are associated with any particular plate tectonic reconstruction or plate id system. Hence, to be able to reconstruct these points, we need to assign the plate id to each point ourselves. This involves using the 'Static Polygons' - for an overview, look at the 'Creating Features' tutorial for GPlates: https://sites.google.com/site/gplatestutorials/
```
import pygplates
# The magnetic picks are the 'features to partition'
# Since they are already in OGR GMT format, gplates can read them directly
mag_picks = pygplates.FeatureCollection('Data/GSFML.Gaina++_2009_JGeolSoc.picks.gmt')
# static polygons are the 'partitioning features'
static_polygons = pygplates.FeatureCollection('Data/Seton_etal_ESR2012_StaticPolygons_2012.1.gpmlz')
# The partition_into_plates function requires a rotation model, since sometimes this would be
# necessary even at present day (for example to resolve topological polygons)
rotation_model=pygplates.RotationModel('Data/Seton_etal_ESR2012_2012.1.rot')
# partition features
partitioned_mag_picks = pygplates.partition_into_plates(static_polygons,
rotation_model,
mag_picks)
# Write the partitioned data set to a file
output_feature_collection = pygplates.FeatureCollection(partitioned_mag_picks)
output_feature_collection.write('/tmp/GSFML.Gaina++_2009_JGeolSoc.picks.partitioned.gmt')
```
As a second example, we take a dataset that currently could not be loaded directly into gplates (since it is not in a gplates-readable format).
The points are paleoenvironmental indicators from a map compilation by Boucot et al. (2013), downloaded from Christopher Scotese's ResearchGate page in csv format (all data are lat,long points with a series of other attributes).
There are a number of ways to deal with this data in python, here we use a pandas dataframe
```
import pandas as pd
df = pd.read_csv('Data/Boucot_etal_Map24_Paleocene_v4.csv',sep=',')
df
```
There are a few ways to assign plate ids, but the simplest generally involve putting the points into a feature collection and using the same partitioning function we used above on them.
```
# put the points into a feature collection, using Lat,Long coordinates from dataframe
point_features = []
for index,row in df.iterrows():
point = pygplates.PointOnSphere(float(row.LAT),float(row.LONG))
point_feature = pygplates.Feature()
point_feature.set_geometry(point)
point_features.append(point_feature)
# The partition points function can then be used as before
partitioned_point_features = pygplates.partition_into_plates(static_polygons,
rotation_model,
point_features)
# Reconstruct the points to 60 Ma (in the Paleocene)
#reconstructed_point_features = []
pygplates.reconstruct(partitioned_point_features,
rotation_model,
'/tmp/reconstructed_points.shp',
60.0)
coastlines_filename = 'Data/Seton_etal_ESR2012_Coastlines_2012.1_Polygon.gpmlz'
pygplates.reconstruct(coastlines_filename,
rotation_model,
'/tmp/reconstructed_coastlines.shp',
60.0)
```
Now we can plot the reconstructed points to see their distribution in the Paleocene
```
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import cartopy.io.shapereader as shpreader
import numpy as np
%matplotlib inline
# Create map
fig = plt.figure(figsize=(14,10))
ax_map = fig.add_axes([0,0,0.9,1.0], projection=ccrs.Mollweide(central_longitude=0))
# Plot the reconstructed coastlines
shp_info = shpreader.Reader('/tmp/reconstructed_coastlines.shp').geometries()
ft_coastline = cfeature.ShapelyFeature(shp_info, ccrs.PlateCarree())
ax_map.add_feature(ft_coastline, facecolor='khaki', edgecolor='0.5', linewidth=0.5, alpha=0.5)
# Plot the reconstructed points
points = list(shpreader.Reader('/tmp/reconstructed_points.shp').geometries())
ax_map.scatter([point.x for point in points], [point.y for point in points], transform=ccrs.PlateCarree(),
marker='o', color='m', s=70, zorder=2)
# Show global extent and plot
ax_map.set_global()
plt.show()
```
|
github_jupyter
|
import pygplates
# The magnetic picks are the 'features to partition'
# Since they are already in OGR GMT format, gplates can read them directly
mag_picks = pygplates.FeatureCollection('Data/GSFML.Gaina++_2009_JGeolSoc.picks.gmt')
# static polygons are the 'partitioning features'
static_polygons = pygplates.FeatureCollection('Data/Seton_etal_ESR2012_StaticPolygons_2012.1.gpmlz')
# The partition_into_plates function requires a rotation model, since sometimes this would be
# necessary even at present day (for example to resolve topological polygons)
rotation_model=pygplates.RotationModel('Data/Seton_etal_ESR2012_2012.1.rot')
# partition features
partitioned_mag_picks = pygplates.partition_into_plates(static_polygons,
rotation_model,
mag_picks)
# Write the partitioned data set to a file
output_feature_collection = pygplates.FeatureCollection(partitioned_mag_picks)
output_feature_collection.write('/tmp/GSFML.Gaina++_2009_JGeolSoc.picks.partitioned.gmt')
import pandas as pd
df = pd.read_csv('Data/Boucot_etal_Map24_Paleocene_v4.csv',sep=',')
df
# put the points into a feature collection, using Lat,Long coordinates from dataframe
point_features = []
for index,row in df.iterrows():
point = pygplates.PointOnSphere(float(row.LAT),float(row.LONG))
point_feature = pygplates.Feature()
point_feature.set_geometry(point)
point_features.append(point_feature)
# The partition points function can then be used as before
partitioned_point_features = pygplates.partition_into_plates(static_polygons,
rotation_model,
point_features)
# Reconstruct the points to 60 Ma (in the Paleocene)
#reconstructed_point_features = []
pygplates.reconstruct(partitioned_point_features,
rotation_model,
'/tmp/reconstructed_points.shp',
60.0)
coastlines_filename = 'Data/Seton_etal_ESR2012_Coastlines_2012.1_Polygon.gpmlz'
pygplates.reconstruct(coastlines_filename,
rotation_model,
'/tmp/reconstructed_coastlines.shp',
60.0)
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import cartopy.io.shapereader as shpreader
import numpy as np
%matplotlib inline
# Create map
fig = plt.figure(figsize=(14,10))
ax_map = fig.add_axes([0,0,0.9,1.0], projection=ccrs.Mollweide(central_longitude=0))
# Plot the reconstructed coastlines
shp_info = shpreader.Reader('/tmp/reconstructed_coastlines.shp').geometries()
ft_coastline = cfeature.ShapelyFeature(shp_info, ccrs.PlateCarree())
ax_map.add_feature(ft_coastline, facecolor='khaki', edgecolor='0.5', linewidth=0.5, alpha=0.5)
# Plot the reconstructed points
points = list(shpreader.Reader('/tmp/reconstructed_points.shp').geometries())
ax_map.scatter([point.x for point in points], [point.y for point in points], transform=ccrs.PlateCarree(),
marker='o', color='m', s=70, zorder=2)
# Show global extent and plot
ax_map.set_global()
plt.show()
| 0.655446 | 0.952353 |
<h1>VALLE D'AOSTA REGION</h1>
Comparison of the deaths recorded by ISTAT and the COVID-19 deaths recorded by the Italian Civil Protection (Protezione Civile) with the deaths predicted by the SARIMA model.
<h2>MONTHLY DEATHS IN THE VALLE D'AOSTA REGION (ISTAT)</h2>
The DataFrame contains the monthly deaths of the <b>Valle d'Aosta</b> region from <b>2015</b> to <b>30 September 2020</b>.
```
import matplotlib.pyplot as plt
import pandas as pd
decessi_istat = pd.read_csv('../../csv/regioni/valle_aosta.csv')
decessi_istat.head()
decessi_istat['DATA'] = pd.to_datetime(decessi_istat['DATA'])
decessi_istat.TOTALE = pd.to_numeric(decessi_istat.TOTALE)
```
<h3>Retrieving the data for the COVID-19 period</h3>
```
decessi_istat = decessi_istat[decessi_istat['DATA'] > '2020-02-29']
decessi_istat.head()
```
<h3>Creating the time series of ISTAT deaths</h3>
```
decessi_istat = decessi_istat.set_index('DATA')
decessi_istat = decessi_istat.TOTALE
decessi_istat
```
<h2>MONTHLY COVID-19 DEATHS IN THE VALLE D'AOSTA REGION</h2>
The DataFrame contains the data provided by the Civil Protection on the monthly deaths of the <b>Valle d'Aosta</b> region from <b>March 2020</b> to <b>30 September 2020</b>.
```
covid = pd.read_csv('../../csv/regioni_covid/valle_aosta.csv')
covid.head()
covid['data'] = pd.to_datetime(covid['data'])
covid.deceduti = pd.to_numeric(covid.deceduti)
covid = covid.set_index('data')
covid.head()
```
<h3>Creating the time series of COVID-19 deaths</h3>
```
covid = covid.deceduti
```
<h2>PREDICTED MONTHLY DEATHS FOR THE REGION ACCORDING TO THE SARIMA MODEL</h2>
The DataFrame contains the monthly deaths of the <b>Valle d'Aosta</b> region according to the prediction of the fitted SARIMA model.
```
predictions = pd.read_csv('../../csv/pred/predictions_SARIMA_valle_aosta.csv')
predictions.head()
predictions.rename(columns={'Unnamed: 0': 'Data', 'predicted_mean':'Totale'}, inplace=True)
predictions.head()
predictions['Data'] = pd.to_datetime(predictions['Data'])
predictions.Totale = pd.to_numeric(predictions.Totale)
```
<h3>Retrieving the data for the COVID-19 period</h3>
```
predictions = predictions[predictions['Data'] > '2020-02-29']
predictions.head()
predictions = predictions.set_index('Data')
predictions.head()
```
<h3>Creating the time series of deaths predicted by the model</h3>
```
predictions = predictions.Totale
```
<h1>CONFIDENCE INTERVALS</h1>
<h3>Upper bound</h3>
```
upper = pd.read_csv('../../csv/upper/predictions_SARIMA_valle_aosta_upper.csv')
upper.head()
upper.rename(columns={'Unnamed: 0': 'Data', 'upper TOTALE':'Totale'}, inplace=True)
upper['Data'] = pd.to_datetime(upper['Data'])
upper.Totale = pd.to_numeric(upper.Totale)
upper.head()
upper = upper[upper['Data'] > '2020-02-29']
upper = upper.set_index('Data')
upper.head()
upper = upper.Totale
```
<h3>Lower bound</h3>
```
lower = pd.read_csv('../../csv/lower/predictions_SARIMA_valle_aosta_lower.csv')
lower.head()
lower.rename(columns={'Unnamed: 0': 'Data', 'lower TOTALE':'Totale'}, inplace=True)
lower['Data'] = pd.to_datetime(lower['Data'])
lower.Totale = pd.to_numeric(lower.Totale)
lower.head()
lower = lower[lower['Data'] > '2020-02-29']
lower = lower.set_index('Data')
lower.head()
lower = lower.Totale
```
<h1>COMPARISON OF THE TIME SERIES</h1>
Below is a graphical comparison of the time series of <b>total monthly deaths</b>, <b>COVID-19 deaths</b>, and <b>deaths predicted by the SARIMA model</b> for the <b>Valle d'Aosta</b> region.
<br />
The reference months are: <b>March</b>, <b>April</b>, <b>May</b>, <b>June</b>, <b>July</b>, <b>August</b> and <b>September</b>.
```
plt.figure(figsize=(15,4))
plt.title("VALLE D'AOSTA - Confronto decessi totali, decessi causa covid e decessi del modello predittivo", size=16)
plt.plot(covid, label='decessi causa covid')
plt.plot(decessi_istat, label='decessi totali')
plt.plot(predictions, label='predizione modello')
plt.legend(prop={'size': 12})
plt.show()
plt.figure(figsize=(15,4))
plt.title("VALLE D'AOSTA - Confronto decessi totali ISTAT con decessi previsti dal modello", size=18)
plt.plot(predictions, label='predizione modello')
plt.plot(upper, label='limite massimo')
plt.plot(lower, label='limite minimo')
plt.plot(decessi_istat, label='decessi totali')
plt.legend(prop={'size': 12})
plt.show()
```
<h3>Estimating COVID-19 deaths according to the predictive model</h3>
Difference between the total deaths published by ISTAT and the deaths predicted by the SARIMA model.
```
n = decessi_istat - predictions
n_upper = decessi_istat - lower
n_lower = decessi_istat - upper
plt.figure(figsize=(15,4))
plt.title("VALLE D'AOSTA - Confronto decessi accertati covid con decessi covid previsti dal modello", size=18)
plt.plot(covid, label='decessi covid accertati - Protezione Civile')
plt.plot(n, label='decessi covid previsti - modello SARIMA')
plt.plot(n_upper, label='limite massimo - modello SARIMA')
plt.plot(n_lower, label='limite minimo - modello SARIMA')
plt.legend(prop={'size': 12})
plt.show()
```
The <b>intervals</b> correspond to the difference between the total deaths reported by ISTAT for the months from March to September 2020 and the <b>confidence interval</b> bounds (upper and lower) of the SARIMA predictive model for the same months.
```
d = decessi_istat.sum()
print("Decessi 2020:", d)
d_m = predictions.sum()
print("Decessi attesi dal modello 2020:", d_m)
d_lower = lower.sum()
print("Decessi attesi dal modello 2020 - livello mimino:", d_lower)
```
<h3>Total confirmed COVID-19 deaths for the Valle d'Aosta region</h3>
```
m = covid.sum()
print(int(m))
```
<h3>Total COVID-19 deaths predicted by the model for the Valle d'Aosta region</h3>
<h4>Mean value</h4>
```
total = n.sum()
print((total))
```
<h4>Maximum value</h4>
```
total_upper = n_upper.sum()
print((total_upper))
```
<h4>Minimum value</h4>
```
total_lower = n_lower.sum()
print(int(total_lower))
```
<h3>Estimating the number of unrecorded COVID-19 deaths according to the SARIMA predictive model for the Valle d'Aosta region</h3>
<h4>Mean value</h4>
```
x = decessi_istat - predictions - covid
x = x.sum()
print((x))
```
<h4>Maximum value</h4>
```
x_upper = decessi_istat - lower - covid
x_upper = x_upper.sum()
print((x_upper))
```
<h4>Minimum value</h4>
```
x_lower = decessi_istat - upper - covid
x_lower = x_lower.sum()
print(int(x_lower))
```
# Introduction to Data Science – Lecture 3: Basic Python II
*COMP 5360 / MATH 4100, University of Utah, http://datasciencecourse.net/*
In this lecture we'll continue to see what Python can do and learn more about data types, operators, conditions, basic data structures, and loops.
## 1. More on Data Types and Operators
We've already covered the basic data types and operators. Now we'll recap and go into some more details.
Also, make sure to check out the [complete documentation of standard types and operations](https://docs.python.org/3/library/stdtypes.html).
### Boolean
Boolean values represent truth values `True` and `False`. Booleans can be used as any other variable:
```
my_true_var = True
print (my_true_var)
my_false_var = False
print (my_false_var)
```
`True` and `False` are reserved keywords in their capitalized form.
There are three operations defined on booleans: `and`, `or`, and `not`.
| Operation | Result |
|------|------|
| `x or y` | if x is false, then y, else x |
| `x and y` | if x is false, then x, else y |
| `not x` | if x is false, then True, else False |
```
True or False
True and False
not True
not False
```
#### Comparisons
Comparisons are very important in programming: they let us decide on conditional flows, which we will discuss later. To compare two entities, Python provides eight comparison operators:
| Operation | Meaning
| - | - |
| < | strictly less than
|<= | less than or equal
|> | strictly greater than
|>= | greater than or equal
|== |equal
|!= | not equal
|is | object identity
|is not | negated object identity
These operators take two operands and return a boolean. We'll glance over the last two for now, but here are some examples of the others:
```
1 < 2
1 <= 1
14 == 14
14 != 14
"my text" == "my text "
"my text" == "my other text"
"a" > "b"
"a" < "b"
"aa" < "aba"
"aa" < "aab"
```
We see that the operations work on numbers just as we would expect.
Strings are also compared as we'd expect. The greater and less than operators use lexicographic ordering.
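One consequence that is easy to miss: the comparison is based on the characters' code points, so every uppercase letter sorts before every lowercase letter. A quick check:
```
# "Z" has code point 90, "a" has 97, so "Z" sorts first
"Z" < "a"
```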
### Numerical Data Types
Python supports three built in numerical data types, `int`, `float`, and `complex`. Since Python is dynamically typed, we don't have to define the data types explicitly!
The **int** data type is used to represent integers $\mathbb{Z}$. Python is special in the way it handles integers as it allows arbitrarily large integers, while most other programming languages reserve a certain chunk of memory for integers, which can lead to a number "overflowing". This, for example, would not work properly in C or Java:
```
2 ** 200
```
However, we can still experience overflows in Python if we work with pandas, a library we will extensively use.
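A minimal sketch of what such an overflow looks like, using NumPy's fixed-width integers (pandas builds on these); the variable name and values here are just an illustration:
```
import numpy as np
# int64 silently wraps around on overflow: 2**62 * 4 is 2**64, which wraps to 0
overflow_demo = np.array([2**62], dtype=np.int64)
print(overflow_demo * 4)
```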
Integers can be **positive, zero, or negative**, as you would expect.
The **float** data type is used to represent real numbers $\mathbb{R}$. Floats, however, cannot always be represented precisely by a computer. Take the example of $1/3$: representing $1/3$ exactly would require the computer to store the infinitely repeating digits $0.33333333333333333333....$ (if a computer used a decimal number system).
Since computers use binary numbers, also seemingly simple numbers such as 0.1 cannot be accurately represented. Check out this example:
```
.1 + .1 + .1 == .3
```
What computers do instead is store an approximation of the number in a limited chunk of memory. At the same time, Python rounds numbers when printing them:
```
1 / 10
```
This number is in fact not 0.1 but is stored in the computer as:
`0.1000000000000000055511151231257827021181583404541015625`
This representation, however, is rarely useful, hence the number is rounded.
The lesson that you should remember is that **you CANNOT compare two float numbers with the `==` operator**.
```
a = .1 + .1 + .1
b = .3
a == b
```
Instead, you can do something like this:
```
# Compare for equality up to a constant value
a < b + 0.00001 and a > b - 0.00001
```
This, of course, only compares up to the 5th digit after the decimal point.
A better way to do this is the [isclose](https://docs.python.org/3/library/math.html#math.isclose) function from the math package.
```
# this is how we import a package
import math
# here we call the isclose function that comes with the math package.
math.isclose(a, b, rel_tol=0.0000000000000000000001)
```
Here we've also used our first package, the package `math`!
Packages extend the basic functionality of python. We'll work a lot with packages in the future, details will follow.
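As a small preview of the import syntax (a sketch using only the `math` package we just saw): a package can be imported under an alias, or a single name can be imported directly.
```
# import a package under a shorter alias
import math as m
print(m.sqrt(2))
# import a single name from a package
from math import isclose
print(isclose(0.1 + 0.2, 0.3))
```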
**Type Annotations**
Python now supports [type annotations](https://docs.python.org/3/library/typing.html), but those are not enforced. They can be used by IDEs or linters to check your code.
```
# a type annotation for string
greeting: str = "Hello World"
print(greeting)
# we can still override that
greeting = 3
print(greeting)
# we can also hint at the return type of a function
def greet(name: str) -> str:
return "Hello " + name
greeting = greet("3")
print(greeting)
```
#### Numerical Operators
Here is a selection of operators and functions that work on numerical data types.
| Operation | Result
| - | - |
|`x + y` |sum of x and y
|`x - y` |difference of x and y
|`x * y` |product of x and y
|`x / y` |quotient of x and y
|`x % y` | remainder of x / y
|`-x` | x negated
|`abs(x)` | absolute value or magnitude of x
|`int(x)` | x converted to integer
|`float(x)` | x converted to floating point
|`pow(x, y)` | x to the power y
| `x ** y` | x to the power y
Most of these should be rather straight-forward.
You might not have heard of the "modulo operator" `%` which returns the remainder of a division x / y. Here is an example:
```
7.4 % 2
```
Also, remember, that many operations have a shorthand assignment version, i.e., instead of:
```
x = 2
y = 3
x = x+y
x
```
you can also write:
```
x = 2
y = 3
x += y
x
```
This works also for other operators:
```
x = 2
y = 3
x -= y
x
x = 2
y = 3
x /= y
x
x = 2
y = 3
z = 5
x **= (y * z)
x
```
## 2. Functions Recap
Functions have a name, take parameters, and can (but must not) provide a return value.
Indentation is what distinguishes the body of a function from the surrounding code.
```
def add(x, y):
result = x + y
return result
add(1,9)
```
Also, remember that variables defined inside of a function are not accessible outside of the function:
```
def scope_test():
function_scope = "only readable in here"
# Within the function, we can use the variable we have defined
print("Within function: " + function_scope)
# calling the function, which will print
scope_test()
# If we try to use the function_scope variable outside of the function, we will find that it is not defined.
# This will throw a NameError, because Python doesn't know about that variable here
print("Outside function: " + function_scope)
```
Functions can also be given **default values**.
Also, parameters can be **explicitly defined**.
```
def print_vars(a="", b="", c=""):
print(a, b, c)
# Position determines the variable assignment. Defaults are used for the second and third parameter.
print_vars("a")
# Explicit assignment of the b parameter. Defaults are used for the rest.
print_vars(b="b")
# Explicit assignment out of order. Defaults are used for the rest.
print_vars(c="CC", a="AAA")
print_vars()
```
Finally, we can also use **arbitrary length arguments**:
```
def var_args(*names):
print(names)
var_args("Devin", "Kutay", "Shaurya", "Daniel")
```
## 3. Conditions: if-elif-else statements
We've learned how to make comparisons between items and use Boolean operations. The result of these operations was usually a Boolean value.
We can now make use of these Boolean values to **steer the program flow using conditions**.
We can do that using **if-statements**. If conditions evaluate an expression for its boolean value and execute one branch of code if they are true, and, optionally, another branch if they are false:
```
def isOdd(x):
# the statement within the brackets is evaluated for truth
if x % 2 == 1:
# body, executed if true
print(str(x), "is in fact an odd number")
else:
# executed if false
print(str(x), "is an even number")
isOdd(144.5)
isOdd(13)
```
Notice the **"body" of the if statement is intended**, just as for functions.
Also note that you don't need to put paranthesis around the expression, though it's OK to do so.
This:
```python
if x == True:
```
works just as well as this:
```python
if (x == True):
```
though the first way is generally considered the more elegant one in Python. This also applies to all other control structures (`for`, `while`, etc.) that we will discuss.
You should use parentheses for grouping logic and whenever they help readability.
Here's an example of a more complex boolean expression:
```
if (True and False) or False:
print(True)
```
In addition to the explicit boolean values that we can use to test for truth, most **programming languages define a range of things to be true or false**.
By definition, **false is**:
* the Boolean value `False`,
* `0` of any numeric type,
* empty sequences or lists,
* empty strings,
* `None` values.
Everything else is considered true.
```
if 0:
print("This should never happen")
else:
print("0 is false")
undefined_var = None
if not undefined_var:
print("An undefined variable is false")
if not []:
print("An empty list is false")
if not "":
print("An empty string is false")
```
You can also **chain conditions using the `elif` statement**, which is short for else if:
```
def smallest_factors(x):
# notice the use of the negation and the use of 0 as false
if not x % 2:
print("2 is a factor of " + str(x))
elif not x % 3: # only evaluated when if was false
print("3 is a factor of " + str(x))
else: # only evaluated when both if and elif were false
print("Neither 2 nor 3 are factors of " + str(x))
smallest_factors(4)
smallest_factors(9)
smallest_factors(12)
smallest_factors(13)
```
Notice that the `elif` (or the `else`) branch is not evaluated if the `if` branch matches. A function that prints whether both, 2 and 3 is a factor could be written like this:
```
def factors(x):
# notice the use of the negation and the use of 0 as false
if not x % 2:
print("2 is a factor of " + str(x))
if not x % 3:
print("3 is a factor of " + str(x))
if (x % 2) and (x % 3):
print("Neither 2 nor 3 are factors of " + str(x))
factors(4)
factors(9)
factors(12)
factors(13)
```
## 4. Lists
Up to now we've worked only with basic data types such as booleans, numbers and strings. Now we'll take a look at a compound data type: [lists](https://docs.python.org/3/tutorial/introduction.html#lists).
**A list is a collection of items.** Another word commonly used for a list in other programming languages is an **array** (though there are differences between lists and arrays in many languages).
**Lists are created with square brackets `[]` and can be accessed via an index:**
```
beatles = ["James", "Bobby", "Naomi","Alex"]
# printing the whole array
print(beatles)
# printing the first element of that array, at index 0
print(beatles[0])
# third element, at index 2
print(beatles[1])
```
You can also address elements from the back of the list:
```
# access the last element
print(beatles[-1])
# access the one-but-last element
print(beatles[-2])
```
If we try to address an index outside of the range of an array, we get an error:
```
beatles[4]
```
Sometimes, it makes sense to pre-initialize an array of a certain size, but you don't generally have to pre-specify the size of a list in python.
```
[0] * 10
```
There is also a handy shortcut for quickly initializing lists. This uses the [`range()`](https://docs.python.org/3/library/functions.html#func-range) function, which we'll explore in more detail later.
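For example, a minimal sketch of that shortcut:
```
# a list of the numbers 0 through 9
list(range(10))
```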
We can also create **slices of an array with the slice operator `:`**
```python
a[start:end] # items start through end-1
a[start:] # items start through the rest of the array
a[:end] # items from the beginning through end-1
a[:] # a copy of the whole array
```
There is also the step value, which can be used with any of the above:
```python
a[start:end:step] # start through not past end, by step
```
The slice operations return a new array, the original array is untouched.
See [this post](http://stackoverflow.com/questions/509211/explain-pythons-slice-notation) for a good explanation on slicing.
```
# Get the slice from 0 (included) to 2 (excluded)
beatles[:2] # this can also be written as [0:2]
# Slice from index 2 (3rd element) to end
beatles[2:]
# A copy of the array
beatles[:]
```
Slicing outside of a defined range returns an empty list:
```
beatles[4:9]
```
Strings can be treated similar to arrays with respect to indexing and slicing:
```
paul = "Paul McCartney"
paul[0:4]
```
Lists (in contrast to strings) are mutable.
That means **we can change the elements that are contained in a list**:
```
beatles[1] = "JohnYoko"
beatles
```
This does not work with strings, strings are immutable:
```
# This will return an error
paul[1] = "o"
```
Arrays can also be **extended with the `append()` function**:
```
beatles.append("4-George Martin")
beatles
```
Lists can be **concatenated**:
```
zeppelin = ["Jimmy", "Robert", "John", "John"]
supergroup = beatles + zeppelin
supergroup
```
We can **check the length** of a list using the built-in [`len()`](https://docs.python.org/3.3/library/functions.html#len) function:
```
len(zeppelin)
```
Lists can also be **nested**:
```
bands = [beatles, zeppelin]
bands
len(bands[0])
```
In fact, lists can hold a mix of data types, although that is something you typically shouldn't do:
```
bad_bands = bands + [1, 0.3, 17, "This is bad"]
# this list contains lists, integers, floats and strings
bad_bands
```
## 4.1 NumPy Lists
We will frequently use [NumPy](https://numpy.org/) arrays instead of regular Python lists. NumPy provides data structures and operations that are suitable for scientific computing, especially with regards to performance. A lot of data science libraries also expect a NumPy array or return one.
Here's a simple NumPy array. We can do slicing etc just like on regular arrays.
```
import numpy as np
my_array = np.array([1,2,3,4,5])
print(my_array[1])
print(my_array[-1])
print(my_array[1:3])
# Notice that the data type is different from a regular python data type
print(my_array.dtype.name)
```
NumPy arrays have a lot of additional functionality, which we will introduce as needed. One significant difference from regular Python lists is that a NumPy array has to be of a single data type.
```
# trying to set up a hybrid array; that would be OK in python lists.
my_hybrid_array = np.array([1,"test",3,4,5])
# We see that the elements are up-casted to the most inclusive data type, a string.
print(my_hybrid_array)
print(type(my_hybrid_array[-1]))
print(my_hybrid_array.dtype.name)
```
## 5. Loops
So far we have learned about two ways to control the flow of a program: functions and if-statements. Now we'll look at another important control structure: loops.
Like an if statement, a loop has a condition, and as long as that condition is true, it will continue to re-execute its body.
There are two types of loops. **For** loops and **while** loops.
### 5.1 While loops
While loops use the `while` keyword, a condition, and the loop body:
```
a = 1
# print the numbers 1-99
while a < 100:
# end is a parameter of print that defines how the string to be printed ends.
# By default, a newline \n is appended, which we overwrite here
print("$" + str(a), end=", ")
a += 1
```
What happens here? The `while` keyword indicates that this is a loop, which is followed by the **terminating condition `a < 100`**. As long as that condition is true, the loop's body will be called again and again and again ...
Once the terminating condition evaluates to false, the code in the loop body will be skipped and the flow of execution continues below the loop.
You might rightly guess that it's easy to write loops that don't terminate. Here is one example:
```python
while True:
print "Stuck"
```
This program would be stuck in the loop forever (or until you terminate it by interrupting your kernel, your computer goes off, etc.) It is hence important to take care that loops actually reach a terminating condition, and it's not always as obvious as in the previous example that this is not the case.
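For example, the following sketch never terminates, because `a` jumps from 99 straight to 102 and never equals 100:
```python
a = 0
while a != 100:
    a += 3
```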
But we could also **use the `break` statement to terminate a loop**:
```
a = 1
while True:
print(a, end=", ")
a += 1
if (a > 100):
break
```
Here, we've moved the check of the condition into an if-statement, and break when the if-statement is executed.
Similar to the `break` statement, there is also a `continue` statement, that ends evaluation of the loop body and goes back to the start of the loop in the next cycle:
```
a = 0
while a < 100:
a +=1;
# throw brackets around all numbers divisible by 3
if (not a % 3):
print(f"[{a}]", end=", ")
continue # the next line isn't executed because the flow goes back to the beginning of the loop
print(a, end=", ")
```
Here we've also introduced a [Format String](https://docs.python.org/3/library/string.html?highlight=f%20string#format-string-syntax), which is convenient for creating strings that are a mix of variables and other text.
A format string begins with an `f` before the quotes. Variables are specified in curly brackets `{}`.
```
name = "James Holden"
print(f"My name is {name}")
```
### 5.2 For loops
The most common use for for-loops in Python is to iterate over items of a sequence. Most other programming languages use for loops to iterate over a fixed number of indices.
It uses the following syntax:
```python
for variable in sequence:
#body
```
The variable is then accessible within the body of the loop.
Here is an example:
```
for member in zeppelin:
print(member)
```
Of course, that works with arbitrary **slices of lists**, as these are just lists themselves:
```
for member in zeppelin[:2]:
print(member)
```
We can iterate over **nested lists** with nested for loops:
```
for band in bands:
print("Band Members: ")
print("-------------")
for member in band:
print(member)
print()
```
When you want to iterate over a sequence of numbers, use the [`range()`](https://docs.python.org/3/library/stdtypes.html#range) function. Ranges are rules that you can use to generate a sequence of numbers. Here's how you could define a range rule for a range from 0-5.
```
range(5)
```
Ranges by themselves are iterable, so they can be used, e.g., for looping.
```
for i in range(10):
print (i)
```
But we can also create a new list with the output of the range function:
```
list(range(5))
```
The range function also takes other parameters, specifically a "start", "stop" and a "step-size" parameter.
```
# start at 0, stop at index 10, two steps
list(range(0, 10, 2))
for i in range (0, -20, -3):
print(i)
```
## 6. Recursion
**To understand recursion, you must first understand recursion.**

Another way to control program flow is recursion.
**Recursion is a function that calls itself, until it doesn't.**
The first part of that sentence explains the self-referencing nature of recursion, the second part indicates that it – just like a loop – needs a terminating condition.
Here is an example for printing the numbers 0-10:
```
def printNumber(current, limit):
print(current)
if current < limit:
printNumber(current + 1, limit)
printNumber(0, 10)
```
We have implemented looping / iteration behavior without actually using a loop! However, recursion can be used for more than just loops; it is very well suited, for example, to operate on trees and graphs.
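As a small taste of that, here is a sketch (the `nested_sum` name is ours, not part of the lecture) that recursively sums a nested list, which is essentially a tree of numbers:
```
def nested_sum(items):
    """Recursively sum all numbers in a (possibly nested) list."""
    total = 0
    for item in items:
        if isinstance(item, list):
            # recursive call for the sub-list
            total += nested_sum(item)
        else:
            total += item
    return total

nested_sum([1, [2, 3], [4, [5, 6]]])
```
Back to the number-printing example: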
We can print these numbers in reverse just by moving the print statement to after the function call. Think about why that is.
```
def printNumberReverse(current, limit):
if current < limit:
printNumberReverse(current + 1, limit)
print(current)
printNumberReverse(0, 10)
def printCallStack(current, limit):
print(f"Depth before recursive call: {current}")
if current < limit:
printCallStack(current + 1, limit)
print(f"Returning at depth {current}")
# we don't need this; it's implicit, but to illustrate the return it's here
return
printCallStack(0, 10)
```
We can also use return values in recursive functions. In the following, the recursive call is in the return statement. Here, the evaluation stack goes all the way to 10, after which the return doesn't contain another recursive call, terminating the recursion. Then all the functions return in the order in which they were called and build the string:
```
def getNumberString(current, limit):
if current <= limit:
return f"{current}, {getNumberString(current+1, limit)}"
return ""
getNumberString(0, 10)
```
## 7. List Comprehension
Now that we know about loops, we can also take a look at [list comprehension](https://docs.python.org/3.5/tutorial/datastructures.html#list-comprehensions).
List comprehension can be used to initialize and transform arrays.
A list comprehension consists of **brackets**, an **expression** applied to every element of the future list, and a **for clause**.
```python
[expression for element in list]
```
The expression can be a variable, an operation, or a function. Let's start with variables.
```
# _ is customary for a variable name if you don't need it
[0 for _ in range(10)]
["John" for _ in range(10)]
# we can also make use of values we iterate over
[i for i in range(10)]
```
We can use functions for our expressions in place of a variable. Here we initialize an array of random numbers in the unit interval:
```
import random
rands = [random.random() for _ in range(10)]
rands
```
You can also use list comprehension to create a list based on another list:
```
[x*10 for x in rands]
```
# WeatherPy
----
#### Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
import datetime
import scipy.stats as st
from scipy.stats import linregress
# Import API key
import api_keys
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
print (citipy)
```
## Generate Cities List
```
#Print today's date
today = f"{datetime.datetime.now():%m/%d/%y}"
print (today)
# List for holding lat_Lngs and cities
lat_lngs = []
cities = []
# Creates a set of random lat and lng combinations
lats = np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs = np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat & lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
    # If the city name is not already in the list, add it
if city not in cities:
cities.append(city)
# Print city count to confirm sufficient count
len(cities)
```
### Perform API Calls
* Perform a weather check on each city using a series of successive API calls.
* Include a print log of each city as it's being processed (with the city number and city name).
```
url = "http://api.openweathermap.org/data/2.5/weather?"
units = "imperial"
# Build partial query URL
query_url = f"{url}appid={weather_api_key}&units={units}&q="
# create lists that hold response information
City = []
Cloudiness = []
Country = []
Date = []
Humidity = []
Lat = []
Lng = []
Max_Temp = []
Wind_Speed = []
# Loop through the list of cities and execute a request for data on each city item
print('Beginning Data Retrieval')
print('_________________________')
i=0
for city in cities:
#print(f"query_url is : {query_url}")
response = requests.get(query_url + city).json()
#print(f"response is : {response}")
cod = response['cod']
if cod == 200:
i = i + 1
City.append(response['name'])
Cloudiness.append(response['clouds']['all'])
Country.append(response['sys']['country'])
Date.append(response['dt'])
Humidity.append(response['main']['humidity'])
Lat.append(response['coord']['lat'])
Lng.append(response['coord']['lon'])
Max_Temp.append(response['main']['temp_max'])
Wind_Speed.append(response['wind']['speed'])
print(f'Processing Record {i} of Set 1 | {city}')
else:
print(f'City not found. Skipping...')
print(f'______________________________')
print(f'Data Retrieval Complete ')
print(f'______________________________')
```
### Convert Raw Data to DataFrame
* Export the city data into a .csv.
* Display the DataFrame
```
weather_dict = pd.DataFrame({
"City": City,
"Cloudiness": Cloudiness,
"Country": Country,
"Date": Date,
"Humidity": Humidity,
"Lat": Lat,
"Lng": Lng,
"Max Temp": Max_Temp,
"Wind Speed": Wind_Speed})
weather_data = pd.DataFrame(weather_dict)
weather_data.to_csv('WeatherPy_data.csv')
# print the length of each list
print(f'City {len(City)}')
print(f'Cloudiness {len(Cloudiness)}')
print(f'Country {len(Country)}')
print(f'Date {len(Date)}')
print(f'Humidity {len(Humidity)}')
print(f'Lat {len(Lat)}')
print(f'Lng {len(Lng)}')
print(f'Max Temp {len(Max_Temp)}')
print(f'Wind Speed {len(Wind_Speed)}')
weather_data.head()
```
## Plotting the Data
* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
* Save the plotted figures as .pngs.
## Latitude vs. Temperature Plot
```
#plot Latitude vs Max_Temperature
plt.title(f"City Latitude vs. Max Temperature ({today})")
plt.xlabel("Latitude")
plt.ylabel("Max Temperature (F)")
plt.scatter(Lat, Max_Temp, marker="o", alpha=.75, color = "red",edgecolor = "black")
plt.grid()
plt.show()
```
## Latitude vs. Humidity Plot
```
#plot Latitude vs Humidity
plt.title(f"City Latitude vs. Humidity ({today})")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
plt.scatter(Lat, Humidity, marker="o", alpha=.75, color = "orange",edgecolor = "black")
plt.grid()
plt.show()
```
## Latitude vs. Cloudiness Plot
```
#plot Latitude vs Cloudiness
plt.title(f"City Latitude vs. Cloudiness ({today})")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
plt.scatter(Lat, Cloudiness, marker="o", alpha=.75, color = "blue",edgecolor = "black")
plt.grid()
plt.show()
```
## Latitude vs. Wind Speed Plot
```
#plot Latitude vs Wind Speed
plt.title(f"City Latitude vs. Wind Speed ({today})")
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
plt.scatter(Lat, Wind_Speed, alpha=.75, color = "green",edgecolor = "black")
plt.grid()
plt.show()
```
## Linear Regression
#### Northern Hemisphere - Max Temp vs. Latitude Linear Regression
```
# Define the function that creates a linear Regression and Scatter plot
def linear_regression(x,y):
print(f"The r-squared is : {round(st.pearsonr(x, y)[0],2)}")
(slope, intercept, rvalue, pvalue, stderr) = linregress(x, y)
regress_values = x * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x, y)
plt.plot(x,regress_values,"r-")
return line_eq
# Define a function for annotating
def annotate(line_eq, a, b):
plt.annotate(line_eq,(a,b),fontsize=15,color="red")
# Create Northern and Southern Hemisphere Dataframes
northern_hemisphere = weather_dict.loc[weather_dict["Lat"] >= 0]
southern_hemisphere = weather_dict.loc[weather_dict["Lat"] < 0]
#Define linear regression equation
equation = linear_regression(northern_hemisphere["Lat"], northern_hemisphere["Max Temp"])
# Annotate equation
annotate(equation, 0, 0)
# Set plot title
plt.title("Northern Hemisphere - Max Temp vs. Latitude Linear Regression")
# Set the x-label
plt.xlabel("Latitude")
# Set the y-label
plt.ylabel("Max Temp (F)")
#Save image of the figure into Images folder
plt.savefig("../Instructions/Images/Northern Hemisphere - Max Temp vs. Latitude Linear Regression.png")
```
#### Southern Hemisphere - Max Temp vs. Latitude Linear Regression
```
#Define linear regression equation
equation = linear_regression(southern_hemisphere["Lat"],southern_hemisphere["Max Temp"])
# Annotate equation
annotate(equation, -30, 50)
# Set plot title
plt.title("Southern Hemisphere - Max Temp vs. Latitude Linear Regression")
# Set the x-label
plt.xlabel("Latitude")
# Set the y-label
plt.ylabel("Max Temp (F)")
#Save image of the figure into Images folder
plt.savefig("../Instructions/Images/Southern Hemisphere - Max Temp vs. Latitude Linear Regression.png")
```
#### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
```
#Define linear regression equation
equation = linear_regression(northern_hemisphere["Lat"], northern_hemisphere["Humidity"])
# Annotate equation
annotate(equation, 40, 15)
# Set plot title
plt.title("Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression")
# Set the x-label
plt.xlabel("Latitude")
# Set the y-label
plt.ylabel("Humidity (%)")
#Save image of the figure into Images folder
plt.savefig("../Instructions/Images/Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression.png")
```
#### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
```
#Define linear regression equation
equation = linear_regression(southern_hemisphere["Lat"], southern_hemisphere["Humidity"])
# Annotate equation
annotate(equation, -40, 50)
# Set plot title
plt.title("Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression")
# Set the x-label
plt.xlabel("Latitude")
# Set the y-label
plt.ylabel("Humidity (%)")
#Save image of the figure into Images folder
plt.savefig("../Instructions/Images/Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression.png")
```
#### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
```
#Define linear regression equation
equation = linear_regression(northern_hemisphere["Lat"], northern_hemisphere["Cloudiness"])
# Annotate equation
annotate(equation, 30, 40)
# Set plot title
plt.title("Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression")
# Set xlabel
plt.xlabel("Latitude")
# Set ylabel
plt.ylabel("Cloudiness (%)")
#Save image of the figure into Images folder
plt.savefig("../Instructions/Images/Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression.png")
```
#### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
```
#Define linear regression equation
equation = linear_regression(southern_hemisphere["Lat"], southern_hemisphere["Cloudiness"])
# Annotate equation
annotate(equation, -30, 40)
# Set plot title
plt.title("Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression")
# Set the x-label
plt.xlabel("Latitude")
# Set the y-label
plt.ylabel("Cloudiness (%)")
#Save image of the figure into Images folder
plt.savefig("../Instructions/Images/Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression.png")
```
#### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
```
#Define linear regression equation
equation = linear_regression(northern_hemisphere["Lat"], northern_hemisphere["Wind Speed"])
# Annotate equation
annotate(equation, 40, 20)
# Set plot title
plt.title("Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression")
# Set the x-label
plt.xlabel("Latitude")
# Set the y-label
plt.ylabel("Wind Speed (mph)")
#Save image of the figure into Images folder
plt.savefig("../Instructions/Images/Northern Hemisphere - Wind Speed vs. Latitude Linear Regression.png")
```
#### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
```
#Define linear regression equation
equation = linear_regression(southern_hemisphere["Lat"], southern_hemisphere["Wind Speed"])
# Annotate equation
annotate(equation, -30, 15)
# Set plot title
plt.title("Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression")
# Set the x-label
plt.xlabel("Latitude")
# Set the y-label
plt.ylabel("Wind Speed (mph)")
#Save image of the figure into Images folder
plt.savefig("../Instructions/Images/Southern Hemisphere - Wind Speed vs. Latitude Linear Regression.png")
```
# Introduction
This notebook provides a quick learning resource for manipulating image data with OpenCV, following the Basics and Advanced sections of this YouTube video: https://www.youtube.com/watch?v=oXlwWbU8l2o
Original code for the course can be found at Jason Dsouza's [github profile](https://github.com/jasmcaus/opencv-course).
Images of the Sun are used as examples; they can be downloaded for a given year, e.g. 2021, using a link similar to the following: https://spaceweather.com/images2021/. Of course, any image will do; just place the image file(s) in a folder on your file system, and provide the appropriate data_dir and image_files string values below. It is helpful to have several different images to test with.
## References
- https://pypi.org/project/opencv-python/
- https://theailearner.com/2018/10/15/creating-video-from-images-using-opencv-python/
- https://forums.developer.nvidia.com/t/python-what-is-the-four-characters-fourcc-code-for-mp4-encoding-on-tx2/57701/6
- https://www.pyimagesearch.com/2015/04/06/zero-parameter-automatic-canny-edge-detection-with-python-and-opencv/
- https://docs.opencv.org/4.5.1/d9/d8b/tutorial_py_contours_hierarchy.html
```
# Import packages
import cv2 as cv
import matplotlib.pyplot as plt
import numpy as np
import os
from opencv_tools import load_frame_gray, resize_frame, translate_frame, rotate_frame, flip_frame, print_frame_info, show_frame
from opencv_tools import plot_frame_histogram
%matplotlib inline
# Set parameters
# data_dir: a string containing the directory where image files are located
# image_files: a list containing strings, with each string specifying the name of an image file, including extension
# image_index: an integer that specifies which image_file value to analyze in the code blocks below
# Note that os.listdir or glob could also be used to obtain a list of all files in data_dir
data_dir = "/home/fdpearce/Documents/Projects/data/Images/Sun/"
image_files = ["27aug21_hmi4096_blank.jpg", "28aug21_hmi4096_blank.jpg", "29aug21_hmi4096_blank.jpg", "30aug21_hmi4096_blank.jpg", \
"31aug21_hmi4096_blank.jpg", "01sep21_hmi4096_blank.jpg", "02sep21_hmi4096_blank.jpg", "03sep21_hmi4096_blank.jpg", \
"04sep21_hmi4096_blank.jpg", "05sep21_hmi4096_blank.jpg"]
image_index = 0
```
## Basics
### 1. Read and Resize an Image/Video Frame
```
# Read in an example image to use for testing
img_path = os.path.join(data_dir, image_files[image_index])
img = load_frame_gray(img_path)
img_orig = img.copy()
resize_scale = .25
img_resize = resize_frame(img, scale=resize_scale)
show_frame(img_resize, "Resized Sun Image")
print_frame_info(img, "Original")
print_frame_info(img_resize, "Resized")
```
### 2. Draw Circle on Image/Video Frame
```
# Distance dimensions are in pixels (i.e. int)
circle_size = (img_resize.shape[1]//2, img_resize.shape[0]//2)
circle_radius = 472
circle_color = (0, 0, 255)
circle_thickness = 2
img_circle = cv.circle(img_resize.copy(), circle_size, circle_radius, circle_color, thickness=circle_thickness)
show_frame(img_circle, "Resized Sun Image w/ Circle")
```
### 3. Draw Text on Image/Video Frame
```
text_str = "Circle Center"
text_loc = (img_resize.shape[1]//2, img_resize.shape[0]//2)
text_font = cv.FONT_HERSHEY_TRIPLEX
text_scale = 1
text_color = (255, 0, 0)
text_thickness = 2
img_text = cv.putText(img_circle.copy(), text_str, text_loc, text_font, text_scale, text_color, text_thickness)
show_frame(img_text, "Resized Sun Image w/ Circle + Text")
```
### 4. Convert to Grayscale
```
# Use the raw, resized sun image for subsequent analysis, e.g. edge detection, contours, etc
gray = cv.cvtColor(img_resize.copy(), cv.COLOR_BGR2GRAY)
show_frame(gray, "Resized Sun Image w/ Circle + Text in Grayscale")
```
### 5. Blur an Image
```
blur_kernal_size = (3, 3)
blur_border = cv.BORDER_DEFAULT
blur = cv.GaussianBlur(gray.copy(), blur_kernal_size, blur_border)
show_frame(blur, "Blurred, Resized Sun Image w/ Circle + Text in Grayscale")
```
### 6. Find Edges in an Image
```
threshold1 = 125
threshold2 = 175
canny = cv.Canny(blur, threshold1, threshold2)
show_frame(canny, "Blurred, Resized Sun Image Edges w/ Circle + Text in Grayscale")
```
### 7. Dilate an Image
```
dilated_kernal_size = (3, 3)
dilated_iterations = 1
dilated = cv.dilate(canny, dilated_kernal_size, iterations=dilated_iterations)
show_frame(dilated, "Dilated, Resized Sun Image Edges w/ Circle + Text in Grayscale")
```
### 8. Eroding an Image
```
eroded_kernal_size = (3, 3)
eroded_iterations = 1
eroded = cv.erode(dilated, eroded_kernal_size, iterations=eroded_iterations)
show_frame(eroded, "Eroded, Resized Sun Image Edges w/ Circle + Text in Grayscale")
```
### 9. Resize an Image
```
resized_size = (500, 500)
resized_interp = cv.INTER_AREA
resized = cv.resize(img_orig, resized_size, interpolation=resized_interp)
show_frame(resized, "Resized Sun Image")
```
### 10. Cropping an Image
```
cropped_row_indices = (50, 200)
cropped_col_indices = (200, 400)
cropped = resized[cropped_row_indices[0]:cropped_row_indices[1], cropped_col_indices[0]:cropped_col_indices[1]]
show_frame(cropped, "Cropped Sun Image")
```
### 11. Translating an Image
```
translated_x = 100
translated_y = 50
translated = translate_frame(resized.copy(), translated_x, translated_y)
show_frame(translated, "Translated Sun Image")
```
### 12. Rotating an Image
```
rotated_angle = 45
rotated = rotate_frame(translated.copy(), rotated_angle)
show_frame(rotated, "Rotated, Translated Sun Image")
```
### 13. Flip an Image
```
flipped_code = 0
flipped = flip_frame(resized.copy(), flipped_code)
show_frame(flipped, "Flipped Sun Image")
```
### 14. Find Contours using Canny Edges
```
contour_output = cv.RETR_LIST
contour_method = cv.CHAIN_APPROX_NONE
contours_canny, hierarchies_canny = cv.findContours(canny, contour_output, contour_method)
print(f"{len(contours_canny)} Contour(s) found")
contours_canny[0:2]
hierarchies_canny[0, :5, :]
```
### 15. Thresholding an Image
```
thresh_cutoff = 125
thresh_color = 255
thresh_type = cv.THRESH_BINARY
ret, thresh = cv.threshold(gray, thresh_cutoff, thresh_color, thresh_type)
show_frame(thresh, "Thresholded Sun Image")
```
### 16. Find Contours using Thresholded Image
```
contour_output = cv.RETR_LIST
contour_method = cv.CHAIN_APPROX_NONE
contours_thresh, hierarchies_thresh = cv.findContours(thresh, contour_output, contour_method)
print(f"{len(contours_thresh)} Contour(s) found")
contours_thresh[0]
hierarchies_thresh[0, :5, :]
```
### 17. Display Contours
```
drawcont_color = (0, 255, 0)
drawcont_thickness = 2
img_cont = cv.drawContours(img_resize.copy(), contours_canny, -1, drawcont_color, drawcont_thickness)
show_frame(img_cont, "Contours of Sun Image")
img_cont.shape
```
## Advanced
### 18. Changing the ColorSpace of an Image
```
hsv = cv.cvtColor(img_resize.copy(), cv.COLOR_BGR2HSV)
show_frame(hsv, "Sun Image in HSV")
lab = cv.cvtColor(img_resize.copy(), cv.COLOR_BGR2LAB)
show_frame(lab, "Sun Image in LAB")
```
### 19. Split an Image into its Color Channels
```
b, g, r = cv.split(img_resize.copy())
show_frame(b, "Blue Channel of Sun Image")
show_frame(g, "Green Channel of Sun Image")
show_frame(r, "Red Channel of Sun Image")
# Merge channels back together
bgr = cv.merge([b, g, r])
show_frame(bgr, "Merged Sun Image")
```
### 20. Blurring an Image (cont)
```
avg_kernal = (3, 3)
avg = cv.blur(img_resize.copy(), avg_kernal)
show_frame(avg, "Blurred (Avg) Sun Image")
med_kernal_size = 3
med = cv.medianBlur(img_resize.copy(), med_kernal_size)
show_frame(med, "Blurred (Median) Sun Image")
bilat_diam = 5
bilat_color = 15
bilat_space = 15
bilateral = cv.bilateralFilter(img_resize.copy(), bilat_diam, bilat_color, bilat_space)
show_frame(bilateral, "Blurred (Bilateral) Sun Image")
```
### 21. Bitwise Operations
```
blank = np.zeros((400, 400), dtype='uint8')
rectange = cv.rectangle(blank.copy(), (30, 30), (370, 370), 255, -1)
circle = cv.circle(blank.copy(), (200, 200), 200, 255, -1)
bitwise_and = cv.bitwise_and(rectange, circle)
show_frame(bitwise_and, "Bitwise AND")
bitwise_or = cv.bitwise_or(rectange, circle)
show_frame(bitwise_or, "Bitwise OR")
bitwise_xor = cv.bitwise_xor(rectange, circle)
show_frame(bitwise_xor, "Bitwise XOR")
bitwise_not = cv.bitwise_not(rectange)
show_frame(bitwise_not, "Bitwise NOT")
```
### 22. Masking an Image
```
mask_circle_radius = 50
masked_circle_center_x = 130
masked_circle_center_y = -50
mask_blank = np.zeros(gray.shape[0:2], dtype='uint8')
mask_circle = cv.circle(mask_blank.copy(), (mask_blank.shape[1]//2+masked_circle_center_x, mask_blank.shape[0]//2+masked_circle_center_y), mask_circle_radius, 255, thickness=-1)
mask_gray = cv.bitwise_and(gray, gray, mask=mask_circle)
show_frame(mask_gray, "Masked Sun Image")
mask_bgr = cv.bitwise_and(img_resize, img_resize, mask=mask_circle)
show_frame(mask_bgr, "Masked Sun Image")
```
### 23. Computing Histograms of Image Pixel Values
```
# Histogram for Grayscale Image
plot_frame_histogram({'images': [gray], 'mask': mask_circle})
plot_frame_histogram({'images': [img_resize], 'channels': [0, 1, 2], 'mask': mask_circle}, {'channel_colors': ['b', 'g', 'r']})
```
### 24. Adaptive Thresholding of an Image
```
athresh_maxval = 255
athresh_adamethod = cv.ADAPTIVE_THRESH_MEAN_C
#athresh_adamethod = cv.ADAPTIVE_THRESH_GAUSSIAN_C
athresh_thrmethod = cv.THRESH_BINARY
athresh_blocksize = 11
athresh_c = 0
adaptive_thresh = cv.adaptiveThreshold(gray, athresh_maxval, athresh_adamethod, athresh_thrmethod, athresh_blocksize, athresh_c)
show_frame(adaptive_thresh, "Adaptive Thresholded Sun Image")
```
### 25. Edge Detection in an Image (cont)
```
# Laplacian
lap_ddepth = cv.CV_64F
lap = cv.Laplacian(gray, lap_ddepth)
lap = np.uint8(np.absolute(lap))
show_frame(lap, "Laplacian of Sun Image")
# Sobel
sob_ddep = cv.CV_64F
sobelx = cv.Sobel(gray, sob_ddep, 1, 0)
sobely = cv.Sobel(gray, sob_ddep, 0, 1)
show_frame(sobelx, "Sobel X of Sun Image")
show_frame(sobely, "Sobel Y of Sun Image")
combined_sobel = cv.bitwise_or(sobelx, sobely)
show_frame(combined_sobel, "Combined Sobel of Sun Image")
```
# Analyzing QUBICC Data
**For Figure 1 of the paper**
```
import os
import sys
import xarray as xr
import numpy as np
import pandas as pd
import importlib
import matplotlib
import matplotlib.pyplot as plt
# For psyplot
import psyplot.project as psy
import matplotlib as mpl
# %matplotlib inline
# %config InlineBackend.close_figures = False
psy.rcParams['plotter.maps.xgrid'] = False
psy.rcParams['plotter.maps.ygrid'] = False
mpl.rcParams['figure.figsize'] = [10., 8.]
# path = '/pf/b/b309170/my_work/QUBICC/data_var_vertinterp/cl/'
# file = 'int_var_hc2_02_p1m_cl_ml_20041110T010000Z.nc'
path = '/pf/b/b309170/my_work/QUBICC/'
file_cg = 'data_hor_interp/hc2_02_p1m_cl_ml_20041105T150000Z.nc'
file_orig = 'some_orig_data/hc2_02_p1m_cl_ml_20041105T150000Z.nc'
```
#### Question 1: Does the coarse-graining look right?
For the horizontal coarse-graining (psyplot): <br>
If you get the error 'ValueError: Can only plot 2-dimensional data!', then you need to use cdo setgrid on the file first.
**'height' 40 is layer 41 counting from 1 to 91**
```
# # Note that the cloud cover scheme used was a 0-1 cloud cover scheme.
# maps = psy.plot.mapplot(os.path.join(path, file_orig), dims = {'name': 'ccl', 'height': 40},
# projection='robin', cmap='Blues_r', title='Cloud cover on 20041105 at 15:00 (on layer 40)')
# plt.savefig('original_cloud_cover_snapshot.pdf')
# Note that the cloud cover scheme used was a 0-1 cloud cover scheme.
maps = psy.plot.mapplot(os.path.join(path, file_orig), dims = {'name': 'ccl', 'height': 40}, cticksize=34,
projection='robin', cmap='Blues_r')
plt.savefig('original_cloud_cover_snapshot_untitled.pdf')
# maps = psy.plot.mapplot(os.path.join(path, file_cg), dims = {'name': 'cl', 'height': 40},
# projection='robin', cmap='Blues_r', title='Horizontally interpolated cloud cover on 20041105 at 15:00 (on layer 40)')
# plt.savefig('horizontally_coarse_grained_cloud_cover.pdf')
maps = psy.plot.mapplot(os.path.join(path, file_cg), dims = {'name': 'cl', 'height': 40}, cticksize=34,
projection='robin', cmap='Blues_r')
# plt.savefig('horizontally_coarse_grained_cloud_cover_untitled.pdf')
```
For the vertical coarse-graining:
```
# Some arbitrary horizontal field
rand_field = np.random.randint(20480)
# rand_field = 15252 # To reproduce the profile from the paper
rand_field
## Load original data
# Load zg profile
DS = xr.open_dataset('/pf/b/b309170/my_work/QUBICC/grids/qubicc_l191_zg_ml_0015_R02B04_G.nc')
da = DS.zg.values
zg_hr = da[:, rand_field]
zg_hr = zg_hr[-91:] # Need the 91 earth-bound layers
# Load clc profile
DS = xr.open_dataset('/pf/b/b309170/my_work/QUBICC/data_hor_interp/hc2_02_p1m_cl_ml_20041105T150000Z.nc')
da = DS.cl.values
print(da.shape)
cl_hr = da[0, :, rand_field]
## Load vertically coarse-grained data
# Load clc profile
DS = xr.open_dataset('/pf/b/b309170/my_work/QUBICC/data_var_vertinterp/cl/int_var_hc2_02_p1m_cl_ml_20041105T150000Z_R02B04.nc')
da = DS.cl.values
not_nan = ~np.isnan(da[0,:,rand_field])
cl_lr = da[0, not_nan, rand_field]
# Load zg profile
DS = xr.open_dataset('/pf/b/b309170/my_work/QUBICC/data_var_vertinterp/zg/zg_icon-a_capped.nc')
da = DS.zg.values
zg_lr = da[not_nan, rand_field]
# Increase the general font size
size_plot_elements = 16
matplotlib.rcParams['legend.fontsize'] = size_plot_elements
matplotlib.rcParams['axes.labelsize'] = size_plot_elements # For an axes xlabel and ylabel
matplotlib.rcParams['xtick.labelsize'] = size_plot_elements
matplotlib.rcParams['ytick.labelsize'] = size_plot_elements
fig = plt.figure(figsize=(2,4))
# # Units in kilometers
# zg_hr = zg_hr/1000
# zg_lr = zg_lr/1000
# ax = fig.add_subplot(211, title='High-res vertical cloud cover profile', ylim=(0, np.max(zg_lr)), xlim=(-0.05,1),
# xlabel='Cloud Cover Fraction', ylabel='Mean height of a vertical layer in km')
ax = fig.add_subplot(111, ylim=(0, np.max(zg_lr)), xlim=(-0.05,1), ylabel='z [km]', xticks=[0,0.5,1])
ax.plot(cl_hr, zg_hr)
ax.plot(cl_hr, zg_hr, 'b.')
plt.savefig('vertical_coarse-graining_qubicc_example_v2_1.pdf', bbox_inches='tight')
fig = plt.figure(figsize=(2,4))
# ax_2 = fig.add_subplot(212, title='Low-res vertical cloud cover profile', ylim=(0, np.max(zg_lr)), xlim=(-0.05,1),
# xlabel='Cloud Cover Fraction', ylabel='Mean height of a vertical layer in km')
ax_2 = fig.add_subplot(111, ylim=(0, np.max(zg_lr)), xlim=(-0.05,1),
xlabel='Cloud Fraction', ylabel='z [km]', xticks=[0,0.5,1])
ax_2.plot(cl_lr, zg_lr)
ax_2.plot(cl_lr, zg_lr, 'b.')
plt.savefig('vertical_coarse-graining_qubicc_example_v2_2.pdf', bbox_inches='tight')
fig = plt.figure(figsize=(10,7))
ax = fig.add_subplot(121, title='High-res vertical cloud cover profile')
ax.plot(cl_hr, zg_hr)
ax.plot(cl_hr, zg_hr, 'b.')
ax_2 = fig.add_subplot(122, title='Low-res vertical cloud cover profile')
ax_2.plot(cl_lr, zg_lr)
ax_2.plot(cl_lr, zg_lr, 'b.')
```
# HuberRegressor with PolynomialFeatures
This code template performs regression analysis using a simple Huber Regressor together with the PolynomialFeatures feature-transformation technique in a pipeline.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import HuberRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features required for model training.
```
#x_values
features= []
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selection
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and the target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data Preprocessing
Since most machine learning models in the scikit-learn library cannot handle string categories or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill in null values, if any exist, and convert string-category columns in the dataset by one-hot encoding them.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Feature Transformation
Generate polynomial and interaction features.
Generate a new feature matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree.
[More on PolynomialFeatures module and parameters](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html)
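To make the transformation concrete, here is a small standalone sketch (not part of the template) showing what `PolynomialFeatures` produces for a single two-column row with the default degree of 2; the array values are just toy numbers.
```
# Toy illustration of PolynomialFeatures: [a, b] -> [1, a, b, a^2, a*b, b^2]
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

demo = np.array([[2, 3]])
poly = PolynomialFeatures(degree=2)
print(poly.fit_transform(demo))       # [[1. 2. 3. 4. 6. 9.]]
print(poly.get_feature_names_out())   # ['1' 'x0' 'x1' 'x0^2' 'x0 x1' 'x1^2'] (scikit-learn >= 1.0)
```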
### Model
Linear regression model that is robust to outliers.
The Huber Regressor optimizes the squared loss for the samples where |(y - X'w) / sigma| < epsilon and the absolute loss for the samples where |(y - X'w) / sigma| > epsilon, where w and sigma are parameters to be optimized. The parameter sigma makes sure that if y is scaled up or down by a certain factor, one does not need to rescale epsilon to achieve the same robustness. Note that this does not take into account the fact that the different features of X may be of different scales.
This makes sure that the loss function is not heavily influenced by the outliers while not completely ignoring their effect.
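As a rough standalone sketch (not the actual HuberRegressor implementation, which also estimates sigma and applies regularization), the snippet below shows the piecewise loss shape described above; the helper name is ours, and 1.35 is HuberRegressor's default epsilon.
```
# Sketch of the Huber-style loss: quadratic for small residuals, linear beyond epsilon
import numpy as np

def huber_loss(residual, epsilon=1.35):
    r = np.abs(residual)
    return np.where(r <= epsilon,
                    0.5 * r ** 2,                      # squared loss near zero
                    epsilon * r - 0.5 * epsilon ** 2)  # linear loss for outliers

print(huber_loss(np.array([0.5, 1.0, 5.0, 50.0])))
```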
```
model= make_pipeline(PolynomialFeatures(),HuberRegressor())
model.fit(x_train,y_train)
```
#### Model Accuracy
We will use the trained model to make predictions on the test set. Then we use the predicted values to measure the accuracy of our model.
> **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the coefficient of determination, i.e., the proportion of the variance in the target that is explained by the model.
> **mae**: The **mean absolute error** function calculates the total error as the average absolute distance between the real data and the predicted data.
> **mse**: The **mean squared error** function squares the errors, penalizing the model more heavily for large errors.
```
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Prediction Plot
We plot the first 20 actual target values from the test set (y_test) against the record number on the x-axis.
We then overlay the model's predictions for the same test records so the two curves can be compared.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Nikhil Shrotri , Github: [Profile](https://github.com/nikhilshrotri)
# Installation
:label:`chap_installation`
In order to get you up and running for hands-on learning experience,
we need to set you up with an environment for running Python,
Jupyter notebooks, the relevant libraries,
and the code needed to run the book itself.
## Installing Miniconda
The simplest way to get going will be to install
[Miniconda](https://conda.io/en/latest/miniconda.html). The Python 3.x version
is required. You can skip the following steps if conda has already been installed.
Download the corresponding Miniconda sh file from the website
and then execute the installation from the command line
using `sh <FILENAME> -b`. For macOS users:
```bash
# The file name is subject to changes
sh Miniconda3-latest-MacOSX-x86_64.sh -b
```
For Linux users:
```bash
# The file name is subject to changes
sh Miniconda3-latest-Linux-x86_64.sh -b
```
Next, initialize the shell so we can run `conda` directly.
```bash
~/miniconda3/bin/conda init
```
Now close and re-open your current shell. You should be able to create a new
environment as follows:
```bash
conda create --name d2l python=3.8 -y
```
## Downloading the D2L Notebooks
Next, we need to download the code of this book. You can click the "All
Notebooks" tab on the top of any HTML page to download and unzip the code.
Alternatively, if you have `unzip` (otherwise run `sudo apt install unzip`) available:
```bash
mkdir d2l-en && cd d2l-en
curl https://d2l.ai/d2l-en.zip -o d2l-en.zip
unzip d2l-en.zip && rm d2l-en.zip
```
Now we will want to activate the `d2l` environment.
```bash
conda activate d2l
```
## Installing the Framework and the `d2l` Package
Before installing the deep learning framework, please first check
whether or not you have proper GPUs on your machine
(the GPUs that power the display on a standard laptop
do not count for our purposes).
If you are installing on a GPU server,
proceed to :ref:`subsec_gpu` for instructions
to install a GPU-supported version.
Otherwise, you can install the CPU version as follows.
That will be more than enough horsepower to get you
through the first few chapters but you will want
to access GPUs before running larger models.
```bash
pip install torch torchvision -f https://download.pytorch.org/whl/torch_stable.html
```
We also install the `d2l` package that encapsulates frequently used
functions and classes in this book.
```bash
# -U: Upgrade all packages to the newest available version
pip install -U d2l
```
Once they are installed, we now open the Jupyter notebook by running:
```bash
jupyter notebook
```
At this point, you can open http://localhost:8888 (it usually opens automatically) in your Web browser. Then we can run the code for each section of the book.
Please always execute `conda activate d2l` to activate the runtime environment
before running the code of the book or updating the deep learning framework or the `d2l` package.
To exit the environment, run `conda deactivate`.
## GPU Support
:label:`subsec_gpu`
By default, the deep learning framework is installed with GPU support.
If your computer has NVIDIA GPUs and has installed [CUDA](https://developer.nvidia.com/cuda-downloads),
then you are all set.
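As an optional sanity check (not part of the book's instructions), you can confirm from Python that PyTorch sees the GPU:

```python
# Quick GPU visibility check for the PyTorch installation
import torch

print(torch.cuda.is_available())   # True if a CUDA-capable GPU and matching drivers are found
print(torch.cuda.device_count())   # number of GPUs visible to PyTorch
```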
## Exercises
1. Download the code for the book and install the runtime environment.
[Discussions](https://discuss.d2l.ai/t/24)
# Dimensionality reduction using Keras Auto Encoder
* Prepare Data
* Design Auto Encoder
* Train Auto Encoder
* Use Encoder level from Auto Encoder
* Use Encoder to obtain reduced dimensionality data for train and test sets
```
import os
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from numpy.random import seed
from sklearn.preprocessing import minmax_scale
from sklearn.model_selection import train_test_split
from keras.layers import Input, Dense
from keras.models import Model
import xgboost
import numpy as np
import pandas as pd
import seaborn as sns
from math import sqrt
import matplotlib.pyplot as plt
from sklearn import preprocessing
import matplotlib.pyplot as plote
from sklearn import metrics
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
import seaborn as sns
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
print(os.listdir("../input"))
```
## Read train and test data
```
train = pd.read_csv('../input/train.csv')
test = pd.read_csv('../input/test.csv')
train2 = train.copy()
train3 = train.copy()
```
## Dropping Target and IDs from train and test
```
#target = train['target']
#train_id = train['ID']
#test_id = test['ID']
#train.drop(['target'], axis=1, inplace=True)
#train.drop(['ID'], axis=1, inplace=True)
#test.drop(['ID'], axis=1, inplace=True)
print('Train data shape', train.shape)
print('Test data shape', test.shape)
```
### Scaling Train and Test data for Neural Net
```
#train_scaled = minmax_scale(train, axis = 0)
#test_scaled = min3max_scale(test, axis = 0)
scale_list = train3.columns[1:]
sc = train3[scale_list]
scaler = StandardScaler()
sc = scaler.fit_transform(sc)
train3[scale_list] = sc
train3[scale_list].head()
```
## Design Auto Encoder
An Auto Encoder is a type of artificial neural network used to learn efficient representations of the data in an unsupervised manner. An Auto Encoder ideally consists of an encoder and a decoder.
The neural network is designed to compress the data through the encoder. The decoder then tries to reconstruct the data back to the original dimension.
To achieve this, the neural net is trained using the training data as both the input features and the target.
```
# Training a Typical Neural Net
model.fit(X_train, y_train)
# Training a Auto Encoder
model.fit(X_train, X_train)
```
These are typically used for dimensionality-reduction use cases where there are a large number of features.
```
# define the number of features
ncol = train3.drop(['target', 'ID'], axis=1).shape[1]
ncol
```
### Split train data into train and validation in an 80:20 ratio
```
#X_train, X_test, Y_train, Y_test = train_test_split(train_scaled, target, train_size = 0.9, random_state = seed(2017))
X3 = train3.drop(['target','ID'], axis=1)
Y3 = train3['target']
X_train, X_test, y_train, y_test = train_test_split(X3, Y3, test_size=0.2)
### Define the encoder dimension
encoding_dim = 200
input_dim = Input(shape = (ncol, ))
# Encoder Layers
encoded1 = Dense(3000, activation = 'relu')(input_dim)
encoded2 = Dense(2750, activation = 'relu')(encoded1)
encoded3 = Dense(2500, activation = 'relu')(encoded2)
encoded4 = Dense(2250, activation = 'relu')(encoded3)
encoded5 = Dense(2000, activation = 'relu')(encoded4)
encoded6 = Dense(1750, activation = 'relu')(encoded5)
encoded7 = Dense(1500, activation = 'relu')(encoded6)
encoded8 = Dense(1250, activation = 'relu')(encoded7)
encoded9 = Dense(1000, activation = 'relu')(encoded8)
encoded10 = Dense(750, activation = 'relu')(encoded9)
encoded11 = Dense(500, activation = 'relu')(encoded10)
encoded12 = Dense(250, activation = 'relu')(encoded11)
encoded13 = Dense(encoding_dim, activation = 'relu')(encoded12)
# Decoder Layers
decoded1 = Dense(250, activation = 'relu')(encoded13)
decoded2 = Dense(500, activation = 'relu')(decoded1)
decoded3 = Dense(750, activation = 'relu')(decoded2)
decoded4 = Dense(1000, activation = 'relu')(decoded3)
decoded5 = Dense(1250, activation = 'relu')(decoded4)
decoded6 = Dense(1500, activation = 'relu')(decoded5)
decoded7 = Dense(1750, activation = 'relu')(decoded6)
decoded8 = Dense(2000, activation = 'relu')(decoded7)
decoded9 = Dense(2250, activation = 'relu')(decoded8)
decoded10 = Dense(2500, activation = 'relu')(decoded9)
decoded11 = Dense(2750, activation = 'relu')(decoded10)
decoded12 = Dense(3000, activation = 'relu')(decoded11)
decoded13 = Dense(ncol, activation = 'sigmoid')(decoded12)
# Combine Encoder and Decoder layers
autoencoder = Model(inputs = input_dim, outputs = decoded13)
# Compile the Model
autoencoder.compile(optimizer = 'adadelta', loss = 'binary_crossentropy')
autoencoder.summary()
```
### Train Auto Encoder
```
autoencoder.fit(X_train, X_train, epochs = 10, batch_size = 32, shuffle = False, validation_data = (X_test, X_test))
```
## Use Encoder level to reduce dimension of train and test data
```
encoder = Model(inputs = input_dim, outputs = encoded13)
encoded_input = Input(shape = (encoding_dim, ))
```
### Predict the new train and test data using Encoder
```
encoded_train = pd.DataFrame(encoder.predict(X_train))
encoded_train = encoded_train.add_prefix('feature_')
encoded_test = pd.DataFrame(encoder.predict(X_test))
encoded_test = encoded_test.add_prefix('feature_')
print(encoded_train.shape)
```
### Add target to train
```
print(encoded_train.shape)
encoded_train.head(5)
print(encoded_test.shape)
encoded_test.head()
encoded_train.to_csv('train_encoded.csv', index=False)
encoded_test.to_csv('test_encoded.csv', index=False)
encoded_test = encoded_test.fillna(0)
sns.heatmap(encoded_train.isnull(),yticklabels = False,cbar = False,cmap = 'viridis')
missing_val_count_by_column = (encoded_test.isnull().sum())
print(missing_val_count_by_column[missing_val_count_by_column > 0])
#encoder + PCA
from sklearn.decomposition import PCA
pca = PCA(n_components=None)
x_train = pca.fit_transform(encoded_train)
x_test = pca.transform(encoded_test)
explained_variance = pca.explained_variance_ratio_
explained_variance
xgb = xgboost.XGBRegressor(n_estimators=35, learning_rate=0.06, gamma=0, subsample=0.6,
colsample_bytree=0.7, min_child_weight=4, max_depth=3)
xgb.fit(x_train,y_train)
predictions = xgb.predict(x_test)
print(metrics.mean_squared_error(y_test, predictions))
rand = RandomForestRegressor(n_estimators = 10,random_state = 0)
rand.fit(x_train,y_train)
y_pred2 = rand.predict(x_test)
print(metrics.mean_squared_error(y_test,y_pred2 ))
logreg=LinearRegression()
logreg.fit(x_train,y_train)
y_pred=logreg.predict(x_test)
y_pred
print(metrics.mean_squared_error(y_test, y_pred))
regressor = DecisionTreeRegressor( random_state = 0)
regressor.fit(x_train,y_train)
y_pred1 = regressor.predict(x_test)
print(metrics.mean_squared_error(y_test,y_pred1 ))
scale_list = train2.columns[1:]
sc = train2[scale_list]
scaler = StandardScaler()
sc = scaler.fit_transform(sc)
train2[scale_list] = sc
train2[scale_list].head()
X = train2.drop(['target','ID'], axis=1)
Y = train2['target']
X_train, X_test, y_train, y_test = train_test_split(X, Y ,test_size=0.2)
#PCA ONLY
from sklearn.decomposition import PCA
pca = PCA(n_components=None)
x2_train = pca.fit_transform(X_train)
x2_test = pca.transform(X_test)
explained_variance2 = pca.explained_variance_ratio_
regressor = DecisionTreeRegressor( random_state = 0)
regressor.fit(x2_train,y_train)
y_pred1 = regressor.predict(x2_test)
print(metrics.mean_squared_error(y_test,y_pred1 ))
xgb = xgboost.XGBRegressor(n_estimators=35, learning_rate=0.06, gamma=0, subsample=0.6,
colsample_bytree=0.7, min_child_weight=4, max_depth=3)
xgb.fit(x2_train,y_train)
predictions = xgb.predict(x2_test)
print(metrics.mean_squared_error(y_test, predictions))
rand = RandomForestRegressor(n_estimators = 10,random_state = 0)
rand.fit(x2_train,y_train)
y_pred2 = rand.predict(x2_test)
print(metrics.mean_squared_error(y_test,y_pred2 ))
logreg=LinearRegression()
logreg.fit(x2_train,y_train)
y_pred=logreg.predict(x2_test)
y_pred
print(metrics.mean_squared_error(y_test, y_pred))
#KERNEL PCA + ENCODER
from sklearn.decomposition import KernelPCA
kpca = KernelPCA(n_components = 2, kernel = 'rbf')
x4_train = kpca.fit_transform(encoded_train)
x4_test = kpca.transform(encoded_test)
regressor = DecisionTreeRegressor( random_state = 0)
regressor.fit(x4_train,y_train)
y_pred1 = regressor.predict(x4_test)
print(metrics.mean_squared_error(y_test,y_pred1 ))
xgb = xgboost.XGBRegressor(n_estimators=35, learning_rate=0.06, gamma=0, subsample=0.6,
colsample_bytree=0.7, min_child_weight=4, max_depth=3)
xgb.fit(x4_train,y_train)
predictions = xgb.predict(x4_test)
print(metrics.mean_squared_error(y_test, predictions))
rand = RandomForestRegressor(n_estimators = 10,random_state = 0)
rand.fit(x4_train,y_train)
y_pred2 = rand.predict(x4_test)
print(metrics.mean_squared_error(y_test,y_pred2 ))
logreg=LinearRegression()
logreg.fit(x4_train,y_train)
y_pred=logreg.predict(x4_test)
y_pred
print(metrics.mean_squared_error(y_test, y_pred))
```
# Parameter Management

Once we have chosen an architecture and set our hyperparameters, we proceed to the training loop,
where our goal is to find the parameter values that minimize the loss function.
After training, we will need these parameters in order to make future predictions.
Additionally, we sometimes wish to extract the parameters
so that we can reuse them in some other context,
save the model to disk so that it can be executed in other software,
or examine them in the hope of gaining scientific understanding.

In the previous sections, we relied entirely on the deep learning framework to carry out training,
ignoring the specific details of how parameters are manipulated.
In this section, we cover the following:

* Accessing parameters for debugging, diagnostics, and visualization.
* Parameter initialization.
* Sharing parameters across different model components.

(**We start by focusing on a multilayer perceptron with one hidden layer.**)
```
import tensorflow as tf
net = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(4, activation=tf.nn.relu),
tf.keras.layers.Dense(1),
])
X = tf.random.uniform((2, 4))
net(X)
```
## [**Parameter Access**]

We start by showing how to access parameters from an existing model.
When a model is defined via the `Sequential` class,
we can access any layer by indexing into the model as though it were a list.
Each layer's parameters are conveniently located in its attributes.
As shown below, we can inspect the parameters of the second fully-connected layer.
```
print(net.layers[2].weights)
```
The output tells us a few important things.
First, this fully-connected layer contains two parameters,
corresponding to the layer's weights and biases, respectively.
Both are stored as single-precision floats (float32).
Note that the parameter names allow us to uniquely identify each parameter,
even in a network containing hundreds of layers.

### [**Targeted Parameters**]

Note that each parameter is represented as an instance of the parameter class.
To do anything useful with the parameters, we first need to access the underlying numerical values.
There are several ways to do this; some are simpler, while others are more general.
The following code extracts the bias from the second fully-connected layer
(i.e., the third neural network layer); it returns a parameter class instance,
and we then further access that parameter's value.
```
print(type(net.layers[2].weights[1]))
print(net.layers[2].weights[1])
print(tf.convert_to_tensor(net.layers[2].weights[1]))
```
### [**All Parameters at Once**]

When we need to perform operations on all parameters, accessing them one by one can grow tedious.
The situation can become especially unwieldy when we work with more complex blocks (e.g., nested blocks),
since we would need to recurse through the entire tree to extract each sub-block's parameters.
Below, we demonstrate accessing the parameters of the first fully-connected layer versus accessing all layers.
```
print(net.layers[1].weights)
print(net.get_weights())
```
This provides us with another way of accessing the parameters of the network, as shown below.
```
net.get_weights()[1]
```
### [**Collecting Parameters from Nested Blocks**]

Let us see how the parameter naming conventions work when we nest multiple blocks inside each other.
We first define a function that produces blocks (a block factory, so to speak)
and then combine these blocks inside yet larger blocks.
```
def block1(name):
return tf.keras.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(4, activation=tf.nn.relu)],
name=name)
def block2():
net = tf.keras.Sequential()
for i in range(4):
        # Nest blocks here
net.add(block1(name=f'block-{i}'))
return net
rgnet = tf.keras.Sequential()
rgnet.add(block2())
rgnet.add(tf.keras.layers.Dense(1))
rgnet(X)
```
[**Now that we have designed the network, let us see how it is organized.**]
```
print(rgnet.summary())
```
Since the layers are hierarchically nested, we can also access them as though indexing through nested lists.
Below, we access the bias of the dense layer in the second sub-block of the first major block.
```
rgnet.layers[0].layers[1].layers[1].weights[1]
```
## Parameter Initialization
Now that we know how to access the parameters, let us look at how to initialize them properly.
We discussed the need for proper initialization in :numref:`sec_numerical_stability`.
The deep learning framework provides default random initializations,
but it also allows us to create custom initializers
so that weights can be initialized according to other rules of our choosing.
By default, Keras initializes weight matrices uniformly by drawing from a range
that is computed according to the input and output dimensions,
and the bias parameters are set to zero.
TensorFlow provides a variety of initialization methods both in the root module and in the `keras.initializers` module.
### [**Built-in Initialization**]
Let us begin by calling on built-in initializers.
The code below initializes all weight parameters as Gaussian random variables with a standard deviation of 0.01,
while bias parameters are set to zero.
```
net = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(
4, activation=tf.nn.relu,
kernel_initializer=tf.random_normal_initializer(mean=0, stddev=0.01),
bias_initializer=tf.zeros_initializer()),
tf.keras.layers.Dense(1)])
net(X)
net.weights[0], net.weights[1]
```
We can also initialize all the parameters to a given constant value, such as 1.
```
net = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(
4, activation=tf.nn.relu,
kernel_initializer=tf.keras.initializers.Constant(1),
bias_initializer=tf.zeros_initializer()),
tf.keras.layers.Dense(1),
])
net(X)
net.weights[0], net.weights[1]
```
We can also [**apply different initializers to certain blocks**].
For example, below we initialize the first neural network layer with the Xavier initializer
and initialize the third neural network layer to a constant value of 1.
```
net = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(
4,
activation=tf.nn.relu,
kernel_initializer=tf.keras.initializers.GlorotUniform()),
tf.keras.layers.Dense(
1, kernel_initializer=tf.keras.initializers.Constant(1)),
])
net(X)
print(net.layers[1].weights[0])
print(net.layers[2].weights[0])
```
### [**Custom Initialization**]
Sometimes, the initialization methods we need are not provided by the deep learning framework.
In the example below, we define an initializer for any weight parameter $w$ using the following distribution:
$$
\begin{aligned}
    w \sim \begin{cases}
        U(5, 10) & \text{ with probability } \frac{1}{4} \\
        0 & \text{ with probability } \frac{1}{2} \\
        U(-10, -5) & \text{ with probability } \frac{1}{4}
    \end{cases}
\end{aligned}
$$
Here, we define a subclass of `Initializer`
and implement the `__call__` function,
which returns a tensor of the desired shape and data type.
```
class MyInit(tf.keras.initializers.Initializer):
def __call__(self, shape, dtype=None):
data=tf.random.uniform(shape, -10, 10, dtype=dtype)
factor=(tf.abs(data) >= 5)
factor=tf.cast(factor, tf.float32)
return data * factor
net = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(
4,
activation=tf.nn.relu,
kernel_initializer=MyInit()),
tf.keras.layers.Dense(1),
])
net(X)
print(net.layers[1].weights[0])
```
Note that we always have the option of setting parameters directly.
```
net.layers[1].weights[0][:].assign(net.layers[1].weights[0] + 1)
net.layers[1].weights[0][0, 0].assign(42)
net.layers[1].weights[0]
```
## [**Tied Parameters**]
Often, we want to share parameters across multiple layers:
we can define a dense layer and then use its parameters to set those of another layer.
```
# tf.keras behaves a bit differently here: it automatically removes the duplicate layer
shared = tf.keras.layers.Dense(4, activation=tf.nn.relu)
net = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
shared,
shared,
tf.keras.layers.Dense(1),
])
net(X)
# Check whether the parameters are different
print(len(net.layers) == 3)
```
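As an additional check (not part of the original text), we can confirm that the remaining middle layer really is the `shared` object itself, so the weights are tied rather than copied.
```
# Additional check (not in the original text): because the duplicate entry was removed,
# the middle layer of the model is the very same object as `shared`, and therefore
# its kernel is one and the same tf.Variable -- a single set of tied weights.
print(net.layers[1] is shared)                 # True: identical layer object
print(net.layers[1].kernel is shared.kernel)   # True: identical underlying variable
```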
## Summary
* We have several ways of accessing, initializing, and tying model parameters.
* We can use custom initialization.
## Exercises
1. Use the `FancyMLP` model defined in :numref:`sec_model_construction` and access the parameters of the various layers.
1. Look at the initialization module documentation to explore the different initializers.
1. Construct an MLP containing a shared-parameter layer and train it. During the training process, observe the model parameters and gradients of each layer.
1. Why is sharing parameters a good idea?
[Discussions](https://discuss.d2l.ai/t/1830)
<center>
<img src="https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# **SpaceX Falcon 9 First Stage Landing Prediction**
## Assignment: Exploring and Preparing Data
Estimated time needed: **70** minutes
In this assignment, we will predict whether the Falcon 9 first stage will land successfully. SpaceX advertises Falcon 9 rocket launches on its website at a cost of 62 million dollars, while other providers cost upwards of 165 million dollars per launch; much of the saving is due to the fact that SpaceX can reuse the first stage.
In this lab, you will perform Exploratory Data Analysis and Feature Engineering.
The Falcon 9 first stage landing successfully:

Several examples of an unsuccessful landing are shown here:

Most unsuccessful landings are planned: SpaceX performs a controlled landing in the ocean.
## Objectives
Perform Exploratory Data Analysis and Feature Engineering using `Pandas` and `Matplotlib`:
* Exploratory Data Analysis
* Preparing the data for Feature Engineering
***
### Import Libraries and Define Auxiliary Functions
We will import the following libraries for the lab.
```
# Pandas is a software library written for the Python programming language for data manipulation and analysis.
import pandas as pd
#NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays
import numpy as np
# Matplotlib is a plotting library for python and pyplot gives us a MatLab like plotting framework. We will use this in our plotter function to plot data.
import matplotlib.pyplot as plt
#Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics
import seaborn as sns
```
## Exploratory Data Analysis
First, let's read the SpaceX dataset into a Pandas dataframe and print its summary
```
df=pd.read_csv("https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/datasets/dataset_part_2.csv")
# If you were unable to complete the previous lab correctly you can uncomment and load this csv
#df = pd.read_csv('https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DS0701EN-SkillsNetwork/api/dataset_part_2.csv')
df.head(5)
```
First, let's try to see how the `FlightNumber` (indicating continuous launch attempts) and `PayloadMass` variables would affect the launch outcome.
We can plot <code>FlightNumber</code> vs. <code>PayloadMass</code> and overlay the outcome of the launch. We see that as the flight number increases, the first stage is more likely to land successfully. The payload mass is also important; it seems that the more massive the payload, the less likely the first stage will return.
```
sns.catplot(y="PayloadMass", x="FlightNumber", hue="Class", data=df, aspect = 5)
plt.xlabel("Flight Number",fontsize=20)
plt.ylabel("Pay load Mass (kg)",fontsize=20)
plt.show()
```
We see that different launch sites have different success rates. <code>CCAFS LC-40</code> has a success rate of 60%, while <code>KSC LC-39A</code> and <code>VAFB SLC 4E</code> have a success rate of 77%.
Next, let's drill down to each site and visualize its detailed launch records.
### TASK 1: Visualize the relationship between Flight Number and Launch Site
Use the function <code>catplot</code> to plot <code>FlightNumber</code> vs <code>LaunchSite</code>: set the <code>x</code> parameter to <code>FlightNumber</code>, set <code>y</code> to <code>LaunchSite</code> and set the <code>hue</code> parameter to <code>Class</code>.
```
# Plot a scatter point chart with x axis to be Flight Number and y axis to be the launch site, and hue to be the class value
sns.catplot(y="LaunchSite", x="FlightNumber", hue="Class", data=df, aspect = 5)
plt.xlabel("Flight Number",fontsize=20)
plt.ylabel("Launch Site",fontsize=20)
plt.show()
```
Now try to explain the patterns you found in the Flight Number vs. Launch Site scatter point plots.
### TASK 2: Visualize the relationship between Payload and Launch Site
We also want to observe if there is any relationship between launch sites and their payload mass.
```
# Plot a scatter point chart with x axis to be Pay Load Mass (kg) and y axis to be the launch site, and hue to be the class value
sns.catplot(y="LaunchSite", x="PayloadMass", hue="Class", data=df, aspect = 5)
plt.ylabel("Launch Site",fontsize=20)
plt.xlabel("Pay load Mass (kg)",fontsize=20)
plt.show()
```
Now if you observe the Payload vs. Launch Site scatter point chart, you will find that for the VAFB-SLC launch site there are no rockets launched for heavy payload mass (greater than 10000 kg).
### TASK 3: Visualize the relationship between success rate and orbit type
Next, we want to visually check if there is any relationship between success rate and orbit type.
Let's create a `bar chart` for the success rate of each orbit.
```
# HINT use groupby method on Orbit column and get the mean of Class column
sns.barplot(x = "Orbit", y="Class", data = df)
plt.ylabel("Success Rate",fontsize=20)
plt.xlabel("Orbit",fontsize=20)
plt.show()
```
Analyze the plotted bar chart and try to find which orbits have a high success rate.
### TASK 4: Visualize the relationship between FlightNumber and Orbit type
For each orbit, we want to see if there is any relationship between FlightNumber and Orbit type.
```
# Plot a scatter point chart with x axis to be FlightNumber and y axis to be the Orbit, and hue to be the class value
sns.relplot(y="Orbit", x="FlightNumber", hue="Class", data=df)
plt.ylabel("Flight Number",fontsize=20)
plt.xlabel("Orbit",fontsize=20)
plt.show()
```
You should see that in the LEO orbit, success appears to be related to the number of flights; on the other hand, in the GTO orbit there seems to be no relationship between flight number and success.
### TASK 5: Visualize the relationship between Payload and Orbit type
Similarly, we can plot the Payload vs. Orbit scatter point charts to reveal the relationship between Payload and Orbit type
```
# Plot a scatter point chart with x axis to be Payload and y axis to be the Orbit, and hue to be the class value
sns.relplot(y="Orbit", x="PayloadMass", hue="Class", data=df)
plt.ylabel("Orbit",fontsize=20)
plt.xlabel("Pay load Mass (kg)",fontsize=20)
plt.show()
```
With heavy payloads, the successful (positive) landing rate is higher for the Polar, LEO and ISS orbits.
However, for GTO we cannot distinguish this well, as both positive and negative (unsuccessful mission) landing outcomes are present.
### TASK 6: Visualize the launch success yearly trend
You can plot a line chart with the x axis being <code>Year</code> and the y axis being the average success rate, to get the average launch success trend.
The following function will help you extract the year from the date:
```
# A function to extract the year from each date
def Extract_year(df):
    year = []
    for i in df["Date"]:
        year.append(i.split("-")[0])
    return year
df['year'] = Extract_year(df)
df
# Plot a line chart with x axis to be the extracted year and y axis to be the success rate
sns.lineplot(data=df, x="year", y="Class")
plt.ylabel("Year",fontsize=20)
plt.xlabel("Success Rate",fontsize=20)
plt.show()
```
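As an optional cross-check (this step is not part of the lab instructions), the yearly average success rate can also be computed explicitly with `groupby` before plotting, which makes the aggregation visible:
```
# Optional cross-check (not part of the lab instructions): aggregate the success
# rate per year with groupby, then plot the yearly averages directly.
yearly_success = df.groupby('year')['Class'].mean().reset_index()
sns.lineplot(data=yearly_success, x="year", y="Class")
plt.xlabel("Year", fontsize=20)
plt.ylabel("Average Success Rate", fontsize=20)
plt.show()
```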
You can observe that the success rate kept increasing from 2013 until 2020.
## Feature Engineering
By now, you should have obtained some preliminary insights about how each important variable affects the success rate. We will now select the features that will be used for success prediction in a future module.
```
features = df[['FlightNumber', 'PayloadMass', 'Orbit', 'LaunchSite', 'Flights', 'GridFins', 'Reused', 'Legs', 'LandingPad', 'Block', 'ReusedCount', 'Serial']]
features.head()
```
### TASK 7: Create dummy variables to categorical columns
Use the function <code>get_dummies</code> and the <code>features</code> dataframe to apply one-hot encoding to the columns <code>Orbit</code>, <code>LaunchSite</code>, <code>LandingPad</code>, and <code>Serial</code>. Assign the result to the variable <code>features_one_hot</code> and display the results using the method <code>head</code>. Your resulting dataframe must include all features, including the encoded ones.
```
# HINT: Use get_dummies() function on the categorical columns
features_one_hot = pd.get_dummies(features, prefix=None, columns=['Orbit', 'LaunchSite', 'LandingPad', 'Serial'])
features_one_hot.head()
```
### TASK 8: Cast all numeric columns to `float64`
Now that our <code>features_one_hot</code> dataframe only contains numbers, cast the entire dataframe to the variable type <code>float64</code>.
```
# HINT: use astype function
features_one_hot = features_one_hot.astype('float64')
```
We can now export it to a <b>CSV</b> for the next section, but to make the answers consistent, in the next lab we will provide data in a pre-selected date range.
<code>features_one_hot.to_csv('dataset_part_3.csv', index=False)</code>
## Authors
<a href="https://www.linkedin.com/in/joseph-s-50398b136/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01">Joseph Santarcangelo</a> has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.
<a href="https://www.linkedin.com/in/nayefaboutayoun/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01">Nayef Abou Tayoun</a> is a Data Scientist at IBM and pursuing a Master of Management in Artificial intelligence degree at Queen's University.
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ------------- | ----------------------- |
| 2021-10-12 | 1.1 | Lakshmi Holla | Modified markdown |
| 2020-09-20 | 1.0 | Joseph | Modified Multiple Areas |
| 2020-11-10 | 1.1 | Nayef | updating the input data |
Copyright © 2020 IBM Corporation. All rights reserved.
## MIPS
Non-interacting particles whose motility is restricted by the local density.
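As a minimal sketch of that idea (the functional form and the constants `v0` and `lam` below are illustrative assumptions, mirroring the commented-out speed rule inside `MIPS.update` further down), the particle speed can be made to decay with the number of neighbours inside the interaction radius:
```
# Minimal sketch only (v0, lam and the 1/(1 + lam*n) form are illustrative assumptions,
# echoing the commented-out speed rule inside MIPS.update below): in a motility-induced
# phase separation picture, a particle slows down as its local neighbourhood gets denser.
import numpy as np
import matplotlib.pyplot as plt

v0, lam = 20.0, 0.5                    # free speed and crowding coefficient (assumed)
neighbours = np.arange(0, 31)          # local density measured as neighbour count
speed = v0 / (1.0 + lam * neighbours)  # speed suppressed in crowded regions

plt.plot(neighbours, speed)
plt.xlabel("Neighbours within interaction radius r")
plt.ylabel("Particle speed")
plt.show()
```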
```
# Importing packages
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets
from IPython.display import display
import os
import matplotlib.patches as patches
# Jupyter notebook magic for matplotlib
%matplotlib notebook
class MIPS:
def __init__(self, N, eta, r,L):
# Initialize simulation
self.L = L # length of the square 2D region to be simulated
self.halfL = self.L / 2 # half of length (used later for PBCs)
self.N = N # number of particles in the 2D region
self.rho = N/self.L**2 # density of particles in the simulation
self.eta = eta # noise in the system
self.r = r # interaction radius
self.rsq = self.r * self.r # square of interaction radius
self.dt = 0.1 # time step
self.vinit = 20
self.v = self.vinit*np.ones(self.N) # magnitude of velocity
self.pos = np.random.rand(self.N, 2) * self.L # random initial position in 2D region
self.direction = np.zeros(self.N) # 0 for moving toward goal, 1 for moving toward home
self.state = np.zeros(self.N) # 0 for walker, 1 for bridge
self.theta = (np.random.rand(self.N) * 2 - 1) * np.pi # random velocity angle [-pi pi]
self.vel = np.zeros((self.N, 2)) # initialize velocity array
self.vel[:, 0] = self.v * np.cos(self.theta) # velocity along x
self.vel[:, 1] = self.v * np.sin(self.theta) # velocity along y
self.tt = 5000 # total number of time steps
self.rparts = np.eye(N, dtype=bool) # matrix representing particles within distance r (np.bool was removed in recent NumPy)
self.home = (2.5,10)
self.goal = (17.5,10)
def main(self):
axrange = [-5, self.L+5, -5, self.L+5]
#Setup plot for updated positions
fig1 = plt.figure()
ax1 = fig1.add_subplot(111)
fig1.show()
fig1.tight_layout()
fig1.canvas.draw()
for nn in range(self.tt):
ax1.clear()
x = [7.5,12.5,11,9]
y = [0,0,20,20]
ax1.add_patch(patches.Polygon(xy=list(zip(x,y)),edgecolor='blue',facecolor='blue', fill=True,zorder=0, alpha=0.1)) # obstacle
ax1.add_patch(patches.Rectangle((self.home[0]-self.r/2,self.home[1]-self.r/2),self.r,self.r ,edgecolor='green',facecolor='green', fill=True,zorder=0, alpha=0.3)) # home
ax1.add_patch(patches.Rectangle((self.goal[0]-self.r/2,self.goal[1]-self.r/2),self.r,self.r ,edgecolor='red',facecolor='red', fill=True,zorder=0, alpha=0.3)) # object
ax1.quiver(self.pos[:, 0], self.pos[:, 1], self.vel[:, 0], self.vel[:, 1])
ax1.scatter(self.pos[self.state==0][:, 0], self.pos[self.state==0][:, 1],s=100,alpha=0.5,c='k') # walker
ax1.scatter(self.pos[self.state==1][:, 0], self.pos[self.state==1][:, 1],s=100,alpha=0.5,c='y') # bridge
ax1.axis(axrange)
ax1.set_aspect('equal', 'box')
fig1.canvas.draw()
fig1.savefig(str(os.getcwd())+'/fig3/'+str(nn)+'.png')
self.update()
def update(self):
# Generate the set of random movements dTheta from [-eta/2, eta/2]
noise = (np.random.rand(self.N) - 0.5) * self.eta
# Find particles within distance r
self.find_particles()
self.direction[self.rhome]=0
self.direction[self.rgoal]=1
orient = np.arctan2(self.goal[1]*(self.direction==0) +self.home[1]* (self.direction==1) -self.pos.T[1],self.goal[0]*(self.direction==0) +self.home[0]* (self.direction==1) -self.pos.T[0])
self.theta = orient+(1-0.1*self.dt)*(np.mod((self.theta-orient)+np.pi,2*np.pi)-np.pi) +noise*self.dt
self.v = self.vinit
# for i in range(self.N):
# neighbor = np.sum(self.rparts[i,:])
# #self.v[i] = self.vinit/(1+0.05*neighbor + 0.001*neighbor**2)
# self.v[i] = self.vinit/neighbor
# Updated velocities
self.vel[:, 0] = self.v * np.cos(self.theta)
self.vel[:, 1] = self.v * np.sin(self.theta)
# Updated positions
self.pos = self.pos + self.vel * self.dt
# # Applying periodic boundaries
# self.pos = np.mod(self.pos, self.L)
def find_particles(self): # updated using matrix operation
# Reset rparts matrix
self.rparts = np.eye(self.N, dtype=bool)
x = self.pos[:,0].reshape(1,-1)
y = self.pos[:,1].reshape(1,-1)
diffx = x-x.T
diffy = y-y.T
diffxn = -self.halfL + np.mod(diffx+self.halfL,self.L)
diffyn = -self.halfL + np.mod(diffy+self.halfL,self.L)
diff = diffxn**2+diffyn**2
#self.rparts = 1/(diff/self.r**2+1)
self.rparts = diff<self.rsq
self.rhome = (self.pos[:,0]-self.home[0]>-self.r/2)*(self.pos[:,0]-self.home[0]<self.r/2)*(self.pos[:,1]-self.home[1]>-self.r/2)*(self.pos[:,1]-self.home[1]<self.r/2)
self.rgoal = (self.pos[:,0]-self.goal[0]>-self.r/2)*(self.pos[:,0]-self.goal[0]<self.r/2)*(self.pos[:,1]-self.goal[1]>-self.r/2)*(self.pos[:,1]-self.goal[1]<self.r/2)
def start_AM_sim(num_particles=2, noise=0.5, v=1, r=2, L=20):
v2d = MIPS(num_particles, noise, r,L)
v2d.vinit = v
print("Box size =", v2d.L)
print("Particle density =", v2d.rho)
v2d.main()
# Interactive control for entering number of particles
style = {'description_width': 'initial'}
# num_particles = widgets.IntSlider(description='Number of particles', style=style,
# min=100, max=1100, step=200, value=2, continuous_update=False)
# # Interactive control for entering noise
# noise = widgets.FloatSlider(description='Noise', style=style,
# min=0.1, max=1, step=0.1, value=0, continuous_update=False)
num_particles = 100
noise = 2
v=0.3
#r=0.5
r=2
L=20
# Creating the interactive controls
# widget_ui = widgets.HBox([num_particles, noise])
# widget_out = widgets.interactive_output(start_AM_sim,
# {'num_particles': num_particles, 'noise': noise})
# # Display the controls and output
# # display(widget_ui, widget_out)
# display(widget_out)
start_AM_sim(num_particles,noise,v,r,L)
t=np.linspace(-10,10,100)
%matplotlib inline
# print(np.arctan2(np.sin(t),np.cos(t)))
plt.plot(t, np.arctan2(np.sin(t),np.cos(t)))
plt.show()
x = np.zeros((5,2))
(x==0)
import matplotlib.pyplot as plt
import numpy as np
import imageio
from PIL import Image
import os
import matplotlib.image as mpimg
path = [f"./fig1/{i}.png" for i in range(1000)]
paths = [ Image.open(i) for i in path]
imageio.mimsave('./test1.gif', paths, fps=300)
import matplotlib.pyplot as plt
import numpy as np
import imageio
from PIL import Image
import os
import matplotlib.image as mpimg
path = [f"./fig2/{i}.png" for i in range(1000)]
paths = [ Image.open(i) for i in path]
imageio.mimsave('./test2.gif', paths, fps=300)
import matplotlib.pyplot as plt
import numpy as np
import imageio
from PIL import Image
import os
import matplotlib.image as mpimg
path = [f"./fig3/{i}.png" for i in range(3000)]
paths = [ Image.open(i) for i in path]
imageio.mimsave('./test3.gif', paths, fps=300)
```
<img align="centre" width="750" height="750" img src="https://i0.wp.com/www.creatingentrepreneursinfood.eu/wp-content/uploads/2017/02/GMIT-logo.png">
# Assignment - Programming for Data Analysis
* **Author:** John Paul Lee
* **Github:** JPLee01
* **Email:** G00387906@gmit.ie
* **Created:** 31-10-2020, **Last update:** 22-11-2020
* Programming of Data Analysis: *numpy.random* Assignment 2020
****
This Jupyter Notebook has been created to explain the *numpy.random* package in Python. This notebook will explain the package's use and give detailed explanations of at least five of the distributions provided in the package for the Programming of Data Analysis Assignment.
**Lecturer:** Dr. Brian McGinley
The Project instructions can be found [here](https://github.com/JPLee01/Programming_for_Data_Analysis-Assignment/blob/main/Assignment%20Instructions.pdf)
****
As part of the assignment this notebook will be broken into four distinct sections:
1. Explain the overall purpose of the package
2. Explain the use of the “Simple Random Data” and “Permutations” functions
3. Explain the use and purpose of at least five “Distributions” functions
4. Explain the use of seeds in Generating Pseudorandom Numbers
## Preliminaries
Prior to explaining each section we first need to import a number of libraries. We need to import the NumPy library as it is essential for the analysis of the numpy.random package. The Matplotlib and Seaborn libraries will also need to be imported to allow for the creation of visualisations in the assignment.
```
# Import numpy to allow for analysis of the numpy.random package
# Import matplotlib.pyplot and seaborn for the creation of visualisations
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
```
Also as we will be displaying Plots in this Jupyter Notebook we will implement the *inline* magic command to allow the Plots to be rendered inline within the Notebook.<sup>[1]</sup>
```
#Inline Magic command implemented to ensure that the Plots are rendered inline
%matplotlib inline
```
To ensure uniformity throughout the Jupyter Notebook in terms of how the Seaborn plots are displayed, the *style* and *palette* functions will be set.
The *style* function will be set to *darkgrid*. This allows the plots to be read easily, as the darkened background with built-in grid lines displays well against the white background of the Jupyter Notebook.<sup>[2]</sup>
The *palette* function will be set to *bright* as it allows for clear distinction of multiple outputs within one plot.<sup>[3]</sup>
```
#Setting of Seaborn displays to ensure uniformity throughout the Jupyter Notebook
#Darkgrid style selected to allow the plots to be read easily
sns.set_style("darkgrid")
#Bright colour palette selected to allow for clear distinction of multiple outputs within one Plot
sns.set_palette("bright")
```
## Section 1 - Explain the Overall Purpose of the Package
<img align="centre" width="350" height="350" img src="https://user-images.githubusercontent.com/50221806/86498201-a8bd8680-bd39-11ea-9d08-66b610a8dc01.png">
The numpy.random module is a package within the NumPy (Numerical Python) library for doing random sampling.<sup>[4]</sup>
NumPy according to it's [manual](https://numpy.org/doc/stable/user/whatisnumpy.html) is a "Python library that provides a multidimensional array object, various derived objects (such as masked arrays and matrices), and an assortment of routines for fast operations on arrays, including mathematical, logical, shape manipulation, sorting, selecting, I/O, discrete Fourier transforms, basic linear algebra, basic statistical operations, random simulation."<sup>[5]</sup> The NumPy library enhances Python through the use of powerful data structures, implementing multi-dimensional arrays and matrices.<sup>[6]</sup>
These data structures guarantee efficient calculations with matrices and arrays.<sup>[7]</sup> As a result, NumPy is able to help programmers in easily performing numerical computations, and is considered the fundamental package for scientific computing with Python.<sup>[8]</sup> Some of the numerical computations which can be performed easily with NumPy include:<sup>[9]</sup>
* Machine Learning Models
* Image Processing and Computer Graphics
* Mathematical tasks
The numpy.random module within NumPy produces pseudorandom numbers (numbers created through the use of algorithms that use mathematical formulae or simply precalculated tables to produce sequences of numbers that appear random<sup>[10]</sup>) using combinations of a BitGenerator to create sequences and a Generator to use those sequences to sample from different statistical distributions.<sup>[11]</sup> A BitGenerator generates random numbers. These are typically unsigned integer words filled with sequences of either 32 or 64 random bits.<sup>[12]</sup> A Generator then transforms the sequences of random bits from the BitGenerator into sequences of numbers that follow a specific probability distribution (such as uniform, Normal or Binomial) within a specified interval.<sup>[13]</sup>
It should be noted that since NumPy version 1.17.0 the Generator can be initialized with a number of different BitGenerators.<sup>[11]</sup> The default BitGenerator within NumPy is PCG64 which is a 128-bit implementation of O’Neill’s permutation congruential generator.<sup>[14]</sup>
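As a brief illustration of this relationship (the seed value 42 below is an arbitrary choice), a Generator can be built either through ```np.random.default_rng``` or by passing an explicit PCG64 BitGenerator, and both routes produce the same stream of numbers:
```
#Brief illustration (the seed 42 is an arbitrary choice): a Generator built from the
#default BitGenerator and one built from an explicit PCG64 instance behave identically
rng_default = np.random.default_rng(42)                 #PCG64 under the hood
rng_explicit = np.random.Generator(np.random.PCG64(42)) #PCG64 made explicit
print(rng_default.integers(0, 10, 5))
print(rng_explicit.integers(0, 10, 5))                  #same numbers: same BitGenerator state
```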
## Section 2 - Explain the use of the “Simple Random Data” and “Permutations” functions
Within this section I will explain the use of the simple random data and permutations functions within NumPy.
### Section 2.1 - Simple Random Data
In statistics a simple random sample is defined as a subset of a statistical population in which each member of the subset has an equal probability of being chosen.<sup>[15]</sup> Moore, David and McCabe go into further detail by describing it as: "A simple random sample of size *n* consists of *n* individuals from the population chosen in such a way that every set of *n* individuals has an equal chance to be the sample actually selected".<sup>[16]</sup> This is also displayed in pictorial form in the image below. An example of simple random sampling in action would be when a teacher puts students' names in a hat and chooses without looking to get a sample of students. In essence, a simple random sample is meant to be an unbiased representation of a group; a positive of this is that it is seen as fairly representative since it doesn't favor certain members/groups.<sup>[17]</sup>
<img align="centre" width="350" height="350" img src="https://rm-15da4.kxcdn.com/wp-content/uploads/2015/04/Simple-random-sampling2.png">
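To make the "names in a hat" example above concrete (the student names below are invented purely for illustration; the ```choice``` function used here is covered in detail in Section 2.1.3):
```
#Illustration of the "names in a hat" example (the student names are invented):
#each student has an equal chance of being drawn, and replace=False means a name
#cannot be drawn twice - a simple random sample of size 3
students = ["Aoife", "Brian", "Ciara", "Declan", "Emma", "Fionn"]
print(np.random.default_rng().choice(students, 3, replace=False))
```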
Within the *numpy.random* package there are four functions which produce Simple Random Data:
1. Integers
2. Random
3. Choice
4. Bytes
#### 2.1.1 Integers
This function returns random integers from the discrete uniform distribution (i.e. equally likely outcomes<sup>[18]</sup>) of the specified dtype.
The method for generating these integers is as follows:
**```Generator.integers(low, high=None, size=None, dtype=np.int64, endpoint=False)```**
The parameters of the above method are as follows:<sup>[19]</sup>
* *low* - Lowest (signed) integers to be drawn from the distribution (if ```high=None``` selected low is set to 0 and is used for the high parameter). The value for this parameter may be an integer or array-like of integers.
* *high* - Optional parameter. If provided, one above the largest (signed) integer to be drawn from the distribution (see above for if ```high=None``` selected). If array-like, must contain integer values. The value (if chosen) for this parameter may be an integer or array-like of integers.
* *size* - Optional parameter. If provided, dictates the output shape. If the given shape is, e.g., ```size=(2, 4, 10)``` then ```2 * 4 * 10``` samples are drawn (2 x Arrays (Groups), 4 x Rows in Each Array and 10 x Columns in Each Array). Default is None, in which case a single value is returned. The value (if chosen) for this parameter may be an integer or tuple of integers.
* *dtype* - Optional parameter. The desired dtype (data type object<sup>[20]</sup>) of the result. Byteorder must be native. The default value is ```np.int64```. The value (if chosen) for this parameter may be a dtype.
* *endpoint* - Optional parameter. If ```endpoint=True```, samples are drawn from the closed interval [low, high] (i.e. high becomes inclusive) instead of the default half-open interval [low, high). Defaults to False. The value (if chosen) for this parameter may be boolean. (A short demonstration of this parameter is included after the examples below.)
The return from this function is a size-shaped array of random integers from the appropriate distribution, or a single such random integer if size is not provided.
This will now be highlighted in the below examples:
```
#Generate a 2 x 8 x 3 array (2 Groups with 8 Rows and 3 columns) of integers between 1 and 10, inclusive
#Setting of "rng" as the np.random.default_rng() function. This will be used throughout the Notebook
rng = np.random.default_rng()
z = rng.integers(1, 11, size=(2, 8, 3))
print(z)
#Generate a 1 x 4 x 6 array (1 Group with 4 Rows and 6 columns) of integers between 1 and 5, inclusive
x = rng.integers(1, 6, size=(1, 4, 6))
print(x)
```
The resulting arrays can also be displayed visually using seaborn and matplotlib.pyplot:
```
#Visualise the above Integers
sns.distplot(x, bins=6)
plt.xlabel("Random Numbers")
plt.ylabel("Frequency")
```
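As a further illustration (these additional calls are not part of the original examples), the optional *endpoint* and *dtype* parameters can be demonstrated directly:
```
#Further illustration (not part of the original examples): endpoint=True makes the
#high value inclusive, and dtype controls the integer type of the returned array
inclusive = rng.integers(1, 10, size=10, endpoint=True)    #values may now include 10
print(inclusive)
small_ints = rng.integers(0, 100, size=5, dtype=np.int32)  #32-bit integers
print(small_ints, small_ints.dtype)
```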
#### 2.1.2 Random
This function returns random floats in the half-open interval [0.0, 1.0), i.e. it returns random values between 0 (inclusive) and 1 (exclusive) in a given shape.<sup>[21]</sup>
The results are from the continuous uniform distribution over the stated interval. This means that it takes values within a specified range, e.g. between 0 and 1.<sup>[22]</sup> The mathematical formula for this distribution can be seen in the image below:
<img align="centre" width="500" height="500" img src="https://images.slideplayer.com/25/7872338/slides/slide_1.jpg">
The method for generating the random floats is as follows:
**```Generator.random(size=None, dtype=np.float64, out=None)```**
The parameters of the above method are as follows:<sup>[23]</sup>
* *size* - Optional parameter. If provided, dictates the output shape. If the given shape is, e.g., ```size=(3, 5, 8)``` then ```3 * 5 * 8``` samples are drawn (3 x Arrays (Groups), 5 x Rows in Each Array and 8 x Columns in Each Array). Default is None, in which case a single value is returned. The value (if chosen) for this parameter may be an integer or tuple of integers.
* *dtype* - Optional parameter. The desired dtype (data type object<sup>[20]</sup>) of the result. Byteorder must be native. The default value is ```np.float64``` and only float64 and float32 are supported. The value (if chosen) for this parameter must be either ```np.float64``` or ```np.float32```.
* *out* - Optional parameter. An alternative output array in which to place the result. If size is not None, it must have the same shape as the provided size and must match the type of the output values. The value (if chosen) for this parameter may be an ndarray i.e. a multidimensional container of items of the same type and size.<sup>[24]</sup>
The return from this function is an array of random floats of shape *size*, unless size is None, in which case a single float is returned.
This will now be highlighted in the below examples:
```
#Generate a random float between 0 and 1
rng.random()
#Generate a 4 x 3 Array (1 Array (Group) with 4 Rows and 3 columns)) of random numbers from -5 to 0
x = 5 * rng.random((4, 3)) - 5
print(x)
```
The resulting floats can also be displayed visually using seaborn and matplotlib.pyplot:
```
#Visualise the above Floats
sns.distplot(x, bins=8)
plt.xlabel("Random Numbers")
plt.ylabel("Frequency")
```
A multi-dimensional *size* can also be passed to this function, producing several arrays of random floats at once.
```
#Generate a 4 x 2 x 2 array (4 Groups with 2 Rows and 2 columns) of random numbers
x = rng.random((4, 2, 2))
print(x)
```
The resulting arrays of random floats can also be displayed visually using seaborn and matplotlib.pyplot:
```
#Visualise the above arrays of random Floats
sns.distplot(x[0,0], label="x[0,0]")
sns.distplot(x[0,1], label="x[0,1]")
sns.distplot(x[1,0], label="x[1,0]")
sns.distplot(x[1,1], label="x[1,1]")
plt.xlabel("Random Numbers")
plt.ylabel("Frequency")
plt.legend(loc="best")
```
#### 2.1.3 Choice
This function generates a random sample from a given 1-D (One Dimensional) array, i.e. it generates random samples for data analysis.<sup>[25]</sup>
The method for generating the random samples is as follows:
**```Generator.choice(a, size=None, replace=True, p=None, axis=0, shuffle=True)```**
The parameters of the above method are as follows:<sup>[26]</sup>
* *a* - The array you want to operate on. If an ndarray is selected, a random sample is generated from its elements. If an integer is selected, the random sample is generated from ```np.arange(a)```. The value for this parameter may be an integer or array-like of integers.
* *size* - Optional parameter. If provided, dictates the output shape. If the given shape is, e.g., ```size=(2, 3, 3)```then ```2 * 3 * 3``` samples are drawn (2 x Arrays (Groups), 3 x Rows in Each Array and 3 x Columns in Each Array) in the 1-D *a*. If *a* has more than one dimension, the size shape will be inserted into the axis dimension, so the output ```ndim``` will be ```a.ndim - 1 + len(size)```. Default is None, in which case a single value is returned.
* *replace* - Optional parameter of whether the sample is with or without replacement. The value (if chosen) for this parameter may be boolean.
* *p* - Optional parameter. The probabilities associated with each entry in *a*. If not given the sample assumes a uniform distribution over all entries in *a*. The value (if chosen) for this parameter may be an one dimensional array-like of integers.
* *axis* - Optional parameter. The axis along which the selection is performed. The default is 0. The value (if chosen) for this parameter may be an integer.
* *shuffle* - Optional parameter. Decides whether the sample is shuffled when sampling without replacement. The default is True, if False selected a speedup is provided. The value (if chosen) for this parameter may be boolean.
The return from this function is a single item or an array of random samples.
It should be noted that a ```ValueError``` may be raised if (a short demonstration of one such case follows the list below):
* *a* is an integer and less than zero,
* *p* is not 1-dimensional,
* *a* is array-like with a size 0
* *p* is not a vector of probabilities
* *a* and *p* have different lengths
* ```replace=False``` and the sample size is greater than the population size.
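As a quick demonstration of one of these cases (the probabilities below deliberately sum to 0.9 rather than 1):
```
#Quick demonstration of one ValueError case: the probabilities sum to 0.9, not 1,
#so choice refuses to sample and raises a ValueError
try:
    rng.choice(3, 5, p=[0.5, 0.3, 0.1])
except ValueError as err:
    print("ValueError raised:", err)
```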
This function will now be highlighted in the below examples:
```
#Generate a uniform random sample from 0 to 10 of 20 values
rng.choice(11, 20)
#It should be noted that this is the equivalent to rng.integers(0,11,20)
rng.integers(0,11,20)
#Generate a non-uniform random sample from 0 to 10 of 50 values
a = rng.choice(11, 50, p=[0.1, 0.2, 0.2, 0, 0.02, 0.25, 0.0, 0.1, 0.05, 0.05, 0.03])
print(a)
```
The resulting random sample can also be displayed visually using seaborn and matplotlib.pyplot:
```
#Visualise the above Random Sample
sns.distplot(a)
plt.xlabel("Random Numbers")
plt.ylabel("Frequency")
#Generate a uniform random sample from 0 to 100 of 50 values, without replacement:
y = rng.choice(101, 50, replace=False)
print(y)
```
The resulting random sample can also be displayed visually using seaborn and matplotlib.pyplot:
```
#Visualise the above Random Sample
sns.distplot(y)
plt.xlabel("Random Numbers")
plt.ylabel("Frequency")
```
This function can also be used for non-numerical arrays.
```
#Generate a uniform random sample of 3 European cities from a list of 10, without replacement:
cityList = ["Dublin", "London", "Rome", "Paris", "Berlin", "Istanbul", "Moscow", "Madrid", "Lisbon", "Zürich"]
x = rng.choice(cityList, 3, replace=False)
print(x)
```
#### 2.1.4 Bytes
This function returns random bytes.<sup>[27]</sup>
The method for generating the random bytes is as follows:
**```Generator.bytes(length)```**
The parameter of the above method is:<sup>[28]</sup>
* *length* - Number of random bytes. The value for this parameter must be an integer.
The return from this function is a string of *length* random bytes.
This will now be highlighted in the below example:
```
#Return 20 Random Bytes
rng.bytes(20)
```
Note that *\x* indicates a hexadecimal escape sequence, used for bytes that cannot otherwise be displayed as printable characters.<sup>[29]</sup>
### Section 2.2 - Permutations
A permutation is a mathematical technique that determines the number of possible arrangements in a set when the order of the arrangements matters.<sup>[30]</sup> While they may seem similar, it is important to note that permutations and combinations differ because in combinations the order of the arrangement is not important.<sup>[31]</sup> An example to highlight this difference is the PIN to your bank account. Your PIN is a permutation, as it needs to be entered in the correct order to access your bank account; if it were a combination, the order of the numbers would not matter.
Statistically speaking, a permutation is described as *n* distinct objects taken *r* at a time, meaning that *n* refers to the number of objects from which the permutation is formed and *r* refers to the number of objects used to form the permutation.<sup>[32]</sup> The mathematical formula for this can be seen in the image below:
<img align="centre" width="350" height="350" img src="https://www.bizskinny.com/images/Permutation-Formula.PNG">
To highlight this formula we will use the following example. Suppose we have a set of three letters: X, Y, and Z, and we want to see how many ways we can arrange 2 letters from that set. Each possible arrangement would be an example of a permutation. The complete list of possible permutations would be: XY, XZ, YX, YZ, ZX, and ZY. As the permutation was formed from 3 letters (X, Y, and Z), n = 3; and each permutation consisted of 2 letters, so r = 2.<sup>[33]</sup>
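The worked example can be reproduced in code (using Python's standard ```itertools``` and ```math``` modules rather than numpy.random, purely to enumerate and count the arrangements):
```
#Reproduce the worked example with the standard library (not numpy.random):
#enumerate every ordered arrangement of 2 letters from {X, Y, Z} and count them
#with the nPr formula n! / (n - r)!
from itertools import permutations
from math import factorial

letters = ["X", "Y", "Z"]
arrangements = ["".join(p) for p in permutations(letters, 2)]
print(arrangements)                      #['XY', 'XZ', 'YX', 'YZ', 'ZX', 'ZY']
print(factorial(3) // factorial(3 - 2))  #6 permutations
```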
Within the numpy.random package there are two functions which use Permutations:
1. Shuffle
2. Permutation
#### 2.2.1 Shuffle
This function modifies a sequence in-place by shuffling its contents. It should be noted that this function only shuffles the array along the first axis of a multi-dimensional array; the order of the sub-arrays is changed but their contents remain the same.
The method for modifying a sequence in-place is as follows:
**```Generator.shuffle(x, axis=0)```**
The parameters of the above method are:<sup>[34]</sup>
* *x* - The array or list to be shuffled. The value for this parameter must be an array-like of integers.
* *axis* - Optional parameter. The axis along which *x* is shuffled along. The default is 0 and it is only supported on ndarray objects. The value (if chosen) for this parameter may be an integer.
The shuffle function will now be highlighted in the below examples:
```
#Randomly shuffle the integers 0 to 19
arr = np.arange(20)
rng.shuffle(arr)
arr
```
This function can also be used for non-numerical arrays:
```
#Randomly shuffle 10 Football Teams
Teams = ["Arsenal", "Liverpool", "Man United", "Man City", "Spurs", "Chelsea", "Southampton",
"Leicester", "Everton", "Leeds"]
rng.shuffle(Teams)
Teams
```
We can also shuffle integers generated from the *np.arange* function:
```
#Shuffle the range of 0-20 generated by np.arange
List = np.arange(21)
rng.shuffle(List)
List
```
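To illustrate the note above that only the first axis is shuffled (this extra example is not in the original text):
```
#Extra illustration (not in the original text): shuffle only rearranges the rows
#(the first axis) of a 2-D array - the contents of each row stay together
grid = np.arange(12).reshape(4, 3)
rng.shuffle(grid)
print(grid)
```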
#### 2.2.2 Permutation
The function randomly permutes (rearranges) a sequence, or returns a permuted range. It should be noted that depending on the input the function operates differently.
The method for randomly permuting a sequence is as follows:
**```Generator.permutation(x, axis=0)```**
The parameters of the above method are:<sup>[35]</sup>
* *x* - The array or list to be permuted. If *x* is an integer, randomly permute ```np.arange(x)```. If *x* is an array, a copy is made and its elements are shuffled randomly, leaving the original untouched (a comparison with the in-place ```shuffle``` is shown after the examples below).<sup>[36]</sup> The value for this parameter must be an integer or an array-like of integers.
* *axis* - Optional parameter. The axis along which *x* is shuffled along. The default is 0. The value (if chosen) for this parameter may be an integer.
The permutation function will now be highlighted in the below examples:
```
#As x is an integer the function will assume the input is a range and randomly permute np.arange(x)
rng.permutation(5)
#As x is an array the function will make a copy and shuffle the elements randomly
rng.permutation([2, 3, 7, 11, 44])
```
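A small additional comparison (not part of the original examples) makes the copy-versus-in-place distinction between ```permutation``` and ```shuffle``` explicit:
```
#Additional comparison (not part of the original examples): permutation returns a
#shuffled copy and leaves the input untouched, whereas shuffle rearranges in place
original = np.arange(5)
copy_shuffled = rng.permutation(original)
print(original)        #unchanged: [0 1 2 3 4]
print(copy_shuffled)   #a shuffled copy
rng.shuffle(original)
print(original)        #now shuffled in place
```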
## Section 3 - Explain the Use and Purpose of at Least Five “Distributions” Functions
In this section we will explore the following five distribution functions in the numpy.random package:
1. Normal
2. Binomial
3. Poisson
4. Hypergeometric
5. Laplace
### 3.1 Normal Distribution
The normal distribution, also known as the Gaussian distribution (after the German mathematician Carl Friedrich Gauss<sup>[37]</sup>), is viewed as one of the most important probability distributions.<sup>[38]</sup> This is due to the fact that it fits many natural phenomena such as heights, blood pressure, measurement error, and IQ scores.<sup>[39]</sup>
The normal distribution is a probability function that describes how the values of a variable are distributed. It is a symmetric distribution where most of the observations cluster around the central peak and the probabilities for values further away from the mean taper off equally in both directions. Extreme values in both tails of the distribution are similarly unlikely. As a result it is also know as the Bell Curve.<sup>[40]</sup> A sample normal distribution can be seen in the image below:
<img align="centre" width="350" height="350" img src="https://mathbitsnotebook.com/Algebra2/Statistics/normalturqa.jpg">
The graph of the normal distribution depends on two factors, the mean and the standard deviation. The mean of the distribution determines the location of the center of the graph, and the standard deviation determines the height and width of the graph. When the standard deviation is small, the curve is tall and narrow, and when the standard deviation is big, the curve is short and wide.<sup>[41]</sup> As a result, the normal distribution works best when the sample size is very large.<sup>[42]</sup>
It should also be noted that every normal distribution curve (regardless of its mean or standard deviation) conforms to the 68-95-99.7 rule<sup>[43]</sup>. This is that:
* About 68% of the area under the curve falls within 1 standard deviation of the mean.
* About 95% of the area under the curve falls within 2 standard deviations of the mean.
* About 99.7% of the area under the curve falls within 3 standard deviations of the mean.
This rule can also be seen in graphical form below:
<img align="centre" width="450" height="450" img src="https://miro.medium.com/max/1400/1*IZ2II2HYKeoMrdLU5jW6Dw.png">
The normal distribution function in numpy draws random samples from a normal (Gaussian) distribution.
The method for achieving this is as follows:
**```numpy.random.normal(loc=0.0, scale=1.0, size=None)```**
The parameter of the above method is:<sup>[44]</sup>
* *loc* - Mean or centre of the distribution. The value for this parameter must be a float or an array-like of floats.
* *scale* - Standard deviation i.e the spread or width, of the distribution. The value for this parameter must be a non-negative float or an array-like of floats.
* *size* - Dictates the output shape. If the given shape is, e.g., ```size=(1, 4, 7)```then ```1 * 4 * 7``` samples are drawn. If size is ```None``` (default), a single value is returned if ```loc``` and ```scale``` are both scalars. Otherwise, ```np.broadcast(loc, scale).size``` samples are drawn.
The normal distribution function will now be highlighted in the below examples:
```
#Generate and display 50 random numbers from the Normal Distribution
c = np.random.normal(size=50)
sns.distplot(c)
plt.xlabel("Random Numbers")
plt.ylabel("Frequency")
#Generate and display 500 random numbers from the Normal Distribution with a mean of 0 and standard deviation of .1
mu, sigma = 0, 0.1
c = np.random.normal(mu, sigma, size=500)
sns.distplot(c)
plt.xlabel("Random Numbers")
plt.ylabel("Frequency")
```
As you can see, the greater the sample size, the more closely the histogram follows the normal distribution curve. The 68-95-99.7 rule also holds in both examples.
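The 68-95-99.7 rule can also be checked empirically (the sample size of 100,000 used here is an arbitrary choice for this sketch):
```
#Empirical check of the 68-95-99.7 rule (the sample size is an arbitrary choice):
#the printed proportions should land close to 0.68, 0.95 and 0.997
sample = np.random.normal(mu, sigma, size=100000)
for k in (1, 2, 3):
    within = np.mean(np.abs(sample - mu) < k * sigma)
    print(f"Within {k} standard deviation(s): {within:.3f}")
```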
### 3.2 Binomial Distribution
The binomial distribution is a probability distribution that describes the outcome of *n* independent trials in an experiment. Each trial is assumed to have only two outcomes, either success or failure, each with an associated probability.<sup>[45]</sup> The mathematical formula for this can be seen in the image below:<sup>[46]</sup>
<img align="centre" width="350" height="350" img src="https://www.onlinemathlearning.com/image-files/binomial-distribution-formula.png">
The binomial distribution can be used in many real world situations: if a new drug is introduced to cure a disease, it either cures the disease (a success) or it doesn't (a failure). Similarly, if you purchase a lottery ticket, you're either going to win money, or you aren't. In essence, any situation in which there can only be a success or a failure can be represented by a binomial distribution.<sup>[47]</sup>
The binomial distribution function in numpy draws samples from a binomial distribution. Samples are drawn from a binomial distribution with specified parameters, *n* trials and *p* probability of success, where *n* is an integer greater than or equal to 0 and *p* is in the interval [0, 1]. Note that *n* may be input as a float, but it is truncated to an integer in use (i.e. the digits after the decimal point are dropped).
The method for achieving this is as follows:
**```numpy.random.binomial(n, p, size=None)```**
The parameters of the above method are:<sup>[48]</sup>
* *n* - Parameter of the distribution, must be greater than or equal to 0. Floats are also accepted, but as stated above, they will be truncated to integers. The value for this parameter must be an integer or an array-like of integers.
* *p* - Parameter of the distribution, must be between 0 and 1. The value for this parameter must be a non-negative float or an array-like of floats.
* *size* - Dictates the output shape. If the given shape is, e.g., ```size=(2, 3, 6)```then ```2 * 3 * 6``` samples are drawn. If size is ```None``` (default), a single value is returned if ```n``` and ```p``` are both scalars. Otherwise, ```np.broadcast(n, p).size``` samples are drawn.
The output of the binomial distribution function is an array of samples drawn from the parameterized binomial distribution, where each sample is equal to the number of successes over the *n* trials.
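As a quick illustration of that output (a minimal sketch with arbitrary parameters, separate from the fuller example below), each sample is a whole number of successes and the mean of the samples sits close to n * p:
```
#Quick sketch: 10 trials per experiment, 30% chance of success, repeated 10000 times
import numpy as np

samples = np.random.binomial(n=10, p=0.3, size=10000)
print(samples[:10])                               #each entry is a count of successes out of 10
print('Empirical mean: ' + str(samples.mean()))   #should be close to n * p = 3.0
```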
The binomial distribution function will now be highlighted in the below example (developed from the work of Tony Yiu<sup>[49]</sup>):
In this example we will run a stylized real world case in which a technology company is trying to improve its ROI (Return on Investment) on its app launches. From analysis we have gathered the following information (a quick expected-value check follows the list below):
* The company develops on average 2 new apps a year.
* The probability of a conversion for each app is 10%.
* The average revenue to the company for each conversion is €100,000.
* The company has 1000 employees.
* Each employee is paid on average €22,500 a year.
* The yearly fixed costs for the company are calculated at €10,000,000.
As a result we can see that:
* n = 2
* p = 10%
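Before running any simulation, a quick expected-value check using the figures above already points to a loss: each employee is expected to produce n * p = 0.2 conversions, so 1,000 employees can expect roughly 200 conversions in a year. A minimal sketch of that arithmetic:
```
#Back-of-the-envelope expected profit using the assumptions listed above
employees = 1000
n, p = 2, 0.1
revenue_per_conversion = 100000
wage = 22500
fixed_costs = 10000000

expected_conversions = employees * n * p                            #1000 * 2 * 0.1 = 200
expected_revenue = expected_conversions * revenue_per_conversion    #20,000,000
expected_costs = employees * wage + fixed_costs                     #32,500,000
print('Expected profit: ' + str(expected_revenue - expected_costs)) #approximately -12,500,000
```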
```
#Calculating the Binomial Distribution
#Number of employees
employees = 1000
#Cost per employee
wage = 22500
#Number of new apps developed a year
n = 2
#Probability of success for each app
p = 0.1
#Revenue per product
revenue = 100000
#Binomial random variables of the technology company
conversions = np.random.binomial(n, p, size=employees)
#Print some key metrics of the technology company
print('Average Conversions per Employee: ' + str(round(np.mean(conversions), 2)))
print('Standard Deviation of Conversions per Employee: ' + str(round(np.std(conversions), 2)))
print('Total Conversions: ' + str(np.sum(conversions)))
print('Total Revenues: ' + str(np.sum(conversions)*revenue))
print('Total Expense: ' + str(employees*wage + 10000000))
print('Total Profits: ' + str(np.sum(conversions)*revenue - employees*wage - 10000000))
```
As we can see, the technology company is expected to make a loss. While the above calculations predict a loss, it must be remembered that these are the results for just one randomly generated year. Let's look at the profits for the technology company across 1,000 simulations and see how the yearly profit varies:
```
# Simulate 1000 iterations of the above calculations
# Number of simulations
sims = 1000
sim_conversions = [np.sum(np.random.binomial(n, p, size=employees)) for i in range(sims)]
sim_profits = np.array(sim_conversions)*revenue - employees*wage - 10000000
# Plot the results as a histogram
fig, ax = plt.subplots(figsize=(14,7))
ax = sns.distplot(sim_profits, bins=20, label='simulation results')
ax.set_xlabel("Profits",fontsize=16)
ax.set_ylabel("Frequency",fontsize=16)
```
As we can see in the above graph, the binomial distribution predicts that the company will almost certainly generate a loss. This result would also have a severe impact on the company's share price and, as a result, a greater ROI will be sought.
Through a complete overhaul of its operations, the company has been able to streamline how it works. This has resulted in an increase in the number of new apps developed a year to 4 and an increase in the probability of success for each app to 12%.
With these improvements in place, we will now examine the effect they have on the company's profits:
```
#Calculating the New Binomial distribution
#Number of employees
employees = 1000
#Cost per employee
wage = 22500
#New number of new apps created a year
n = 4
#New probability of success for each app
p = 0.12
#Revenue per app
revenue = 100000
#New binomial random variables of the technology company
new_conversions = np.random.binomial(n, p, size=employees)
#Print some key metrics of the technology company
print('Average Conversions per Employee: ' + str(round(np.mean(new_conversions), 2)))
print('Standard Deviation of Conversions per Employee: ' + str(round(np.std(new_conversions), 2)))
print('Total Conversions: ' + str(np.sum(new_conversions)))
print('Total Revenues: ' + str(np.sum(new_conversions)*revenue))
print('Total Expense: ' + str(employees*wage + 10000000))
print('Total Profits: ' + str(np.sum(new_conversions)*revenue - employees*wage - 10000000))
```
As we can see, as a result of the improvements the company is now expected to generate a healthy profit. Again, to fully investigate these changes we will conduct 1,000 simulations of the new results and see how the yearly profit varies:
```
# Simulate 1000 iterations with the new calculations
# Number of simulations
sims = 1000
imp_conversions = [np.sum(np.random.binomial(n, p, size=employees)) for i in range(sims)]
imp_profits = np.array(imp_conversions)*revenue - employees*wage - 10000000
# Plot the results as a histogram
fig, ax = plt.subplots(figsize=(14,7) )
ax = sns.distplot(imp_profits, bins=20, label='simulation results', color='red')
ax.set_xlabel("Profits",fontsize=16)
ax.set_ylabel("Frequency",fontsize=16)
```
As we can see, with the new binomial distribution the company now has a far higher chance of making a substantial profit. We will now plot the old and new distributions of simulated profits to visually highlight the changes.
```
# Plot the new results versus the old as a histogram
fig, ax = plt.subplots(figsize=(14,7))
ax = sns.distplot(sim_profits, bins=20, label='Original Simulation Results')
ax = sns.distplot(imp_profits, bins=20, label='Improved Simulation Results', color='red')
ax.set_xlabel("Profits",fontsize=16)
ax.set_ylabel("Frequency",fontsize=16)
plt.legend()
```
As we can see, the number of conversions produced by each employee follows a binomial distribution. As a result of an increase in both the n (number of apps developed a year) and p (probability of conversion for each app) parameters, higher profits are generated.<sup>[50]</sup>
### 3.3 Poisson Distribution
The Poisson distribution is a discrete distribution that measures the probability of a given number of events happening in a specified time period.<sup>[51]</sup> Named after the French mathematician Siméon Denis Poisson, it is a discrete probability distribution, meaning that the event can only be counted as occurring or not occurring, so the variable can only take whole-number values.<sup>[52]</sup> The mathematical formula for this can be seen in the image below:<sup>[53]</sup>
<img align="centre" width="350" height="350" img src="https://www.onlinemathlearning.com/image-files/poisson-distribution-formula.png">
The constant *e* referred to above is Euler's number, the base of the natural logarithm, approximately equal to 2.71828.<sup>[54]</sup>
The Poisson distribution can also be used for the number of events in other specified intervals such as distance, area or volume.<sup>[55]</sup> Real life examples of the use of Poisson distribution include the number of traffic accidents and the number of phone calls received within a given time period.<sup>[56]</sup>
The method for achieving this is as follows:
**```numpy.random.poisson(lam=1.0, size=None)```**
The parameters of the above method are:<sup>[57]</sup>
* *lam* - Expectation of the interval, i.e. the expected number of events in the interval. Must be greater than or equal to 0. A sequence of expectation intervals must be broadcastable over the requested size. The value for this parameter must be a float or an array-like of floats.
* *size* - Dictates the output shape. If the given shape is, e.g., ```size=(5, 1, 4)```then ```5 * 1 * 4``` samples are drawn. If size is ```None``` (default), a single value is returned if ```lam``` is a scalar. Otherwise, ```np.array(lam).size``` samples are drawn.
The output of the Poisson distribution function is an array of samples drawn from the parameterized Poisson distribution.
The Poisson distribution function will now be highlighted in the below example:
```
#Poisson Distribution of 100 samples at an interval of 10
x = rng.poisson(10, size=100)
#Generate a distplot to visualise:
sns.distplot(x)
plt.xlabel("Random Numbers")
plt.ylabel("Frequency")
#Poisson Distribution of 500 samples at an interval of 2
q = rng.poisson(2, size=500)
#Generate a distplot to visualise:
sns.distplot(q)
plt.xlabel("Random Numbers")
plt.ylabel("Frequency")
```
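One useful property of the Poisson distribution is that its mean and variance are both equal to lam. A short sketch (using the same rate of 10 as the first example above) confirms this empirically:
```
#Check that the sample mean and variance of a Poisson distribution are both close to lam
import numpy as np

samples = np.random.poisson(lam=10, size=100000)
print('Sample mean: ' + str(round(samples.mean(), 2)))       #close to 10
print('Sample variance: ' + str(round(samples.var(), 2)))    #also close to 10
```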
### 3.4 Hypergeometric Distribution
The hypergeometric distribution is a probability distribution in which selections are made from two groups without replacing members of the groups.<sup>[58]</sup> While similar to the binomial distribution, the hypergeometric distribution differs due to the lack of replacement.<sup>[59]</sup> The binomial distribution is, however, a very good approximation of the hypergeometric distribution as long as you are sampling 5% or less of the population,<sup>[60]</sup> as illustrated in the short sketch after the image below.
The hypergeometric distribution can be used in many real world situations, such as the random selection of members for a team from a population of girls and boys, or the random selection of a certain suit of cards from a pack.<sup>[61]</sup> A sample hypergeometric distribution for three different scenarios can be seen in the image below:
<img align="centre" width="400" height="350" img src="https://upload.wikimedia.org/wikipedia/en/thumb/1/1a/NoncentralHypergeometricCompare1.png/300px-NoncentralHypergeometricCompare1.png">
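The approximation mentioned above can be illustrated with a short sketch (the population figures below are arbitrary): when only 1% of a large population is sampled, the hypergeometric and binomial samples behave almost identically:
```
#Compare hypergeometric sampling with its binomial approximation
import numpy as np

#Population of 10,000 items, 3,000 of which are "good"; sample only 100 of them (1% of the population)
ngood, nbad, nsample = 3000, 7000, 100

hyper = np.random.hypergeometric(ngood, nbad, nsample, size=100000)
binom = np.random.binomial(nsample, ngood / (ngood + nbad), size=100000)

#Both means should be close to 100 * 0.3 = 30
print('Hypergeometric mean: ' + str(round(hyper.mean(), 2)))
print('Binomial mean: ' + str(round(binom.mean(), 2)))
```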
The method for achieving the NumPy hypergeometric distribution function is as follows:
**```numpy.random.hypergeometric(ngood, nbad, nsample, size=None)```**
The parameters of the above method are:<sup>[62]</sup>
* *ngood* - Number of ways to make a good (positive) selection. Must be nonnegative i.e. either positive or equal to zero. The value for this parameter must be an integer or an array-like of integers.
* *nbad* - Number of ways to make a bad (negative) selection. Must be nonnegative i.e. either positive or equal to zero. The value for this parameter must be an integer or an array-like of integers.
* *nsample* - Number of items sampled. Must be at least 1 and at most ```ngood + nbad```. The value for this parameter must be an integer or an array-like of integers.
* *size* - Dictates the output shape. If the given shape is, e.g., ```size=(2, 3, 1)```then ```2 * 3 * 1``` samples are drawn. If size is ```None``` (default), a single value is returned if ```ngood, nbad,``` and ```nsample``` are all scalars. Otherwise, ```np.broadcast(ngood, nbad, nsample).size``` samples are drawn.
The hypergeometric distribution function will now be highlighted in the below examples:
What is the probability of getting a spade in a 5-card hand in poker? We can simulate a number of hands below and compare the result with the exact probability afterwards.
```
#There are 13 spades total in a deck (ngood), which means there are 39 other cards remaining (nbad).
#It is a 5 card hand (nsample), and size=52 draws 52 simulated hands
s = np.random.hypergeometric(13, 39, 5, 52)
plt.xlabel('Number of Spades Drawn')
plt.ylabel('Probability')
plt.title('Probability of getting a Spade in a 5 card hand in Poker')
plt.hist(s)
```
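For comparison, the exact probability of seeing at least one spade in a 5-card hand can be computed directly from the hypergeometric formula using the standard library (no random sampling involved); the simulated counts above should be consistent with it:
```
#Exact probability of at least one spade in a 5-card hand from a 52-card deck
from math import comb

p_no_spades = comb(39, 5) / comb(52, 5)
print('P(at least one spade): ' + str(round(1 - p_no_spades, 4)))   #approximately 0.7785
```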
Suppose we randomly select 5 cards without replacement from an ordinary deck of playing cards. What is the probability of getting exactly 2 red cards? We can estimate this by simulating a large number of 5-card hands and counting how often exactly 2 red cards appear (the exact answer is approximately 0.325):
```
#26 red cards (ngood), 26 black cards (nbad), 5 cards per hand (nsample), 100000 simulated hands (size)
s = np.random.hypergeometric(26, 26, 5, 100000)
#Proportion of simulated hands containing exactly 2 red cards
np.mean(s == 2)
```
### 3.5 Laplace Distribution
The Laplace distribution (named after the French mathematician Pierre Simon Laplace<sup>[63]</sup>) represents the distribution of differences between two independent variables having identical exponential distributions.<sup>[64]</sup> Also known as the Double Exponential distribution, the Laplace distribution is very similar to the above mentioned Normal distribution in that it is unimodal (has one peak) and symmetrical. However, it has a sharper peak than the Normal distribution.<sup>[65]</sup> This difference can be seen in the image below:
<img align="centre" width="400" height="350" img src="https://www.johndcook.com/normal_laplace.svg">
The Laplace distribution is mainly used for modelling distributions with sharp peaks and long tails, such as rainfall and financial variables like stock returns.<sup>[66]</sup> The NumPy Laplace function draws samples with a specified location (or mean) and scale (decay).
The method for achieving this is as follows:
**```numpy.random.laplace(loc=0.0, scale=1.0, size=None)```**
The parameters of the above method are:<sup>[67]</sup>
* *loc* - Optional parameter. Dictates the position of the distribution's peak. The default is 0. The value (if chosen) for this parameter may be a float or an array-like of floats.
* *scale* - Optional parameter. Dictates the exponential decay, i.e. the tail of the distribution. The default is 1 and it must be non-negative i.e. either positive or equal to zero. The value (if chosen) for this parameter may be a float or an array-like of floats.
* *size* - Optional parameter. Dictates the output shape. If the given shape is, e.g., ```size=(1, 5, 2)```then ```1 * 5 * 2``` samples are drawn. If size is ```None``` (default), a single value is returned if ```loc``` and ```scale``` are both scalars. Otherwise, ```np.broadcast(loc, scale).size``` samples are drawn.
The Laplace distribution function will now be highlighted in the below examples:
```
#Draw 10000 samples from a Laplace distribution with its peak centred on 100 and an exponential decay (scale) of 2
loc, scale = 100., 2.
x = np.random.laplace(loc, scale, size=10000)
#Generate a distplot to visualise:
sns.distplot(x)
plt.xlabel("Random Numbers")
plt.ylabel("Frequency")
```
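As a quick numerical check on the scale parameter used above (a minimal sketch, not part of the original example), the standard deviation of Laplace samples should come out close to scale * sqrt(2):
```
#Check the spread of the Laplace samples: standard deviation should be close to scale * sqrt(2)
import numpy as np

samples = np.random.laplace(100.0, 2.0, size=10000)
print('Sample mean: ' + str(round(samples.mean(), 2)))   #close to 100
print('Sample std: ' + str(round(samples.std(), 2)))     #close to 2 * sqrt(2), roughly 2.83
```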
For this example we will plot a Laplace distribution of 5,000 samples, with its peak centred on 0 and an exponential decay of 1. We will then compare this with a Normal distribution with the same parameters.
```
#Set the Location and Scale for the Laplace Distribution
loc, scale = 0., 1.
s = np.random.laplace(loc, scale, 5000)
#Plot a histogram of the Laplace Distribution, including the probability density function
plt.hist(s, 30, density=True)
x = np.arange(-8., 8., .01)
pdf = np.exp(-abs(x-loc)/scale)/(2.*scale)
plt.plot(x, pdf, label='Laplace Distribution', color='red')
#Plot a Normal Distribution for comparison
g = (1/(scale * np.sqrt(2 * np.pi)) * np.exp(-(x - loc)**2 / (2 * scale**2)))
plt.xlabel("Random Numbers")
plt.ylabel("Frequency")
plt.plot(x,g, label='Normal Distribution', color='green')
plt.legend(loc='best')
```
## Section 4 - Explain the use of seeds in Generating Pseudorandom Numbers
A seed specifies the start point where a computer generates a sequence of pseudorandom numbers.<sup>[68]</sup> While a seed can be any number, it is typically taken from the computer system's clock or Unix time.<sup>[69]</sup> Unix time is a timestamp measured from the Unix Epoch, 00:00:00 UTC on January 1st, 1970; it is calculated as the number of seconds between a particular date and time and the Unix Epoch.<sup>[70]</sup> The current Unix time can be found [here](https://www.epochconverter.com).<sup>[71]</sup> Unix time is very useful to computer systems for tracking and sorting dated information in dynamic and distributed applications, and it can also be used to initialize a pseudorandom number generator within the NumPy package.<sup>[70]</sup> NumPy's random package uses pseudorandom numbers because it is a limitation of computers that they cannot produce truly random numbers.<sup>[72]</sup>
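As a small illustration of that idea (a sketch only; NumPy seeds itself automatically when no seed is supplied), the current Unix time can be read in Python and passed in as an explicit seed:
```
#Derive a seed from the current Unix time and use it to create a generator
import time
import numpy as np

unix_seed = int(time.time())
print('Seed derived from Unix time: ' + str(unix_seed))

rng_from_time = np.random.default_rng(unix_seed)
print(rng_from_time.random(3))   #reproducible for this particular seed value
```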
A pseudorandom number is a number which appears to be random but is not. Pseudorandom numbers are generated by pseudorandom number generators (PRNGs), also known as deterministic random bit generators (DRBGs).<sup>[73]</sup> A PRNG is an algorithm for generating a sequence of numbers whose properties approximate the properties of sequences of random numbers. The PRNG sequence is not truly random though, as it is determined by an initial value, which is the PRNG's seed. As a result, PRNGs generate numbers that appear to be random but are predictable: computers use a seed value and an algorithm to generate numbers that seem to be random, but are deterministic.<sup>[74]</sup> Below is an example of a seed and a PRNG in action:<sup>[69]</sup>
* This particular PRNG will take a number *x*, add 900 to it, then subtract 52.
* For the PRNG to start, you have to specify a starting number *x*, i.e. the seed. For this example we will take the seed as 77. The result of this would be:
* Add 900 + 77 = 977
* Subtract 52 = 925 (This is the first 'random number')
* Following the same algorithm, the second “random” number would be:
* 900 + 925 = 1825
* Subtract 52 = 1773 (This is the second 'random number')
While this example is a lot more simplistic than the algorithms behind a computer's PRNG, it does highlight that the process follows a fixed pattern, which will be repeated the next time you enter 77, or whatever number you choose as the *seed*.
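The toy generator described above can be written out in a few lines to show that the same seed always reproduces the same sequence (this is purely illustrative and not how NumPy's generators work internally):
```
#Toy PRNG: add 900 to the current value, then subtract 52
def toy_prng(seed, count):
    numbers = []
    value = seed
    for _ in range(count):
        value = value + 900 - 52
        numbers.append(value)
    return numbers

print(toy_prng(77, 3))   #[925, 1773, 2621] - identical every time the seed is 77
```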
The NumPy function below uses an algorithm that generates pseudorandom numbers to give the appearance of randomness:
**```RandomState.rand(d0, d1, ..., dn)```**
The parameter of the above method is:<sup>[75]</sup>
* *d0, d1, …, dn* - Optional parameter. Dictates the dimensions of the returned array. Must be nonnegative i.e. either positive or equal to zero. The value (if chosen) for this parameter may be an integer.
This function will now be highlighted in the below example:
```
#Generate 3 pseudo-random distributions of 50 numbers
d = np.random.rand(50)
e = np.random.rand(50)
f = np.random.rand(50)
print(d)
print(e)
print(f)
#Illustrate these 3 distributions on one Plot to compare for similarities
sns.distplot(d, label="D")
sns.distplot(e, label="E")
sns.distplot(f, label="F")
plt.xlabel("Random Numbers")
plt.ylabel("Frequency")
plt.legend(loc="best")
```
As we can see in the above plot, the three calls generated slightly different outputs, as the underlying random state differs each time the function is called.
Within NumPy the seed can be set using the following function:
**```numpy.random.seed(self, seed=None)```**
The following should be noted for the above function:<sup>[76]</sup>
* This is a convenience, legacy function. The best practice is not to reseed a BitGenerator, but rather to create a new one.
* As a result, this function is present in the NumPy 1.19 Manual for legacy reasons only (a sketch of the recommended alternative is shown below).
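In line with that recommendation, a short sketch of the modern approach is to pass the seed directly to ```np.random.default_rng```, which creates a fresh Generator rather than reseeding the global state:
```
#Two independent generators created with the same seed produce identical output
import numpy as np

rng_a = np.random.default_rng(10)
rng_b = np.random.default_rng(10)
print(np.array_equal(rng_a.random(5), rng_b.random(5)))   #True
```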
The benefit of being able to set the seed for each function is that repeatable results can be generated and analysed.
This function will now be highlighted in the below examples:
```
#Set the seed number to 10 for the three functions in the above example and plot the results
np.random.seed(10)
x = np.random.rand(50)
np.random.seed(10)
y = np.random.rand(50)
np.random.seed(10)
z = np.random.rand(50)
print(x)
print(y)
print(z)
sns.distplot(x, label="X")
sns.distplot(y, label="Y")
sns.distplot(z, label="Z")
plt.xlabel("Random Numbers")
plt.ylabel("Frequency")
plt.legend(loc="best")
```
As we can see, by setting the same seed for the three calls we generate identical results.
The ability to generate the same repeatable results is beneficial in situations such as when you are trying to debug a program or evaluate a particular task with the same random variables.<sup>[77]</sup>
## References
----------------------------------------------------
<a name="myfootnote1">1</a>: Stack Overflow - Purpose of “%matplotlib inline”, <https://stackoverflow.com/questions/43027980/purpose-of-matplotlib-inline/43028034>
<a name="myfootnote2">2</a>: The Python Graph Gallery - 104 Seaborn Themes, <https://python-graph-gallery.com/104-seaborn-themes/>
<a name="myfootnote3">3</a>: Seaborn - Choosing color palettes, <https://seaborn.pydata.org/tutorial/color_palettes.html>
<a name="myfootnote4">4</a>: Geeks for Geeks - Random sampling in numpy | random() function, <https://www.geeksforgeeks.org/random-sampling-in-numpy-random-function/>
<a name="myfootnote5">5</a>: NumPy 1.19 Manual - What is NumPy?,<https://numpy.org/doc/stable/user/whatisnumpy.html>
<a name="myfootnote6">6</a>: Bernd Klein - Numpy Tutorial, <https://www.python-course.eu/numpy.php>
<a name="myfootnote7">7</a>: Numpy 1.19 Manual - NumPy: the absolute basics for beginners, <https://numpy.org/doc/stable/user/absolute_beginners.html>
<a name="myfootnote8">8</a>: Geeks for Geeks - Python Numpy, <https://www.geeksforgeeks.org/python-numpy/>
<a name="myfootnote9">9</a>: Vijay Singh Khatri - Understanding NumPy, <https://dzone.com/articles/understanding-numpy#:~:text=NumPy%20is%20a%20powerful%20Python,in%20easily%20performing%20numerical%20computations>
<a name="myfootnote10">10</a>: Dr Mads Haahr - Introduction to Randomness and Random Numbers, <https://www.random.org/randomness/#:~:text=As%20the%20word%20'pseudo'%20suggests,of%20numbers%20that%20appear%20random>
<a name="myfootnote11">11</a>: NumPy 1.19 Manual - Random sampling (numpy.random), <https://numpy.org/doc/stable/reference/random/index.html>
<a name="myfootnote12">12</a>: NumPy 1.19 Manual - Bit Generators, <https://numpy.org/doc/stable/reference/random/bit_generators/index.html>
<a name="myfootnote13">13</a>: NumPy 1.19 Manual - Random Generator, <https://numpy.org/doc/stable/reference/random/generator.html#numpy.random.Generator>
<a name="myfootnote14">14</a>: NumPy 1.19 Manual - Permuted Congruential Generator (64-bit, PCG64), <https://numpy.org/doc/stable/reference/random/bit_generators/pcg64.html#numpy.random.PCG64>
<a name="myfootnote15">15</a>: Adam Hayes - Simple Random Sample, <https://www.investopedia.com/terms/s/simple-random-sample.asp#:~:text=A%20simple%20random%20sample%20is,equal%20probability%20of%20being%20chosen.&text=An%20example%20of%20a%20simple,a%20company%20of%20250%20employees.>
<a name="myfootnote16">16</a>: Moore, David S. and George P. McCabe (2006) - Introduction to the Practice of Statistics, 5th edition, p. 219
<a name="myfootnote17">17</a>: Kahn Acamedy - Sampling methods Review, <https://www.khanacademy.org/math/statistics-probability/designing-studies/sampling-methods-stats/a/sampling-methods-review>
<a name="myfootnote18">18</a>: Wolfram Mathworld - Discrete Uniform Distribution, <https://mathworld.wolfram.com/DiscreteUniformDistribution.html>
<a name="myfootnote19">19</a>: Numpy 1.19 Manual - numpy.random.Generator.integers, <https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.integers.html#numpy.random.Generator.integers>
<a name="myfootnote20">20</a>: Numpy 1.19 Manual - Data type objects (dtype), <https://numpy.org/doc/stable/reference/arrays.dtypes.html>
<a name="myfootnote21">21</a>: Freie Universität Berlin - Statistics and Geospatial Data Analysis: The Continuous Uniform Distribution, <https://www.geo.fu-berlin.de/en/v/soga/Basics-of-statistics/Continous-Random-Variables/Continuous-Uniform-Distribution/index.html>
<a name="myfootnote22">22</a>: UCD Maths Support Centre - Statistics: Uniform Distribution (Continuous), <https://www.ucd.ie/msc/t4media/Uniform%20Distribution.pdf>
<a name="myfootnote23">23</a>: Numpy 1.19 Manual - numpy.random.Generator.random, <https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.random.html#numpy.random.Generator.random>
<a name="myfootnote24">24</a>: Numpy 1.19 Manual - The N-dimensional array (ndarray), <https://numpy.org/doc/stable/reference/arrays.ndarray.html#:~:text=An%20ndarray%20is%20a%20(usually,the%20sizes%20of%20each%20dimension.>
<a name="myfootnote25">25</a>: Joshua Ebner - How To Use Numpy Random Choice, <https://www.sharpsightlabs.com/blog/numpy-random-choice/>
<a name="myfootnote26">26</a>: Numpy 1.19 Manual - numpy.random.Generator.choice, <https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.choice.html#numpy.random.Generator.choice>
<a name="myfootnote27">27</a>: Kite - Bytes, <https://www.kite.com/python/docs/numpy.random.RandomState.bytes>
<a name="myfootnote28">28</a>: Numpy 1.19 Manual - numpy.random.Generator.bytes, <https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.bytes.html#numpy.random.Generator.bytes>
<a name="myfootnote29">29</a>: w3shools.com - Python Escape Characters, <https://www.w3schools.com/python/gloss_python_escape_characters.asp>
<a name="myfootnote30">30</a>: Corporate Finance Institute - Permutation, <https://corporatefinanceinstitute.com/resources/knowledge/other/permutation/>
<a name="myfootnote31">31</a>: Stat Trek - Combination Definition, <https://stattrek.com/statistics/dictionary.aspx?definition=combination>
<a name="myfootnote32">32</a>: Britannica Dictionary - Permutations and Combinations, <https://www.britannica.com/science/permutation>
<a name="myfootnote33">33</a>: Stat Trek - Permutation Definition, <https://stattrek.com/statistics/dictionary.aspx?definition=permutation>
<a name="myfootnote34">34</a>: Numpy 1.19 Manual - numpy.random.Generator.shuffle, <https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.shuffle.html#numpy.random.Generator.shuffle>
<a name="myfootnote35">35</a>: Numpy 1.19 Manual - numpy.random.Generator.permutation, <https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.permutation.html#numpy.random.Generator.permutation>
<a name="myfootnote36">36</a>: Geeks for Geeks - numpy.random.permutation() in Python, <https://www.geeksforgeeks.org/numpy-random-permutation-in-python/>
<a name="myfootnote37">37</a>: w3shools.com - Normal (Gaussian) Distribution, <https://www.w3schools.com/python/numpy_random_normal.asp>
<a name="myfootnote38">38</a>: Jim Frost - Normal Distribution in Statistics, <https://statisticsbyjim.com/basics/normal-distribution/>
<a name="myfootnote39">39</a>: Saul McLeod - Introduction to the Normal Distribution (Bell Curve), <https://www.simplypsychology.org/normal-distribution.html>
<a name="myfootnote40">40</a>: EW Weisstein - What is a Normal distribution?, <https://www.statisticshowto.com/probability-and-statistics/normal-distributions/>
<a name="myfootnote41">41</a>: Stat Trek - The Normal Distribution, <https://stattrek.com/probability-distributions/normal.aspx>
<a name="myfootnote42">42</a>: Hyper Physics - Gaussian Distribution Function, <http://hyperphysics.phy-astr.gsu.edu/hbase/Math/gaufcn.html>
<a name="myfootnote43">43</a>: Michael Galarnyk - Explaining the 68-95-99.7 rule for a Normal Distribution, <https://towardsdatascience.com/understanding-the-68-95-99-7-rule-for-a-normal-distribution-b7b7cbf760c2>
<a name="myfootnote44">44</a>: Numpy 1.19 Manual - numpy.random.normal, <https://numpy.org/doc/stable/reference/random/generated/numpy.random.normal.html?highlight=numpy.random.normal#numpy.random.normal>
<a name="myfootnote45">45</a>: R Tutorial - Binomial Distribution, <http://www.r-tutor.com/elementary-statistics/probability-distributions/binomial-distribution>
<a name="myfootnote46">46</a>: OnlineMathLearning.com - Binomial Distribution, <https://www.onlinemathlearning.com/binomial-distribution.html>
<a name="myfootnote47">47</a>: EW Weisstein - Binomial Distribution: Formula, What it is and How to use it, <https://www.statisticshowto.com/probability-and-statistics/binomial-theorem/binomial-distribution-formula/#:~:text=Many%20instances%20of%20binomial%20distributions,%2C%20or%20you%20aren't.>
<a name="myfootnote48">48</a>: Numpy 1.19 Manual - numpy.random.binomial, <https://numpy.org/doc/stable/reference/random/generated/numpy.random.binomial.html?highlight=binomial%20distribution>
<a name="myfootnote49">49</a>: Tony Yiu - Fun with the Binomial Distribution, <https://towardsdatascience.com/fun-with-the-binomial-distribution-96a5ecabf65b>
<a name="myfootnote50">50</a>: Stat Trek - The Binomial Distribution, <https://stattrek.com/probability-distributions/binomial.aspx>
<a name="myfootnote51">51</a>: Science Direct - Poisson Distribution, <https://www.sciencedirect.com/topics/mathematics/poisson-distribution>
<a name="myfootnote52">52</a>: Alexander Katz - Poisson Distribution, <https://brilliant.org/wiki/poisson-distribution/>
<a name="myfootnote53">53</a>: Kellogg School of Management - The Poisson and Exponential Distributions, <https://www.kellogg.northwestern.edu/faculty/weber/decs-430/Notes%20on%20the%20Poisson%20and%20exponential%20distributions.pdf>
<a name="myfootnote54">54</a>: Will Kenton - Euler's Constant, <https://www.investopedia.com/terms/e/eulers-constant.asp#:~:text=Euler's%20constant%20is%20a%20mathematical,derivative%20of%20a%20logarithmic%20function.>
<a name="myfootnote55">55</a>: Lumen Learning - Other Random Variables, <https://courses.lumenlearning.com/boundless-statistics/chapter/other-random-variables/>
<a name="myfootnote56">56</a>: Neil J. Salkind - Poisson Distribution, <https://methods.sagepub.com/Reference/encyc-of-research-design/n316.xml>
<a name="myfootnote57">57</a>: Numpy 1.19 Manual - numpy.random.poisson, <https://numpy.org/doc/stable/reference/random/generated/numpy.random.poisson.html>
<a name="myfootnote58">58</a>: Britannica Dictionary - Hypergeometric Distribution, <https://www.britannica.com/topic/hypergeometric-distribution>
<a name="myfootnote59">59</a>: EW Weisstein - Hypergeometric Distribution: Examples and Formula, <https://www.statisticshowto.com/hypergeometric-distribution-examples/>
<a name="myfootnote60">60</a>: Stat Trek - Hypergeometric Distribution, <https://stattrek.com/probability-distributions/hypergeometric.aspx>
<a name="myfootnote61">61</a>: Penn State - Eberly College of Science - 7.4 - Hypergeometric Distribution, <https://online.stat.psu.edu/stat414/lesson/7/7.4>
<a name="myfootnote62">62</a>: Numpy 1.19 Manual - numpy.random.hypergeometric, <https://numpy.org/doc/stable/reference/random/generated/numpy.random.hypergeometric.html>
<a name="myfootnote63">63</a>: Vose Software - Laplace Distribution, <https://www.vosesoftware.com/riskwiki/Laplacedistribution.php>
<a name="myfootnote64">64</a>: Science Direct - Laplace Distribution, <https://www.sciencedirect.com/topics/mathematics/laplace-distribution>
<a name="myfootnote65">65</a>: EW Weisstein - Laplace Distribution / Double Exponential, <https://www.statisticshowto.com/laplace-distribution-double-exponential/>
<a name="myfootnote66">66</a>: Kyle Siegrist - The Standard Laplace Distribution, <https://www.randomservices.org/random/special/Laplace.html>
<a name="myfootnote67">67</a>: Numpy 1.19 Manual - numpy.random.laplace, <https://numpy.org/doc/stable/reference/random/generated/numpy.random.laplace.html>
<a name="myfootnote68">68</a>: Research Gate - Can someone explain what is seed in generating a random number?, <https://www.researchgate.net/post/Can-someone-explain-what-is-seed-in-generating-a-random-number>
<a name="myfootnote69">69</a>: EW Weisstein - Random Seed: Definition, <https://www.statisticshowto.com/random-seed-definition/>
<a name="myfootnote70">70</a>: Stack Overflow - What is a Unix timestamp and why use it?, <https://stackoverflow.com/questions/20822821/what-is-a-unix-timestamp-and-why-use-it>
<a name="myfootnote71">71</a>: Epoch Converter - Epoch & Unix Timestamp Conversion Tools, <https://www.epochconverter.com>
<a name="myfootnote72">72</a>: Jason M. Rubin - Can a computer generate a truly random number?, <https://engineering.mit.edu/engage/ask-an-engineer/can-a-computer-generate-a-truly-random-number/>
<a name="myfootnote73">73</a>: Huzaifa Sidhpurwala - Understanding random number generators, and their limitations, in Linux, <https://www.redhat.com/en/blog/understanding-random-number-generators-and-their-limitations-linux>
<a name="myfootnote74">74</a>: Palash Baranwal - PseudoRandom number generator, <https://medium.com/@palashbaranwal/pseudorandom-number-generator-52b0efc23fb8>
<a name="myfootnote75">75</a>: Numpy 1.19 Manual - numpy.random.RandomState.rand, <https://numpy.org/doc/stable/reference/random/generated/numpy.random.RandomState.rand.html>
<a name="myfootnote76">76</a>: Numpy 1.19 Manual - numpy.random.seed, <https://numpy.org/doc/stable/reference/random/generated/numpy.random.seed.html>
<a name="myfootnote77">77</a>: Stack Overflow - Reasons for using the set.seed function, <https://stackoverflow.com/questions/13605271/reasons-for-using-the-set-seed-function>
## Bibliography
----------------------------------------------------
Within the course of this project the following sources were also used for research purposes:
* Alex Lenail - Understanding and Implementing the Hypergeometric Test in Python, <https://blog.alexlenail.me/understanding-and-implementing-the-hypergeometric-test-in-python-a7db688a7458>
* Better Explained - Easy Permutations and Combinations, <https://betterexplained.com/articles/easy-permutations-and-combinations/>
* Brett Berry - Combinations vs Permutations, <https://medium.com/i-math/combinations-permutations-fa7ac680f0ac#:~:text=The%20difference%20between%20combinations%20and,different%20ordering%20(aka%20permutation)>
* Chris Albon - Generating Random Numbers With NumPy, <https://chrisalbon.com/python/basics/generating_random_numbers_with_numpy/>
* DataCamp - Random Number Generator Using Numpy, <https://www.datacamp.com/community/tutorials/numpy-random>
* David M. Lane - Binomial Distribution, <http://onlinestatbook.com/2/probability/binomial.html>
* Debanjona Bhattacharjya - NumPy.Random.Seed(101) Explained, <https://medium.com/@debanjana.bhattacharyya9818/numpy-random-seed-101-explained-2e96ee3fd90b>
* Engineering and Statistics Handbook - 1.3.6.6.19. Poisson Distribution, <https://www.itl.nist.gov/div898/handbook/eda/section3/eda366j.htm>
* EW Weisstein - Poisson Distribution / Poisson Curve: Simple Definition, <https://www.statisticshowto.com/poisson-distribution/>
* Geeks for Geeks - numpy.random.choice() in Python, <https://www.geeksforgeeks.org/numpy-random-choice-in-python/>
* Geek for Geeks - numpy.random.poisson() in Python, <https://www.geeksforgeeks.org/numpy-random-poisson-in-python/>
* Geeks for Geeks - numpy.random.randn() in Python, <https://www.geeksforgeeks.org/numpy-random-randn-python/>
* Geeks for Geeks - Python – Binomial Distribution, <https://www.geeksforgeeks.org/python-binomial-distribution/>
* Geek for Geeks - Python | Numpy np.hypergeometric() method, <https://www.geeksforgeeks.org/python-numpy-np-hypergeometric-method/>
* Jarkko Toivonen - Data Analysis with Python, <https://saskeli.github.io/data-analysis-with-python-summer-2019/numpy.html>
* Jason Brownlee - How to Generate Random Numbers in Python, <https://machinelearningmastery.com/how-to-generate-random-numbers-in-python/>
* John DeJesus - Hypergeometric Distribution Explained With Python, <https://towardsdatascience.com/hypergeometric-distribution-explained-with-python-2c80bc613bf4>
* Joshua Ebner - Numpy Random Seed Explained, <https://www.sharpsightlabs.com/blog/numpy-random-seed/>
* Jupyter Notebook - Markdown Cells, <https://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Working%20With%20Markdown%20Cells.html>
* Justin Bois - Lesson 23: Random number generation, <http://justinbois.github.io/bootcamp/2020/lessons/l23_random_number_generation.html>
* Lew Yerian - Understanding Permutations and Combinations, <https://www.isixsigma.com/community/blogs/understanding-permutations-and-combinations/#:~:text=Permutations%20are%20for%20lists%20(where,permutation%20is%20an%20ordered%20combination.&text=A%20true%20%E2%80%9Ccombination%E2%80%9D%20lock%20would,a%20true%20%E2%80%9Ccombination%E2%80%9D%20lock.>
* Manish Pathak - Probability Distributions in Python, <https://www.datacamp.com/community/tutorials/probability-distributions-python>
* Maths is Fun - The Binomial Distribution, <https://www.mathsisfun.com/data/binomial-distribution.html>
* Numpy Manual 1.19 - NumPy 1.19.0 Release Note - Changes, <https://numpy.org/doc/stable/release/1.19.0-notes.html#changes>
* Packt - NumPy random numbers, <https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781785285110/2/ch02lvl1sec16/numpy-random-numbers>
* RFunction.com - set.seed, <http://rfunction.com/archives/62>
* Royal Statistical Society - Notebook: The Laplace distribution, <https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1740-9713.2018.01185.x>
* Rylan Fowers - Python Poisson Distribution - Numpy Random Poisson, <https://www.youtube.com/watch?v=dGhDzCJryGA>
* Sachin Date - The Poisson Process: Everything you need to know, <https://towardsdatascience.com/the-poisson-process-everything-you-need-to-know-322aa0ab9e9a>
* Science Direct - Hypergeometric Distribution, <https://www.sciencedirect.com/topics/mathematics/hypergeometric-distribution>
* Stack Exchange - What exactly is a seed in a random number generator?, <https://stats.stackexchange.com/questions/354373/what-exactly-is-a-seed-in-a-random-number-generator>
* Stack Exchange - Where in R code should I use set.seed() function (specifically, before shuffling or after)?, <https://stats.stackexchange.com/questions/215209/where-in-r-code-should-i-use-set-seed-function-specifically-before-shuffling>
* Tutorials Point - Statistics: Laplace Distribution, <https://www.tutorialspoint.com/statistics/laplace_distribution.htm>
* w3shools.com - Binomial Distribution, <https://www.w3schools.com/python/numpy_random_binomial.asp>
* w3shools.com - Random Numbers in NumPy, <https://www.w3schools.com/python/numpy_random.asp>
* w3shools.com - Random Permutations, <https://www.w3schools.com/python/numpy_random_permutation.asp>
* Will Koehrsen - The Poisson Distribution and Poisson Process Explained, <https://towardsdatascience.com/the-poisson-distribution-and-poisson-process-explained-4e2cb17d459>
* Wolfram Mathworld - Binomial Distribution,<https://mathworld.wolfram.com/BinomialDistribution.html>
* Wolfram Mathworld - Hypergeometric Distribution, <https://mathworld.wolfram.com/HypergeometricDistribution.html>
* Wolfram Mathworld - Laplace Distribution, <https://mathworld.wolfram.com/LaplaceDistribution.html>
* Wolfram Mathworld - Normal Distribution, <https://mathworld.wolfram.com/NormalDistribution.html>
* Wolfram Mathworld - Pseudorandom Number, <https://mathworld.wolfram.com/PseudorandomNumber.html>
* Yourbasic.org - What’s a seed in a random number generator?, <https://yourbasic.org/algorithms/random-number-generator-seed/>
# Save & Restore a Model
Save and Restore a model using TensorFlow.
This example is using the MNIST database of handwritten digits
(http://yann.lecun.com/exdb/mnist/).
- Author: Aymeric Damien
- Project: https://github.com/aymericdamien/TensorFlow-Examples/
```
from __future__ import print_function
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import tensorflow as tf
# Parameters
learning_rate = 0.001
batch_size = 100
display_step = 1
model_path = "/tmp/model.ckpt"
# Network Parameters
n_hidden_1 = 256 # 1st layer number of features
n_hidden_2 = 256 # 2nd layer number of features
n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
# Create model
def multilayer_perceptron(x, weights, biases):
# Hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.relu(layer_1)
# Hidden layer with RELU activation
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
layer_2 = tf.nn.relu(layer_2)
# Output layer with linear activation
out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
return out_layer
# Store layers weight & bias
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'b2': tf.Variable(tf.random_normal([n_hidden_2])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
# Construct model
pred = multilayer_perceptron(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Initializing the variables
init = tf.global_variables_initializer()
# 'Saver' op to save and restore all the variables
saver = tf.train.Saver()
# Running first session
print("Starting 1st session...")
with tf.Session() as sess:
# Initialize variables
sess.run(init)
# Training cycle
for epoch in range(3):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
# Loop over all batches
for i in range(total_batch):
batch_x, batch_y = mnist.train.next_batch(batch_size)
# Run optimization op (backprop) and cost op (to get loss value)
_, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
y: batch_y})
# Compute average loss
avg_cost += c / total_batch
# Display logs per epoch step
if epoch % display_step == 0:
print ("Epoch:", '%04d' % (epoch+1), "cost=", \
"{:.9f}".format(avg_cost))
print("First Optimization Finished!")
# Test model
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
# Save model weights to disk
save_path = saver.save(sess, model_path)
print("Model saved in file: %s" % save_path)
# Running a new session
print("Starting 2nd session...")
with tf.Session() as sess:
# Initialize variables
sess.run(init)
# Restore model weights from previously saved model
load_path = saver.restore(sess, model_path)
print("Model restored from file: %s" % save_path)
# Resume training
for epoch in range(7):
avg_cost = 0.
total_batch = int(mnist.train.num_examples / batch_size)
# Loop over all batches
for i in range(total_batch):
batch_x, batch_y = mnist.train.next_batch(batch_size)
# Run optimization op (backprop) and cost op (to get loss value)
_, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
y: batch_y})
# Compute average loss
avg_cost += c / total_batch
# Display logs per epoch step
if epoch % display_step == 0:
print("Epoch:", '%04d' % (epoch + 1), "cost=", \
"{:.9f}".format(avg_cost))
print("Second Optimization Finished!")
# Test model
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print("Accuracy:", accuracy.eval(
{x: mnist.test.images, y: mnist.test.labels}))
# test complete; Gopal
```
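The example above uses the TensorFlow 1.x ```Saver``` API. As a rough, hedged sketch of the same idea in the newer ```tf.keras``` API (not part of the original example; layer sizes are illustrative only), weights can be saved to disk and restored into a freshly built model:
```
import numpy as np
import tensorflow as tf

def build_model():
    # Same layer sizes as the example above, purely for illustration
    return tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(10),
    ])

# Save the current weights to disk
model = build_model()
model.save_weights("/tmp/keras_model.ckpt")

# Restore them into a second, freshly initialised model
restored = build_model()
restored.load_weights("/tmp/keras_model.ckpt")

# The restored weights match the originals
print(all(np.allclose(a, b) for a, b in zip(model.get_weights(), restored.get_weights())))
```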
|
github_jupyter
|
from __future__ import print_function
# Import MINST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import tensorflow as tf
# Parameters
learning_rate = 0.001
batch_size = 100
display_step = 1
model_path = "/tmp/model.ckpt"
# Network Parameters
n_hidden_1 = 256 # 1st layer number of features
n_hidden_2 = 256 # 2nd layer number of features
n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
# Create model
def multilayer_perceptron(x, weights, biases):
# Hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.relu(layer_1)
# Hidden layer with RELU activation
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
layer_2 = tf.nn.relu(layer_2)
# Output layer with linear activation
out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
return out_layer
# Store layers weight & bias
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'b2': tf.Variable(tf.random_normal([n_hidden_2])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
# Construct model
pred = multilayer_perceptron(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Initializing the variables
init = tf.global_variables_initializer()
# 'Saver' op to save and restore all the variables
saver = tf.train.Saver()
# Running first session
print("Starting 1st session...")
with tf.Session() as sess:
# Initialize variables
sess.run(init)
# Training cycle
for epoch in range(3):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
# Loop over all batches
for i in range(total_batch):
batch_x, batch_y = mnist.train.next_batch(batch_size)
# Run optimization op (backprop) and cost op (to get loss value)
_, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
y: batch_y})
# Compute average loss
avg_cost += c / total_batch
# Display logs per epoch step
if epoch % display_step == 0:
print ("Epoch:", '%04d' % (epoch+1), "cost=", \
"{:.9f}".format(avg_cost))
print("First Optimization Finished!")
# Test model
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
# Save model weights to disk
save_path = saver.save(sess, model_path)
print("Model saved in file: %s" % save_path)
# Running a new session
print("Starting 2nd session...")
with tf.Session() as sess:
# Initialize variables
sess.run(init)
# Restore model weights from previously saved model
load_path = saver.restore(sess, model_path)
print("Model restored from file: %s" % save_path)
# Resume training
for epoch in range(7):
avg_cost = 0.
total_batch = int(mnist.train.num_examples / batch_size)
# Loop over all batches
for i in range(total_batch):
batch_x, batch_y = mnist.train.next_batch(batch_size)
# Run optimization op (backprop) and cost op (to get loss value)
_, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
y: batch_y})
# Compute average loss
avg_cost += c / total_batch
# Display logs per epoch step
if epoch % display_step == 0:
print("Epoch:", '%04d' % (epoch + 1), "cost=", \
"{:.9f}".format(avg_cost))
print("Second Optimization Finished!")
# Test model
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print("Accuracy:", accuracy.eval(
{x: mnist.test.images, y: mnist.test.labels}))
test complete; Gopal
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
import sys
SOURCE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__name__)))
sys.path.insert(0, SOURCE_DIR)
# !pip3 install pysptk
import malaya_speech
from pysptk import sptk
import numpy as np
import tensorflow as tf
# tf.compat.v1.enable_eager_execution()
vggvox_v2 = malaya_speech.gender.deep_model(model = 'vggvox-v2')
speaker_model = malaya_speech.speaker_vector.deep_model('vggvox-v2')
freqs = {'female': [100, 600], 'male': [50, 250]}
from scipy.signal import get_window
from scipy import signal
import soundfile as sf
sr = 22050
def butter_highpass(cutoff, fs, order=5):
nyq = 0.5 * fs
normal_cutoff = cutoff / nyq
b, a = signal.butter(order, normal_cutoff, btype='high', analog=False)
return b, a
b, a = butter_highpass(30, sr, order=5)
def speaker_normalization(f0, index_nonzero, mean_f0, std_f0):
f0 = f0.astype(float).copy()
f0[index_nonzero] = (f0[index_nonzero] - mean_f0) / std_f0
f0[index_nonzero] = np.clip(f0[index_nonzero], -3, 4)
return f0
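# preprocess_wav: pad by one sample when the length is a multiple of 256, apply the
# 30 Hz high-pass filter defined above, and add a tiny amount of noise (dither).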
def preprocess_wav(x):
if x.shape[0] % 256 == 0:
x = np.concatenate((x, np.array([1e-06])), axis=0)
y = signal.filtfilt(b, a, x)
wav = y * 0.96 + (np.random.uniform(size = y.shape[0]) - 0.5)*1e-06
return wav
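# get_f0: extract F0 with RAPT (pysptk), then normalise the voiced frames to zero mean /
# unit variance using speaker_normalization above.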
def get_f0(wav, lo, hi):
f0_rapt = sptk.rapt(wav.astype(np.float32)*32768, sr, 256, min=lo, max=hi, otype=2)
index_nonzero = (f0_rapt != -1e10)
mean_f0, std_f0 = np.mean(f0_rapt[index_nonzero]), np.std(f0_rapt[index_nonzero])
return speaker_normalization(f0_rapt, index_nonzero, mean_f0, std_f0)
def get_speech(f):
x, fs = sf.read(f)
wav = preprocess_wav(x)
lo, hi = freqs.get(vggvox_v2(x), [50, 250])
print(lo, hi)
f0 = np.expand_dims(get_f0(wav, lo, hi), -1)
mel = malaya_speech.featurization.universal_mel(wav)
v = speaker_model([wav])[0]
v = v / v.max()
return wav, mel[:24 * 8], f0[:24 * 8], v
wav, mel, f0, v = get_speech('../speech/example-speaker/female.wav')
wav_1, mel_1, f0_1, v_1 = get_speech('../speech/example-speaker/khalil-nooh.wav')
mels, mel_lens = malaya_speech.padding.sequence_nd([mel, mel_1], dim = 0, return_len = True)
mels.shape, mel_lens
f0s, f0_lens = malaya_speech.padding.sequence_nd([f0, f0_1], dim = 0, return_len = True)
f0s.shape, f0_lens
vs = malaya_speech.padding.sequence_nd([v, v_1], dim = 0)
vs.shape
X = tf.placeholder(tf.float32, [None, None, 80])
X_f0 = tf.placeholder(tf.float32, [None, None, 1])
len_X = tf.placeholder(tf.int32, [None])
V = tf.placeholder(tf.float32, [None, 512])
# X = tf.convert_to_tensor(mels.astype(np.float32))
# X_f0 = tf.convert_to_tensor(f0s.astype(np.float32))
# len_X = tf.convert_to_tensor(mel_lens)
# V = tf.convert_to_tensor(vs.astype(np.float32))
from malaya_speech.train.model import speechsplit
hparams = speechsplit.hparams
interplnr = speechsplit.InterpLnr(hparams)
model = speechsplit.Model(hparams)
bottleneck_speaker = tf.keras.layers.Dense(hparams.dim_spk_emb)
speaker_dim = bottleneck_speaker(V)
x_f0_intrp = interplnr(tf.concat([X, X_f0], axis = -1), len_X)
x_f0_intrp.shape
f0_org_intrp = speechsplit.quantize_f0_tf(x_f0_intrp[:,:,-1])
x_f0_intrp_org = tf.concat((x_f0_intrp[:,:,:-1], f0_org_intrp), axis=-1)
x_f0_intrp_org, X, speaker_dim
codes_x, codes_f0, codes_2, encoder_outputs, mel_outputs = model(x_f0_intrp_org, X, speaker_dim)
codes_x.shape, codes_f0.shape, codes_2.shape, encoder_outputs.shape, mel_outputs.shape
sess = tf.Session()
sess.run(tf.global_variables_initializer())
o = sess.run([codes_x, codes_f0, codes_2, encoder_outputs, mel_outputs], feed_dict = {
X: mels, X_f0: f0s, len_X: mel_lens, V: vs
})
o
tf.trainable_variables()
saver = tf.train.Saver()
saver.save(sess, 'test/model.ckpt')
!ls -lh test
!rm -rf test
```
## Get sampling plan
To make sure rarer elements like Ru are sampled often enough, we will keep all structures containing them and only take a random subset of the structures with the more common elements.
```
import pickle
import pandas as pd
from pymatgen import Element
Element('Cu').is_metal
with open('../oxidation_state_book/data/chemical_formulas_fixed.pkl', 'rb') as fh:
chemical_formulas = pickle.load(fh)
len(chemical_formulas)
with open('../oxidation_state_book/data/name_list.pkl', 'rb') as fh:
names = pickle.load(fh)
len(names)
chemical_formulas_list_for_df = []
for k,v in chemical_formulas.items():
try:
if k in names:
res_dict = {}
res_dict['name'] = k
for element in v.keys():
if Element(element).is_metal:
res_dict['metal'] = element
chemical_formulas_list_for_df.append(res_dict)
except Exception:
pass
df = pd.DataFrame(chemical_formulas_list_for_df)
df.head()
counts = df['metal'].value_counts()
counts
ordered_metals = list(counts.keys())
fil = list(counts.values > 5000)
frequent_metals = [m for i, m in enumerate(ordered_metals) if fil[i]]
import numpy as np
sampled_names = []
for metal in ordered_metals:
metal_mofs = df[df['metal']==metal]['name']
print('metal {}, len {}'.format(metal, len(metal_mofs)))
if metal in frequent_metals:
print("metal is frequent")
names = np.random.choice(metal_mofs, 5000)
else:
names = metal_mofs
sampled_names.extend(names)
len(sampled_names)
np.random.shuffle(sampled_names)
with open('names_to_sample.pkl', 'wb') as fh:
pickle.dump(sampled_names, fh)
```
## Look into CoRE-MOF v2
```
import pandas as pd
df_core_mof = pd.read_csv('/Users/kevinmaikjablonka/Downloads/2019-07-01-ASR-public_12020.csv')
df_core_mof.head()
df_core_mof.columns
no_overlap_with_core = df_core_mof['CSD_overlap_inCoRE'].values == 'N'
no_overlap_with_csd = df_core_mof['CSD_overlap_inCCDC'].values == 'N'
has_doi = [isinstance(d, str) for d in df_core_mof['DOI_public'].values]
df_core_mof[no_overlap_with_core * has_doi * no_overlap_with_csd]
```
## Now look at the Curated COF
```
df_cof_frameworks = pd.read_csv('../cof_frameworks.csv')
df_cof_papers = pd.read_csv('../cof_papers.csv')
df_cof_papers['paper_id_stripped'] = [d.strip('p') for d in df_cof_papers['CURATED-COFs paper ID'].values]
df_cof_frameworks['stripped_cof_id'] = [d[0:4] for d in df_cof_frameworks['CURATED-COFs ID'].values]
df_cof_frameworks = df_cof_frameworks.merge(df_cof_papers, left_on='stripped_cof_id', right_on='paper_id_stripped')
df_cof_frameworks.head()
from pymatgen import Element
def has_metal(element_string):
return any([Element(e.strip()).is_metal for e in element_string.split(',')])
has_metal_columns = [has_metal(e) for e in df_cof_frameworks['Elements']]
df_cof_frameworks['has_metal'] = has_metal_columns
df_cof_frameworks[df_cof_frameworks['has_metal'] == True]
from glob import glob
from pathlib import Path
extracted_cofs = [Path(p).stem for p in glob('../test_structures/cofs/*.cif')]
df_cof_frameworks[df_cof_frameworks["CURATED-COFs ID"].isin(extracted_cofs)]
cof_assignments = {
'11010N2': {'Ni': 2},
'12061N2' : {'Cu': 2},
'12062N2': {'Co': 2},
'13110N2': {'Cu': 2},
"15180N2": {'Cu': 2},
"15181N2":{'Cu' :2},
'15182N2' : {'Cu': 2},
"18080N2": {'Co': 2},
"18081N2" : {'Co': 2},
"18082N2" : {'Co' :2},
"18083N2" : {'Co': 2},
"19040N2" : {'V': 4},
"19041N2" : {'V': 4},
"19270N2": {'Zn': 2},
"19271N2": {'Zn': 2}
}
```
## Get relevant structures from Kulik as cif
```
df_kulik = pd.read_csv('/Users/kevinmaikjablonka/Downloads/Data-2/dft-results/CSD-results.csv')
kulik_csd_all = glob('/Users/kevinmaikjablonka/Downloads/Data-2/geometries/CSD/*.xyz')
from ase.io import read, write
import os
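# xyz_to_cif: read an xyz geometry, mark it periodic, pad the cell with 5 Å of vacuum and write a cif.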
def xyz_to_cif(infile, outdir):
atoms = read(infile)
atoms.pbc = True
atoms.center(vacuum=5)
stem = Path(infile).stem
write(os.path.join(outdir, stem + '.cif'), atoms)
xyz_to_cif(kulik_csd_all[0], '/Users/kevinmaikjablonka/Downloads/kulik_csd_cif/')
kuliks = kulik_csd_all
for kulik in kuliks:
xyz_to_cif(kulik, '/Users/kevinmaikjablonka/Downloads/kulik_csd_cif/')
```
<a href="https://colab.research.google.com/github/edgallojr/data-analytics-python-pandas/blob/main/data_analytics_python_panda.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# **Working with CSV files**
```
from google.colab import drive
drive.mount('/content/drive')
#importando a biblioteca pandas
import pandas as pd
df = pd.read_csv("/content/drive/MyDrive/datasets/Gapminder.csv",error_bad_lines=False, sep=";")
#Visualizando as 5 primeiras linhas
df.head()
df = df.rename(columns={"country":"Pais", "continent": "continente", "year":"Ano", "lifeExp":"Expectativa de vida", "pop":"Pop Total", "gdpPercap": "PIB"})
df.head(10)
#Total de linhas e colunas
df.shape
df.columns
df.dtypes
df.tail(15)
df.describe()
df["continente"].unique()
Oceania = df.loc[df["continente"] == "Oceania"]
Oceania.head()
Oceania["continente"].unique()
df.groupby("continente")["Pais"].nunique()
df.groupby("Ano")["Expectativa de vida"].mean()
df["PIB"].mean()
df["PIB"].sum()
```
#**Working with Excel spreadsheets**
```
#Importando a biblioteca
import pandas as pd
#Leitura dos arquivos
df1 = pd.read_excel("/content/drive/MyDrive/datasets/Aracaju.xlsx")
df2 = pd.read_excel("/content/drive/MyDrive/datasets/Fortaleza.xlsx")
df3 = pd.read_excel("/content/drive/MyDrive/datasets/Natal.xlsx")
df4 = pd.read_excel("/content/drive/MyDrive/datasets/Recife.xlsx")
df5 = pd.read_excel("/content/drive/MyDrive/datasets/Salvador.xlsx")
df5.head()
#juntando todos os arquivos
df = pd.concat([df1,df2,df3,df4,df5])
#Exibindo as 5 primeiras linhas
df.head()
#Exibindo as 5 últimas linhas
df.tail()
df.sample(5)
#Verificando o tipo de dado de cada coluna
df.dtypes
#Alterando o tipo de dado da coluna LojaID
df["LojaID"] = df["LojaID"].astype("object")
df.dtypes
df.head()
```
**Handling missing values**
```
#Consultando linhas com valores faltantes
df.isnull().sum()
#Substituindo os valores nulos pela média
df["Vendas"].fillna(df["Vendas"].mean(), inplace=True)
df["Vendas"].mean()
df.isnull().sum()
df.sample(15)
#Substituindo os valores nulos por zero
df["Vendas"].fillna(0, inplace=True)
#Apagando as linhas com valores nulos
df.dropna(inplace=True)
#Apagando as linhas com valores nulos com base apenas em 1 coluna
df.dropna(subset=["Vendas"], inplace=True)
#Removendo linhas que estejam com valores faltantes em todas as colunas
df.dropna(how="all", inplace=True)
```
**Creating new columns**
```
#Criando a coluna de receita
df["Receita"] = df["Vendas"].mul(df["Qtde"])
df.head()
df["Receita/Vendas"] = df["Receita"] / df["Vendas"]
df.head()
#Retornando a maior receita
df["Receita"].max()
#Retornando a menor receita
df["Receita"].min()
#nlargest
df.nlargest(3, "Receita")
#nsmallest
df.nsmallest(3, "Receita")
#Agrupamento por cidade
df.groupby("Cidade")["Receita"].sum()
#Ordenando o conjunto de dados
df.sort_values("Receita", ascending=False).head(10)
```
#**Working with dates**
```
#Transformando a coluna de data em tipo inteiro
df["Data"] = df["Data"].astype("int64")
#Verificando o tipo de dado de cada coluna
df.dtypes
#Transformando coluna de data em data
df["Data"] = pd.to_datetime(df["Data"])
df.dtypes
#Agrupamento por ano
df.groupby(df["Data"].dt.year)["Receita"].sum()
#Criando uma nova coluna com o ano
df["Ano_Venda"] = df["Data"].dt.year
df.sample(5)
#Extraindo o mês e o dia
df["mes_venda"], df["dia_venda"] = (df["Data"].dt.month, df["Data"].dt.day)
df.sample(5)
#Retornando a data mais antiga
df["Data"].min()
#Calculando a diferença de dias
df["diferenca_dias"] = df["Data"] - df["Data"].min()
df.sample(5)
#Criando a coluna de trimestre
df["trimestre_venda"] = df["Data"].dt.quarter
df.sample(5)
#Filtrando as vendas de 2019 do mês de março
vendas_marco_19 = df.loc[(df["Data"].dt.year == 2019) & (df["Data"].dt.month == 3)]
vendas_marco_19.sample(20)
```
#**Data visualization**
```
df["LojaID"].value_counts(ascending=False)
#Gráfico de barras
df["LojaID"].value_counts(ascending=False).plot.bar()
#Gráfico de barras horizontais
df["LojaID"].value_counts().plot.barh()
#Gráfico de barras horizontais
df["LojaID"].value_counts(ascending=True).plot.barh();
#Gráfico de Pizza
df.groupby(df["Data"].dt.year)["Receita"].sum().plot.pie()
#Total vendas por cidade
df["Cidade"].value_counts()
#Adicionando um título e alterando o nome dos eixos
import matplotlib.pyplot as plt
df["Cidade"].value_counts().plot.bar(title="Total vendas por Cidade")
plt.xlabel("Cidade")
plt.ylabel("Total Vendas");
#Alterando a cor
df["Cidade"].value_counts().plot.bar(title="Total vendas por Cidade", color="red")
plt.xlabel("Cidade")
plt.ylabel("Total Vendas");
#Alterando o estilo
plt.style.use("ggplot")
df.groupby(df["mes_venda"])["Qtde"].sum().plot(title = "Total Produtos vendidos x mês")
plt.xlabel("Mês")
plt.ylabel("Total Produtos Vendidos")
plt.legend();
df.groupby(df["mes_venda"])["Qtde"].sum()
#Selecionando apenas as vendas de 2019
df_2019 = df[df["Ano_Venda"] == 2019]
df_2019.groupby(df_2019["mes_venda"])["Qtde"].sum()
#Total produtos vendidos por mês
df_2019.groupby(df_2019["mes_venda"])["Qtde"].sum().plot(marker = "o")
plt.xlabel("Mês")
plt.ylabel("Total Produtos Vendidos")
plt.legend();
#Histograma
plt.hist(df["Qtde"], color="orangered");
plt.scatter(x=df_2019["dia_venda"], y = df_2019["Receita"]);
#Salvando em png
df_2019.groupby(df_2019["mes_venda"])["Qtde"].sum().plot(marker = "v")
plt.title("Quantidade de produtos vendidos x mês")
plt.xlabel("Mês")
plt.ylabel("Total Produtos Vendidos");
plt.legend()
plt.savefig("grafico QTDE x MES.png")
```
#**Exploratory Analysis - Case Study**
```
#Importando as bibliotecas
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use("seaborn")
#Criando nosso DataFrame
df = pd.read_excel("/content/drive/MyDrive/datasets/AdventureWorks.xlsx")
#Visualizando as 5 primeiras linhas
df.head()
#Quantidade de linhas e colunas
df.shape
#Verificando os tipos de dados
df.dtypes
#Qual a Receita total?
df["Valor Venda"].sum()
#Qual o custo Total?
df["custo"] = df["Custo Unitário"].mul(df["Quantidade"]) #Criando a coluna de custo
df.head(1)
#Qual o custo Total?
round(df["custo"].sum(), 2)
#Agora que temos a receita e custo e o total, podemos achar o Lucro total
#Vamos criar uma coluna de Lucro que será Receita - Custo
df["lucro"] = df["Valor Venda"] - df["custo"]
df.head(1)
#Total Lucro
round(df["lucro"].sum(),2)
#Criando uma coluna com total de dias para enviar o produto
df["Tempo_envio"] = df["Data Envio"] - df["Data Venda"]
df.head(1)
```
**Now we want to know the average shipping time for each brand (Marca), and for that we need to convert the Tempo_envio column to a numeric type**
```
#Extraindo apenas os dias
df["Tempo_envio"] = (df["Data Envio"] - df["Data Venda"]).dt.days
df.head(1)
#Verificando o tipo da coluna Tempo_envio
df["Tempo_envio"].dtype
#Média do tempo de envio por Marca
df.groupby("Marca")["Tempo_envio"].mean()
```
**Missing Values**
```
#Verificando se temos dados faltantes
df.isnull().sum()
```
**And what if we want to know the profit (lucro) per Year and per Brand (Marca)?**
```
#Vamos Agrupar por ano e marca
df.groupby([df["Data Venda"].dt.year, "Marca"])["lucro"].sum()
pd.options.display.float_format = '{:20,.2f}'.format
#Resetando o index
lucro_ano = df.groupby([df["Data Venda"].dt.year, "Marca"])["lucro"].sum().reset_index()
lucro_ano
#Qual o total de produtos vendidos?
df.groupby("Produto")["Quantidade"].sum().sort_values(ascending=False)
#Gráfico Total de produtos vendidos
df.groupby("Produto")["Quantidade"].sum().sort_values(ascending=True).plot.barh(title="Total Produtos Vendidos")
plt.xlabel("Total")
plt.ylabel("Produto");
df.groupby(df["Data Venda"].dt.year)["lucro"].sum().plot.bar(title="Lucro x Ano")
plt.xlabel("Ano")
plt.ylabel("Receita");
df.groupby(df["Data Venda"].dt.year)["lucro"].sum()
#Selecionando apenas as vendas de 2009
df_2009 = df[df["Data Venda"].dt.year == 2009]
df_2009.head()
df_2009.groupby(df_2009["Data Venda"].dt.month)["lucro"].sum().plot(title="Lucro x Mês")
plt.xlabel("Mês")
plt.ylabel("Lucro");
df_2009.groupby("Marca")["lucro"].sum().plot.bar(title="Lucro x Marca")
plt.xlabel("Marca")
plt.ylabel("Lucro")
plt.xticks(rotation='horizontal');
df_2009.groupby("Classe")["lucro"].sum().plot.bar(title="Lucro x Classe")
plt.xlabel("Classe")
plt.ylabel("Lucro")
plt.xticks(rotation='horizontal');
df["Tempo_envio"].describe()
#Gráfico de Boxplot
plt.boxplot(df["Tempo_envio"]);
#Histograma
plt.hist(df["Tempo_envio"]);
#Tempo mínimo de envio
df["Tempo_envio"].min()
#Tempo máximo de envio
df['Tempo_envio'].max()
#Identificando o Outlier
df[df["Tempo_envio"] == 20]
df.to_csv("df_vendas_novo.csv", index=False)
```
# Module 1 - Setup & Hello World
## What is Python?
- It is an interpreted, general-purpose, object-oriented programming language.
- Created by _Guido van Rossum_.
- The name _Python_ comes from a television show called _Monty Python's Flying Circus_, not from the python snake.
- One of the perks of creating your own programming language: you can name it whatever you want.
### What is an interpreted language?
> It is a language that is analyzed and executed by an interpreter.
#### Difference between compilers and interpreters
Compilers translate the source code of a program into the machine code of the system, while interpreters typically perform the translation instruction by instruction.
#### E.g. The president's speech
Imagine we have a presidential speech that will be broadcast internationally. The foreign-affairs committee of the United Mexican States is considering two options:
1. Produce and send translated copies to the different countries.
2. Hire people to translate while the speech is being given.
The first option has the advantage of translating the speech calmly into the different languages so that the message is clear, as well as catching errors before the speech is delivered. The second option is the most common in this kind of setting, but it has the drawback of not being able to deal with untranslatable phrases, such as ***me canso ganso***, which can cause the invited countries to misinterpret the speech.
### Python 3
- Python 3.0 was released in 2008.
- It is no longer backward compatible with earlier versions.
- However, many of its important features have been backported to be compatible with version 2.7.
<img src="imgs/unit-1/python-2-vs-3-2018.png" alt="Differences between Python2 and Python3" width="500px" />
## Installation
### MacOS
1. Go to https://www.python.org/downloads/release/python-373/
2. In the files section, download the installer for macOS ***macOS 64-bit/32-bit installer***.
3. Consider installing a text editor; we suggest:
- [Atom](https://atom.io/)
- [Visual Studio Code](https://code.visualstudio.com/)
4. Find and open the terminal.
5. Type `python3.7` and press `Enter`.
6. Type `quit()` and press `Enter` to exit the interpreter.
### Windows
1. Go to https://www.python.org/downloads/release/python-373/
2. In the files section, download the installer for Windows.
- Make sure to add Python 3.7 to the path.
3. Consider installing a text editor; we suggest:
- [Atom](https://atom.io/)
- [Visual Studio Code](https://code.visualstudio.com/)
4. Find and run `PowerShell`.
5. Type `python` and press `Enter`.
6. Type `quit()` and press `Enter` to exit the interpreter.
### Linux
1. Go to https://www.python.org/downloads/release/python-373/
2. In the files section, download the installer for Linux.
3. Consider installing a text editor; we suggest:
- [Atom](https://atom.io/)
- [Visual Studio Code](https://code.visualstudio.com/)
4. Find and open the terminal.
5. Type `python3.7` and press `Enter`.
6. Type `quit()` and press `Enter` to exit the interpreter.
## Adding Packages
Thanks to its great popularity, _Python_ has built a large community of contributors who develop software components and offer them to the rest of the community freely and openly.
This allows Python users to share and collaborate efficiently.
### pip
It is the program that lets us install packages. It is included by default starting with _Python 3.4_.
The tool is designed to be used from the command line.
https://pypi.org/ is a quick and easy way to search for and install packages that are available through _pip_.
The following command installs the latest version of a package and its dependencies from the _Python_ Package Index:
`python -m pip install PackageName`
On Linux and MacOS, you need to specify the _Python_ version you are working with:
`python3.7 -m pip install PackageName`
Additionally, we can specify some criteria when installing the module:
`python -m pip install PackageName==1.0.4 # A specific version`
`python -m pip install "PackageName>=1.0.4" # A minimum version`
#### Upgrading a package
Sometimes the package we want to add is already installed, so it only needs to be upgraded:
`python -m pip install --upgrade PackageName`
#### Installing pip
If _pip_ is not installed, you can add it as follows:
`python -m ensurepip --default-pip`
### Local files
Sometimes we will want to work with packages that are not available through _pip_; to add them, simply place their `.py` file inside the project folder.
### Adding the modules needed for this workshop
`python -m pip install --user numpy scipy matplotlib ipython jupyter pandas sympy nose`
### In the code
```python
import paquete # Import the package into our code
import paquete2 as p2 # Import the package into our code, bound to an alias
from paquetote import paquete as alias # Import a specific package from another package into our code, bound to an alias
paquete.foo()
p2.foo()
alias.foo()
```
## Running a script
Let's start with some code. The following script displays the ever-popular "Hello World!" message on screen.
```
print("Hello World!")
```
To run the script, we need to:
1. Create a new file in our text editor.
2. Save it as `Hello.py`.
3. Open the terminal in the folder where the file is located.
4. Run the following command:
- Windows: `python Hello.py`.
- Linux and MacOS: `python3.7 Hello.py`.
Sometimes we will want to program without creating and saving a file; for that we can use the _Python_ interpreter.
1. Open the terminal.
2. Run the command:
- Windows: `python`.
- Linux and MacOS: `python3.7`.
3. You will get a window like the following:
<img src="imgs/terminal-idle.png" alt="Interpreter" width=500px>
4. Now, type the instruction:
```python
print("Hello World!")
```
__Congratulations, you have now programmed in _Python_.__
## IDE
An Integrated Development Environment is an application that provides comprehensive services to make software development easier for the programmer.
For _Python_ there are several environments we can work with; here we mention three:
### IDLE
It is an integrated development environment for _Python_ that has been bundled with the default implementation of the language since _1.5.2b1_.
<img src="imgs/python-idle.png" alt="IDLE screenshot" width="500px">
### Anaconda
It is an open-source distribution of the Python and R languages, used in data science and machine learning.
This distribution includes a set of tools, such as the _Spyder IDE_. In addition, we can create a _Jupyter Notebook_ to run code written in _Python_.
<img src="imgs/anaconda-ide.png" alt="Anaconda screenshot" width="500px">
### Pycharm
PyCharm is an integrated development environment specifically for the Python language, developed by the company _JetBrains_.
> With your university e-mail you can get free access to the "pro" version of the IDE.
<img src="imgs/pycharm-ide.jpg" alt="Pycharm screenshot" width="500px">
## Tips
### Comments
Comments are extremely important in code. They are generally used to document what your program does, or even to disable sections of the program.
> Comments are ignored by the interpreter.
#### Single-line comments
We use `#` to comment on a single line of code.
```
# Prints Hello World on screen
print("Hello World!")
# Shouts Hello World on screen
print("HELLO WORLD!")
```
#### Multi-line comments
To write a multi-line comment, or a comment block, we need to enclose whatever we want to comment out with `'''`.
```
student_name = "Oscar"
major = "Computer Science"
'''
is_alive = False
print(student_name, major, is_alive)
'''
print(student_name, major)
```
### Variable names
Programmers are free to name their variables however they like, as long as the purpose of the variable is clear.
> A variable's name costs nothing.
There are several styles for naming variables; the one we will use in this workshop is _underscores_ (snake_case).
```
mensaje_para_el_mundo = "Hello World!"
print(mensaje_para_el_mundo)
```
## More printing
```
primera_parte = "Es la primer parte de un mensaje..."
segunda_parte = "Para mi mejor amigo."
print(primera_parte + segunda_parte)
primera_parte = "Es la primer parte de un mensaje..."
segunda_parte = "Para mi mejor amigo."
print(primera_parte, segunda_parte)
mensaje = "Taller Python"
print(mensaje * 5)
formatter = "{} {} {} {}"
print(formatter.format(1, 2, 3, 4))
print(formatter.format("Juan", "Pedro", "Mario", "Marco"))
print(formatter.format(formatter, formatter, formatter, formatter))
```
# 9. Data Analytics
## Question
What is Data Analytics? What tools in Python can be used for Data Analytics?
## What's Data Analytics

- Descriptive Analytics, which use data aggregation and data mining to provide insight into the past and answer: “What has happened?”
- Predictive Analytics, which use statistical models and forecasting techniques to understand the future and answer: “What could happen?”
- Prescriptive Analytics, which use optimization and simulation algorithms to advise on possible outcomes and answer: “What should we do?
## What are the tools in Python for Data Science?
- Pandas for Data Analysis
- Numpy for Data Processing
- Matplotlib for Data Visualization
## Pandas
Pandas is used for data manipulation, analysis and cleaning. You can do things like:
- Calculate statistics and answer questions about the data, like
- What's the average, median, max, or min of each column?
- Does column A correlate with column B?
- What does the distribution of data in column C look like?
- Clean the data by doing things like removing missing values and filtering rows or columns by some criteria
- Visualize the data with help from Matplotlib. Plot bars, lines, histograms, bubbles, and more.
- Store the cleaned, transformed data back into a CSV, other file or database
Pandas has:
- **Series** (1d homogeneous array)
- **DataFrame** (2d labeled heterogeneous array)
- **Panel** (general 3d array)

```
# Example of Data Analysis
import pandas as pd
#Read csv file
df = pd.read_csv("http://rcs.bu.edu/examples/python/data_analysis/Salaries.csv")
#Display a few first records
df.head(5)
```

```
# Output basic statistics for the numeric columns
df.describe()
# Display a histogram of the salary column (pandas uses matplotlib under the hood)
df.hist(column='salary', bins=50)
```
## Numpy
NumPy, which stands for Numerical Python, is a library consisting of multidimensional array objects and a collection of routines for processing those arrays. NumPy can perform the following operations:
- Mathematical and logical operations on arrays.
- Fourier transforms and routines for shape manipulation.
- Operations related to linear algebra. NumPy has in-built functions for linear algebra and random number generation.

```
# Example of generating arrays
import numpy as np
### Create your array by directly loading in the data. You can use a list or a tuple.
### If you want to be super thorough, specify the array type
np_array = np.array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]])
np_array = np.array([(1.5,2,3), (4,5,6)], dtype=float)
### Sometimes, the contents of the initial array may not be known, but we would like
### to initialise one anyways to use it later. We have a number of functions at out
### disposal here.
# Creates a 3x4 array of 0's
np.zeros((3,4))
# Creates a 2x3x4 array of int 1's
np.ones((2,3,4), dtype=np.int16)
# Creates an empty 2x3 array
np.empty((2,3))
### You can also create arrays with certain patterns like so
# Creating a 1D array of numbers from 10 to 30 in increments of 5
np.arange( 10, 30, 5 )
# Creating a 1D array of numbers from 0 to 2 in increments of 0.3
np.arange( 0, 2, 0.3 )
# Creating a 1D array of 9 numbers equally spaced from 0 to 2
np.linspace( 0, 2, 9 )
```
## Matplotlib
Matplotlib is one of the most popular Python packages used for data visualization.

Some types Of Plots:
– Bar Graph
– Histogram
– Scatter Plot
– Area Plot
– Pie Chart

```
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
mpl.style.use('seaborn')
X = np.arange(50)
Y = np.random.random((50))
plt.figure(figsize=(9.6, 7.2))
plt.subplot(2, 2, 1)
plt.bar(X, Y)
plt.subplot(2, 2, 2)
plt.hist(Y, 10)
plt.subplot(2, 2, 3)
plt.scatter(X, Y)
# plt.subplot(2, 2, 3)
# plt.stackplot(Y)
plt.subplot(2, 2, 4)
plt.pie(Y)
plt.show()
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
mpl.style.use('seaborn')
X = np.arange(50)
Y1 = np.random.random((50))
Y2 = np.random.random((50))
Y3 = np.random.random((50))
plt.figure(figsize=(9.6, 7.2))
plt.stackplot(X, Y1, Y2, Y3)
plt.show()
```
# Q1. What are the benefits of the built-in array package, if any?
Ans : Arrays represent multiple data items of the same type using a single name. In arrays, the elements can be accessed randomly by using the index number. Arrays allocate memory in contiguous memory locations for all their elements, so there is no chance of extra memory being allocated, which avoids memory overflow or shortage of memory.
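For illustration (an added sketch, not part of the original answer), a minimal use of the built-in `array` package:
```
from array import array

# 'i' = signed int; every element must share this one type
arr = array('i', [1, 2, 3, 4])
arr.append(5)
print(arr[0], arr[-1], len(arr))   # index access works like a list
```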
# Q2. What are some of the array package's limitations?
Ans : The number of elements to be stored in an array should be known in advance. An array is a static structure (which means the array is of fixed size). Once declared the size of the array cannot be modified. The memory which is allocated to it cannot be increased or decreased. Insertion and deletion are quite difficult in an array as the elements are stored in consecutive memory locations and the shifting operation is costly. Allocating more memory than the requirement leads to wastage of memory space and less allocation of memory also leads to a problem.
# Q3. Describe the main differences between the array and numpy packages.
```
# Ans : The array package doesn't provide any help with numerical calculations on the items inside it,
# while NumPy gives you a wide variety of numerical operations. An array is a single-dimensional entity
# which holds numerical data, while a numpy array can have more than one dimension. In an array an item
# is accessed simply by its index position, while in a 2-d numpy array an item is accessed by its row
# and column index. The same applies to appending. An array does not form a tabular structure,
# while a 2-d numpy array does.
```
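An added example contrasting the two (for illustration only):
```
from array import array
import numpy as np

a_std = array('i', [1, 2, 3, 4])      # 1-dimensional only, no vectorised math
a_np = np.array([[1, 2], [3, 4]])     # n-dimensional, supports numerical operations

print(a_std[2])       # plain index access
print(a_np * 10)      # operation applied to every element
print(a_np[1, 0])     # row/column indexing
```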
# Q4. Explain the distinctions between the empty, ones, and zeros functions.
```
# Ans : empty: np.empty returns a new array of the given shape and data type WITHOUT initializing its
# entries, so its contents are whatever happened to be in that memory; it is useful when every element
# will be overwritten anyway.
# ones: this function returns a new array of the given shape and data type where every element's value is 1.
# zeros: this function returns a new array of the given shape and data type where every element's value is 0.
```
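A short added example of the three constructors (the `empty` output will vary because it is uninitialised):
```
import numpy as np

print(np.zeros((2, 3)))               # every element is 0.0
print(np.ones((2, 3), dtype=int))     # every element is 1
print(np.empty((2, 3)))               # uninitialised: arbitrary memory contents
```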
# Q5. In the fromfunction function, which is used to construct new arrays, what is the role of the callable argument?
```
# Ans : Its function is to execute the function over each coordinate and the resulting array. The function is called
# with N parameters, where N is the rank of shape. Each parameter represents the coordinates of the array
# varying along a specific axis.
```
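An added illustrative example: the callable receives one coordinate array per axis of the requested shape.
```
import numpy as np

table = np.fromfunction(lambda i, j: i * 10 + j, (3, 3), dtype=int)
print(table)   # [[ 0  1  2], [10 11 12], [20 21 22]]
```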
# Q6. What happens when a numpy array is combined with a single-value operand (a scalar, such as an int or a floating-point value) through addition, as in the expression A + n?
```
# Ans : If any scaler value such as integer is added to the numpy array then all the elements inside the array
# will add that value in it.
# Example :
import numpy as np
a=np.arange(9).reshape(3,3)
print(a)
print()
print(a+1)
```
# Q7. Can array-to-scalar operations use combined operation-assign operators (such as += or *=)? What is the outcome?
Ans : It will do the operation as per operators. Like if we use + operand it will update the current array by adding and when we use '*', it will update by multiplying.
```
# Example :
print(a)
a+=1
print(a)
a*=2
print(a)
```
# Q8. Does a numpy array contain fixed-length strings? What happens if you allocate a longer string to one of these arrays?
```
# Ans : Yes, numpy string arrays use fixed-length strings. The dtype of a numpy array containing
# string values is set from the maximum length of any string present at creation time.
# Once set, the array can only store new strings whose length does not exceed that maximum.
# If we try to reassign a string value longer than the maximum length of the existing elements,
# numpy silently truncates it and keeps only the characters up to that limit.
import numpy as np
name = np.array(['ram', 'mohan', 'shiva'])
name
name[name=='ram']='undertaker'
print(name)
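# Note: 'undertaker' is stored as 'under' because the dtype was fixed at '<U5' when the array was created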
```
# Q9. What happens when you combine two numpy arrays using an operation like addition (+) or multiplication (*)? What are the conditions for combining two numpy arrays?
```
# Ans : It will simply add or multiply element to element at same position.The only requirement which must be met are:
# 1)Data type should be same.
# 2) Shape of the two matrices must be same
# Example is as follows :
a1=a
a1
a2=a+2
a2
a1+a2
a1*a2
a1+a2.reshape(9,1)   # raises a ValueError: shapes (3,3) and (9,1) cannot be broadcast together
```
# Q10. What is the best way to use a Boolean array to mask another array?
```
# Ans :
y = np.array([True,True,False,True])
x = np.array([1,2,3,4])
m = np.ma.masked_where(x>2,y)
print(list(m))
print(m.ndim)
```
# Q11. What are three different ways to get the standard deviation of a wide collection of data using both standard Python and its packages? Sort the three of them by how quickly they execute.
```
# Ans : Standard deviation = square root of [ sum of squared deviations from the mean / number of observations ].
# Three ways to compute it, ordered roughly from fastest to slowest on large collections:
x = [1, 2]
N = len(x)
avg = sum(x) / N
# 1) Using the numpy package (vectorised, usually the fastest)
import numpy as np
SD_numpy = np.std(x)
# 2) Using the math module with the explicit formula
import math
SD_math = math.sqrt(sum((xi - avg) ** 2 for xi in x) / N)
# 3) General calculation without using any package
SD_plain = (sum((xi - avg) ** 2 for xi in x) / N) ** 0.5
print(SD_numpy, SD_math, SD_plain)
```
# **Names**
Lucas Moura da Silva - RA: 148341 - Turma IA
Maria Paula Henriques Prandt - RA: 148153 - Turma IB
Viviane Fajardo Filgueiras - RA: 148760 - Turma IB
# ***Installing BioPython***
```
!pip install biopython
```
# ***Libraries***
```
from google.colab import drive
drive.mount('/content/drive')
#Bibliotecas do BioPython
from Bio.Seq import Seq
from Bio import SeqIO
from Bio.SeqUtils import GC
from Bio.SeqUtils.ProtParam import ProteinAnalysis
import Bio.SeqUtils.ProtParam as prot
from Bio import pairwise2 as p2
from Bio.Blast import NCBIWWW
from Bio.Blast import NCBIXML
from collections import Counter
import collections
import matplotlib.pyplot as plt
import pylab as plb
import numpy as np
import math
```
## ***Sequences***
## ***Question a***
We will use the sequences of the following organisms, according to their GenBank identifiers:
* [MN908947.3](https://www.ncbi.nlm.nih.gov/nuccore/MN908947.3)
* [MT012098](https://www.ncbi.nlm.nih.gov/nuccore/MT012098)
* [MZ264787.1](https://www.ncbi.nlm.nih.gov/nuccore/MZ264787.1)
* [NC_019843.3](https://www.ncbi.nlm.nih.gov/nuccore/NC_019843.3)
All of these are coronaviruses, but the first three were isolated in 2020, in different places, and the last one was isolated in 2012. The respective isolation locations of the viruses were:
* Wuhan, China;
* Kerala State, India;
* Manaus, Amazonas, Brazil;
* Middle East;
To see what their sequences are, the code below reads each one.
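As an aside, the same records could also be fetched directly from GenBank with `Bio.Entrez` instead of reading local FASTA files; a minimal sketch (the e-mail address is a placeholder and network access is assumed):
```
from Bio import Entrez, SeqIO

Entrez.email = "[email protected]"  # placeholder: NCBI asks for a contact e-mail
ids = ["MN908947.3", "MT012098", "MZ264787.1", "NC_019843.3"]
handle = Entrez.efetch(db="nucleotide", id=",".join(ids), rettype="fasta", retmode="text")
records = list(SeqIO.parse(handle, "fasta"))
handle.close()
for rec in records:
    print(rec.id, len(rec.seq))
```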
# **Question b**
```
n = ['A','C','G','T']
seq1 = list()
seq2 = list()
seq3 = list()
seq4 = list()
seq_total = list()
id = list()
tamanhos = list()
# Note que nesses FORs iremos coletar as informações precisas para o item (c) e (d)
for i in SeqIO.parse('/content/drive/Shareddrives/AB/Sequencias/MN908947_3.fasta','fasta'):
id.append(i.id)
tamanhos.append(len(i))
seq_total.append(i.seq)
print('\nid: ' + id[0])
print('sequencia: ' + i.seq)
print('tamanho: ' + str(str(len(i))))
for c in range(0, 4):
seq1.append(i.seq.count(n[c]))
for i in SeqIO.parse('/content/drive/Shareddrives/AB/Sequencias/MT012098.fasta','fasta'):
id.append(i.id)
tamanhos.append(len(i))
seq_total.append(i.seq)
print('\nid: ' + id[1])
print('sequencia: ' + i.seq)
print('tamanho: ' + str(len(i)))
for c in range(0, 4):
seq2.append(i.seq.count(n[c]))
for i in SeqIO.parse('/content/drive/Shareddrives/AB/Sequencias/MZ264787_1.fasta','fasta'):
id.append(i.id)
tamanhos.append(len(i))
seq_total.append(i.seq)
print('\nid: ' + id[2])
print('sequencia: ' + i.seq)
print('tamanho: ' + str(len(i)))
for c in range(0, 4):
seq3.append(i.seq.count(n[c]))
for i in SeqIO.parse('/content/drive/Shareddrives/AB/Sequencias/NC_019843_3.fasta','fasta'):
id.append(i.id)
tamanhos.append(len(i))
seq_total.append(i.seq)
print('\nid: ' + id[3])
print('sequencia: ' + i.seq)
print('tamanho: ' + str(len(i)))
for c in range(0, 4):
seq4.append(i.seq.count(n[c]))
```
# **Question c**
To obtain the bar chart of the nucleotide frequencies, we wrote the following code:
```
# Gráfico
labels = ['A','C','G','T']
x = np.arange(len(labels))
width = 0.15
hfont = {"fontsize":"12"}
hfont_title = {"fontsize": "15"}
fig, ax = plt.subplots()
rects1 = ax.bar(x - 1.5*width, seq1, width, label=id[0])
rects2 = ax.bar(x - 0.5*width, seq2, width, label=id[1])
rects3 = ax.bar(x + 0.5*width, seq3, width, label=id[2])
rects4 = ax.bar(x + 1.5*width, seq4, width, label=id[3])
ax.set_ylabel('Quantidade', **hfont_title)
ax.set_title('Proporção de nucleotídeos', **hfont_title)
ax.set_xticks(x)
ax.set_xticklabels(labels, **hfont)
ax.legend()
fig.tight_layout()
def autolabel(rects, ax):
# pega a altura de eixo y para calcular o posicao do label
(y_bottom, y_top) = ax.get_ylim()
y_height = y_top - y_bottom
for rect in rects:
height = rect.get_height()
# Fracao da altura do eixo assumido por este retangulo
p_height = (height / y_height)
# Caso de para colocar o label acima da barra
# Caso contrario, colocar dentro da barra
if p_height > 0.95:
label_position = height
else:
label_position = height - (y_height * 0.0001)
ax.text(rect.get_x() + rect.get_width()/2., label_position,
'%d' % int(height),
ha='center', va='bottom')
autolabel(rects1, ax)
autolabel(rects2, ax)
autolabel(rects3, ax)
autolabel(rects4, ax)
fig.set_figheight(14)
fig.set_figwidth(16)
plt.show()
```
Given the chart above, we can see differences in nucleotide counts between the sequences, although they are very small among the first three, all of which come from the SARS-CoV-2 virus. The sequence that differs most from the other three is the virus originating in the Middle East, known as MERS-CoV.
# **Question d**
```
gc_values = list()
# Função do biopython - conteúdo GC
# Aqui vamos calular o conteudo GC de cada sequência e armazenar na lista gc_values
gc_values.append(sorted(GC(i.seq)for i in SeqIO.parse('/content/drive/Shareddrives/AB/Sequencias/MN908947_3.fasta','fasta')))
gc_values.append(sorted(GC(i.seq) for i in SeqIO.parse('/content/drive/Shareddrives/AB/Sequencias/MT012098.fasta','fasta')))
gc_values.append(sorted(GC(i.seq) for i in SeqIO.parse('/content/drive/Shareddrives/AB/Sequencias/MZ264787_1.fasta','fasta')))
gc_values.append(sorted(GC(i.seq) for i in SeqIO.parse('/content/drive/Shareddrives/AB/Sequencias/NC_019843_3.fasta','fasta')))
# Calulo da temperatura de meeting (depende da concentração de [Na+])
print('Considerando que a concentração de [Na + ] = 100 mM\n')
for i in range(0, 4):
GC = gc_values[i][0]
tm = 64.9 + 0.41 * GC - (500/tamanhos[i])
print(f'A temperatura de melting da sequência {i+1} é de aproximadamente {tm:.2f}°C\n')
```
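As a rough sanity check of the formula above: assuming a GC content of about 38% and a length of roughly 29,900 bases (typical of the SARS-CoV-2 genomes used here), tm ≈ 64.9 + 0.41·38 − 500/29900 ≈ 80.5 °C, so the printed values should be on the order of 80 °C.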
# ***Question e***
Performing pairwise global alignments of the first 800 nucleotides of the sequences, we obtain 6 alignments. More succinctly, the maximum scores and the similarities between the sequences are, respectively:
* Between MN908947.3 and MT012098.1: 787.0 and 98.375%;
* Between MN908947.3 and MZ264787.1: 786.0 and 98.25%;
* Between MN908947.3 and NC_019843.3: 530.0 and 66.25%;
* Between MT012098.1 and MZ264787.1: 792.0 and 99.0%;
* Between MT012098.1 and NC_019843.3: 531.0 and 66.375%;
* Between MZ264787.1 and NC_019843.3: 530.0 and 66.25%;
Returning to question (a), we can clearly see that the sequences MN908947.3, MT012098.1 and MZ264787.1 differ from NC_019843.3; even though all belong to the same family, Beta-CoV, they are distinct.
Below is the code we used to obtain the maximum scores and similarities of the alignments. Since 800 positions are aligned, the similarity percentage is simply score/800 × 100, i.e. score/8.
```
# fazendo o alinhamento global e a impresao do score
for i in range(len(seq_total)):
for j in range(i, len(seq_total)):
if(i != j):
alinhamento = p2.align.globalxx(seq_total[i][:800],seq_total[j][:800])
print(f'O score maximo entre {id[i]} e {id[j]} eh {alinhamento[0].score}')
print(f'Ja similariadade entre as sequencias sao de {alinhamento[0].score/8}%\n')
print('Alem disso, obtemos o seguinte alinhamento:\n')
print(p2.format_alignment(*alinhamento[0]))
```
# ***Question f***
To obtain the amino acid frequencies, we first translate the nucleotide triplets into amino acids with the code below. We can then observe the frequency of each amino acid for each of the betacoronavirus sequences presented above.
```
#tradução de cada sequencia
proteina_seq = list()
proteinas = ['A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'V', 'W', 'Y']
porcent = list()
#print(Bio.SeqUtils.ProtParamData.kd)
for i in seq_total:
proteina_seq.append(str(i.translate()))
for i in range(len(proteina_seq)):
a = ProteinAnalysis(proteina_seq[i])
acids = a.count_amino_acids()
porcent.append(a.get_amino_acids_percent())
prot1 = list()
prot2 = list()
prot3 = list()
prot4 = list()
for i in proteinas:
prot1.append(100*porcent[0][i])
for i in proteinas:
prot2.append(100*porcent[1][i])
for i in proteinas:
prot3.append(100*porcent[2][i])
for i in proteinas:
prot4.append(100*porcent[3][i])
```
The protein sequence of each record can be seen below
```
for i in range(len(proteina_seq)):
print(f'A sequencia {id[i]} tem proteinas da seguinte forma:\n {proteina_seq[i]}\n')
```
In the chart below, we can see the percentage frequency of each amino acid present in the protein of each respective coronavirus
```
#Usei porcentagem
x = np.arange(len(proteinas))
y = np.linspace(0, 16, 9)
width = 0.23
fig, ax = plt.subplots()
rects1 = ax.bar(x - 1.5*width, prot1, width, label=id[0])
rects2 = ax.bar(x - 0.5*width, prot2, width, label=id[1])
rects3 = ax.bar(x + 0.5*width, prot3, width, label=id[2])
rects4 = ax.bar(x + 1.5*width, prot4, width, label=id[3])
ax.set_ylabel('Frequencia', **hfont_title)
ax.set_title('Proporção de Aminoacidos', **hfont_title)
ax.set_xticks(x)
ax.set_xticklabels(proteinas, **hfont)
ax.legend(**hfont)
ax.set_yticklabels(y, **hfont)
def autolabelfloat(rects, ax, font):
# pega a altura de eixo y para calcular o posicao do label
(y_bottom, y_top) = ax.get_ylim()
y_height = y_top - y_bottom
for rect in rects:
height = rect.get_height()
# Fracao da altura do eixo assumido por este retangulo
p_height = (height / y_height)
# Caso de para colocar o label acima da barra
# Caso contrario, colocar dentro da barra
if p_height > 0.95:
label_position = height
else:
label_position = height - (y_height * 0.0001)
plt.rcParams["font.size"] = font
ax.text(rect.get_x() + rect.get_width()/2., label_position,
'%.2f' % height,
ha='center', va='bottom')
autolabelfloat(rects1, ax, 7)
autolabelfloat(rects2, ax, 7)
autolabelfloat(rects3, ax, 7)
autolabelfloat(rects4, ax, 7)
fig.set_figheight(7)
fig.set_figwidth(20)
plt.show()
```
# ***Question g***
The fraction of each secondary-structure element (helix, turn and beta sheet, respectively) is given by:
* MN908947.3:
  * Helix: 0.32;
  * Turn: 0.20;
  * Beta sheet: 0.17;
* MT012098.1:
  * Helix: 0.36;
  * Turn: 0.20;
  * Beta sheet: 0.25;
* MZ264787.1:
  * Helix: 0.39;
  * Turn: 0.16;
  * Beta sheet: 0.24;
* NC_019843.3:
  * Helix: 0.39;
  * Turn: 0.18;
  * Beta sheet: 0.26;
```
estruturas = list()
for i in range(len(proteina_seq)):
analyse = ProteinAnalysis(proteina_seq[i])
fracao = analyse.secondary_structure_fraction()
estruturas.append(fracao)
print(f'Proporcao de cada componente da estrutura secundaria de {id[i]} e:')
print(f'Dupla Helice: {fracao[0]:.2f};')
print(f'Curva da proteina: {fracao[1]:.2f};')
print(f'Folha Beta: {fracao[2]:.2f};\n')
```
With the structure fractions above, we obtain the following chart
```
y = [0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4]
labels = ['Dupla Hélice','Curva da proteina','Folha Beta']
x = np.arange(len(labels))
fig, ax = plt.subplots()
rects1 = ax.bar(x - 1.5*width, estruturas[0], width, label=id[0])
rects2 = ax.bar(x - 0.5*width, estruturas[1], width, label=id[1])
rects3 = ax.bar(x + 0.5*width, estruturas[2], width, label=id[2])
rects4 = ax.bar(x + 1.5*width, estruturas[3], width, label=id[3])
ax.set_ylabel('Frequência', **hfont_title)
ax.set_title('Estruturas das Proteinas', **hfont_title)
ax.set_xticks(x)
ax.set_xticklabels(labels, **hfont)
ax.legend(**hfont)
#podemos fazer assim tambem, mas custara muita processamento
#ax.set_yticklabels(((ax.get_yticks() * 1000) // 10) /100 , **hfont)
ax.set_yticklabels(y , **hfont)
autolabelfloat(rects1, ax, 10)
autolabelfloat(rects2, ax, 10)
autolabelfloat(rects3, ax, 10)
autolabelfloat(rects4, ax, 10)
fig.set_figheight(8)
fig.set_figwidth(10)
plt.show()
```
```
from datasets.karel import parser_for_synthesis, mutation, karel_runtime
from datasets import dataset, executor
import collections
import copy
import json
import cPickle as pickle
import pprint
import operator
import numpy as np
def visualize_trace(trace):
kr = karel_runtime.KarelRuntime()
event_idx = 0
for i, grid in enumerate(trace.grids):
while event_idx < len(trace.events) and trace.events[event_idx].timestep == i:
print trace.events[event_idx]
event_idx += 1
field = np.zeros((15, 18, 18), dtype=np.bool)
field.ravel()[grid] = True
kr.init_from_array(field)
kr.draw()
while event_idx < len(trace.events):
print trace.events[event_idx]
event_idx += 1
def visualize_grid(grid):
kr = karel_runtime.KarelRuntime()
field = np.zeros((15, 18, 18), dtype=np.bool)
field.ravel()[grid] = True
kr.init_from_array(field)
kr.draw()
dses = [dataset.KarelTorchDataset(
'data/karel/train.pkl', mutation.KarelExampleMutator([0.0] * i + [1.0], rng_fixed=True, add_trace=True)) for i in range(3)]
the_executor = executor.KarelExecutor()
all_results = []
for i in range(10000):
# First get the original trace lengths
for dist in range(4):
example = dses[max(dist - 1, 0)][i]
if dist > 0:
mutated_tests = example.ref_example.input_tests
for test_idx, test in enumerate(mutated_tests):
trace_length = len(test['trace'].grids) + len(test['trace'].events)
last_event = test['trace'].events[-1]
all_results.append({
'idx': i, 'test_idx': test_idx, 'dist': dist,
'trace_length': trace_length, 'last_event': last_event,
'example': example})
continue
for test_idx in range(5):
result = the_executor.execute(
example.code_sequence, None, example.input_tests[test_idx]['input'], record_trace=True, strict=True)
trace_length = len(result.trace.grids) + len(result.trace.events)
last_event = result.trace.events[-1]
all_results.append(
{'idx': i, 'test_idx': test_idx, 'dist': 0, 'trace_length': trace_length, 'last_event': last_event})
if i % 1000 == 0: print i
```
## Length distribution for unmodified code
```
# Run 1
for dist in range(4):
print np.percentile([r['trace_length'] for r in all_results if r['dist'] == dist], [50, 90, 99, 99.9, 99.99, 100])
# Run 2
for dist in range(4):
print np.percentile([r['trace_length'] for r in all_results if r['dist'] == dist], [50, 90, 99, 99.9, 99.99, 100])
# Run 3, with illegal actions gone + shorter cap
for dist in range(4):
print np.percentile([r['trace_length'] for r in all_results if r['dist'] == dist], [50, 90, 99, 99.9, 99.99, 100])
```
## What long traces end in
```
for dist in range(4):
results = [r for r in all_results if r['dist'] == dist]
results.sort(key=lambda r: r['trace_length'], reverse=True)
for r in results[:10]:
print dist, r['trace_length'], r['idx'], r['test_idx'], r['last_event']
for dist in range(4):
results = [r for r in all_results if r['dist'] == dist]
results.sort(key=lambda r: r['trace_length'], reverse=True)
for r in results[:10]:
print dist, r['trace_length'], r['idx'], r['test_idx'], r['last_event']
for dist in range(4):
results = [r for r in all_results if r['dist'] == dist]
results.sort(key=lambda r: r['trace_length'], reverse=True)
for r in results[:10]:
print dist, r['trace_length'], r['idx'], r['test_idx'], r['last_event']
results = [r for r in all_results if r['dist'] == 1]
results.sort(key=lambda r: r['trace_length'], reverse=True)
r = results[0]
print r['trace_length'], r['idx'], r['test_idx'], r['last_event']
print ' '.join(r['example'].ref_example.code_sequence)
visualize_trace(r['example'].ref_example.input_tests[0]['trace'])
results = [r for r in all_results if r['dist'] == 1]
results.sort(key=lambda r: r['trace_length'], reverse=True)
r = results[0]
print r['trace_length'], r['idx'], r['test_idx'], r['last_event']
dses[1][8580].ref_example.input_tests[0]['trace'].events[-1]
results = [r for r in all_results if r['dist'] == 3]
print results[25]
ex = results[25]['example']
print ' '.join(ex.ref_example.code_sequence)
print ' '.join(ex.code_sequence)
for i in range(5):
print 'Input:'
visualize_grid(ex.input_tests[i]['input'])
print 'Output:'
visualize_grid(ex.input_tests[i]['output'])
print
print 'Wrong trace:'
visualize_trace(ex.ref_example.input_tests[0]['trace'])
result = the_executor.execute(ex.code_sequence, None, ex.input_tests[0]['input'], record_trace=True, strict=True)
visualize_trace(result.trace)
```
# Old code
```
def vis(i, j, code=None):
result = the_executor.execute(
code if code else ds[i].code_sequence, None, ds[i].input_tests[j]['input'], record_trace=True, strict=True)
visualize_trace(result.trace)
original_lengths = []
mutated_lengths = []
def trace_length(example):
trace_lengths = []
for i in range(5):
result = the_executor.execute(
example.code_sequence, None, example.input_tests[i]['input'], record_trace=True, strict=True)
trace_lengths.append(len(result.trace.grids) + len(result.trace.events))
original_lengths.append((example.idx, i, trace_lengths[-1]))
mutated_test = example.ref_example.input_tests[i]
mutated_lengths.append((example.idx,
example.ref_example.code_sequence,
i,
len(mutated_test['trace'].grids) + len(mutated_test['trace'].events)))
#print 'Original:', trace_lengths
#print 'Mutated: ', [len(it['trace'].grids) + len(it['trace'].events) for it in example.ref_example.input_tests]
#print
for i in range(10000):
trace_length(ds[i])
if i % 1000 == 0:
print i
sorted(original_lengths, key=operator.itemgetter(-1), reverse=True)
' '.join(ds[8913].code_sequence)
vis(8913, 0)
for i, code, j, _ in sorted(mutated_lengths, key=operator.itemgetter(-1), reverse=True)[:5]:
vis(i, j, code)
print ' '.join(code)
print
print '======================='
print
```
```
import keras
```
# Dataset Preprocessing
### Reading the data from the BBL JSON
```
import urllib.request, json
with urllib.request.urlopen("http://statistik.easycredit-bbl.de/XML/exchange/540/Schedule.php?type=json&saison=2017&fixedGamesOnly=0") as url:
games = json.loads(url.read().decode())
print(json.dumps(games, indent=4, sort_keys=True))
```
### Preparing the data
#### Creating a list of the arenas & teams
```
arena=[]
home_ids=[]
for i in range(0,len(games['competition'][0]['spiel'])):
if games['competition'][0]['spiel'][i]['home_id'] not in home_ids:
arena.append(games['competition'][0]['spiel'][i]['arenaName'])
home_ids.append(games['competition'][0]['spiel'][i]['home_id'])
print(len(arena)) #Um sicher zu gehen, dass alle Arenen vorhanden sind
print(len(home_ids)) #Um sicher zu gehen, dass alle Teams vorhanden sind
```
### Assembling the dataset
```
dataset=[]
for i in range(0,len(games['competition'][0]['spiel'])):
datasetrow=[]
datasetrow.append(games['competition'][0]['spiel'][i]['home_id'])
datasetrow.append(games['competition'][0]['spiel'][i]['gast_id'])
datasetrow.append(int(games['competition'][0]['spiel'][i]['home_result']>games['competition'][0]['spiel'][i]['gast_result']))
dataset.append(datasetrow)
print(dataset)
```
#### Converting the dataset into a NumPy array
```
import numpy as np
# : -> auslesen aller zeilen
dataset=np.asarray(dataset)
print(dataset[:,0])
print(len(dataset))
```
#### One-hot encoding of the teams
```
from sklearn.preprocessing import LabelBinarizer
encoder = LabelBinarizer()
transformed_home_ids = encoder.fit_transform(dataset[:,0])
print(transformed_home_ids)
transformed_gast_ids = encoder.transform(dataset[:,1]) #ohne fit, damit die Teams eindeutig bleiben, nur transformation notwendig
print(transformed_gast_ids)
```
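To illustrate what the binarizer produces, a small toy example with made-up team labels (each team becomes one column, ordered alphabetically by label):
```
from sklearn.preprocessing import LabelBinarizer

enc = LabelBinarizer()
print(enc.fit_transform(['FCB', 'ALBA', 'EWE']))   # columns: ALBA, EWE, FCB
# [[0 0 1]
#  [1 0 0]
#  [0 1 0]]
print(enc.transform(['EWE']))                      # same mapping reused, as for the away teams above
# [[0 1 0]]
```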
### Merging the columns home_ids, gast_ids, win or lose
```
data=np.c_[transformed_home_ids,transformed_gast_ids,dataset[:,2]]
np.random.shuffle(data)
print(data)
print(len(data[0])) #Anzahl der Neuronen
neuronen = len(data[0])-1
```
# Network modelling
```
# Importing the Keras libraries and packages
from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers
adam = optimizers.Adam(lr=0.001) # lernrate
# Initialising the ANN
regressor = Sequential()
# Adding the input layer and the first hidden layer
regressor.add(Dense(units = neuronen, kernel_initializer = 'uniform', activation = 'relu', input_shape = (neuronen,)))
# Adding the second hidden layer
#regressor.add(Dense(units = 18, kernel_initializer = 'uniform', activation = 'relu'))
# Adding the output layer
regressor.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
#Summary anzeigen
regressor.summary()
# Compiling the ANN - wie soll es lernen
regressor.compile(optimizer = adam, loss = 'mean_squared_error', metrics = ['accuracy'])
# Fitting the ANN to the Training set
history = regressor.fit(data[:,0:neuronen], data[:,neuronen], batch_size = 10, epochs = 100, validation_split = 0.3)
import matplotlib.pyplot as plt
#Accuracy Diagramm
handles = []
label, = plt.plot(history.history['acc'], label="acc")
handles.append(label)
label, = plt.plot(history.history['val_acc'], label="val_acc")
handles.append(label)
plt.title('Kostenfunktion')
plt.ylabel('Kosten')
plt.xlabel('Epochen')
plt.legend(handles=handles, loc='upper right')
figure = plt.gcf() # get current figure
figure.set_size_inches(8, 6) # um die größe des Plots anzupassen
plt.show()
#Loss Diagramm
handles = []
label, = plt.plot(history.history['loss'], label="loss")
handles.append(label)
label, = plt.plot(history.history['val_loss'], label="val_loss")
handles.append(label)
plt.title('Kostenfunktion')
plt.ylabel('Kosten')
plt.xlabel('Epochen')
plt.legend(handles=handles, loc='upper right')
figure = plt.gcf() # get current figure
figure.set_size_inches(8, 6) # um die größe des Plots anzupassen
#plt.savefig(pathpathpaht) # hiermit kannst das ding als auch als bild an dem angegebenen ort plus name ablegen
plt.show()
```
### Saving the network as a pickle file
```
import time as tm
import datetime
import pickle
def create_file_name():
ts = tm.time()
name = datetime.datetime.fromtimestamp(ts).strftime('%Y%m%d%H%M%S') + '_ann'
return name
path='./Netze/' #Pfad muss angepasst werden
name_file= create_file_name()
with open(path + name_file + '.pkl', 'wb') as output:
ann_net = {'history_val_loss':history.history['val_loss'],'history_loss':history.history['loss']}
pickle.dump(ann_net, output)
```
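Note that the pickle above only stores the loss curves, not the trained network itself. If the model should be reusable later, one option is Keras' own save/load functions (a sketch reusing the same path and file-name convention as above):
```
# save the full model (architecture + weights + optimizer state)
regressor.save(path + name_file + '.h5')

# ... and restore it later
from keras.models import load_model
restored = load_model(path + name_file + '.h5')
```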
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.stats
#Data from acridone stern volmer
concKI = np.array([0, 0.040, 0.100, 0.200, 0.300, 0.500, 0.800]) #[KI] / M
intensity = np.array([16580, 3753, 1566, 721, 446, 242, 121]) #intensity data
tau = np.array([17.60, 3.90, 1.80, 0.95, 0.64, 0.39, 0.25]) #lifetime data
ratioint = 16580 / intensity #calculating I_0/I for each value
ratiotau = 17.60/tau #calculating tau_0/tau for each value
ratiointtau = ratioint/ratiotau #calculating I_0/I/tau_0/tau for each value - this is because the ratioint plot is curved
# plt.title('Quenching of acridone')
# plt.plot(concKI, ratioint, "o")
# plt.legend() # Shows the legend
# plt.xlabel('[KI] / M')
# plt.ylabel('I$_0$ / I')
# secaxy = ax.secondary_yaxis('right', functions=(CtoF, FtoC))
# secaxy.set_ylabel(r'$T\ [^oF]$')
# ## show the plot
# plt.show()
#intensity fitting - determining the linear regression to determine the line of best fit for the I_0/I data
intfit = scipy.stats.linregress(concKI, ratioint)
intslope= (intfit[0])
intint= (intfit[1])
fitint = intslope * concKI + intint
#lifetime fitting - determining the linear regression to determine the line of best fit for the tau_0/tau data
taufit = scipy.stats.linregress(concKI, ratiotau)
tauslope= (taufit[0])
tauint= (taufit[1])
fittau = tauslope * concKI + tauint
#ratio fitting - determining the linear regression to determine the line of best fit for the I_0/I/tau_0/tau data
ratiofit = scipy.stats.linregress(concKI, ratiointtau)
ratioslope= (ratiofit[0])
ratioint_tau= (ratiofit[1])
fitratio = ratioslope * concKI + ratioint_tau
sns.set_context('talk') #fancy very quick way to set how the graph looks using seaborn
fig,ax1 = plt.subplots(figsize=(6,6)) #setting the size to square
plt.title('Quenching of acridone emission intensity - not a straight line') #my title doh!
ax1.plot(concKI, ratioint, "o", color='#7570b3') #the data points - just choosing colours which should be good for the colourblind
ax1.plot(concKI, fitint, "-", color='#7570b3') #the fit
ax1.set_ylabel(r'$I_0/I$') #labelling my axis - I can't remember what the r was for...
plt.savefig('acridonequenchI0I.png',transparent=True)
# ax2 = ax1.twinx()
# ax2.plot(concKI, ratiotau, '^', color='#1b9e77')
# ax2.plot(concKI, fittau, "-", color='#1b9e77')
# ax2.set_ylabel(r'$\tau_o/\tau$')
ax1.set_xlabel('[KI] / M') #no r here...
plt.show() #prints my graph! Oh no it is curved it must be a combination of static and dynamic quenching
#that graph is awful - lets draw some nicer graphs
fig,ax1 = plt.subplots(figsize=(6,6))
plt.title('Quenching of acridone')
ax1.plot(concKI, ratiointtau, "o", color='#7570b3') #static data points
ax1.plot(concKI, fitratio, "-", color='#7570b3') #static fit
ax1.set_ylabel(r'$I_0/I / \tau_0 / \tau$')
ax2 = ax1.twinx()
ax2.plot(concKI, ratiotau, '^', color='#1b9e77') #dynamic data points
ax2.plot(concKI, fittau, "-", color='#1b9e77') #dynamic fit
ax2.set_ylabel(r'$\tau_o/\tau$')
ax1.set_xlabel('[KI] / M')
plt.savefig('acridonequenchI0I.png',transparent=True)
plt.show()
print ('Ks =' + str(ratiofit[0])) #static quenching constant - no units as an equilibrium constant - yay for activity
print ('kq =' + str(taufit[0]/(tau[0]*1e-9)) + ' M^-1 s^-1') #dynamic rate constant - this is taking the gradient and multiplying by the value of tau_0
```
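For combined static and dynamic quenching the Stern-Volmer relation reads I0/I = (1 + KD[Q])(1 + KS[Q]), while the lifetime only reports the dynamic part, tau0/tau = 1 + KD[Q]. Dividing the intensity ratio by the lifetime ratio therefore leaves 1 + KS[Q], which is why the ratio plot above is linear: its slope is taken as KS, and the slope of tau0/tau gives KD = kq·tau0, hence kq = slope/tau0 in the final print statement.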
```
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
import numpy as np
import os
import time
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
import json
import glob
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from PIL import Image
tf.__version__
!python datasets/base.py ./images/char-2-epoch-10
# DATA_DIR = '/home/jackon/captcha-tensorflow/images/char-4-epoch-6/train' # 30241 images. validate accuracy: 87.6%
# DATA_DIR = '/home/jackon/captcha-tensorflow/images/char-4-epoch-60/train' # 302410 images. validate accuracy: 98.8%
DATA_DIR = './images/char-2-epoch-10/train'
H, W, C = 100, 80, 3 # height, width, 3(RGB channels)
N_LABELS = 256 # label_size
D = 2 # num_per_image
def parse_filepath(filepath):
try:
path, filename = os.path.split(filepath)
filename, ext = os.path.splitext(filename)
label, _ = filename.split("_")
return label
except Exception as e:
print('error to parse %s. %s' % (filepath, e))
return None, None
# create a pandas data frame of captcha labels and file paths
files = glob.glob(os.path.join(DATA_DIR, "*.png"))
attributes = list(map(parse_filepath, files))
df = pd.DataFrame(attributes)
df['file'] = files
df.columns = ['label', 'file']
df = df.dropna()
df.head()
p = np.random.permutation(len(df))
train_up_to = int(len(df) * 0.7)
train_idx = p[:train_up_to]
test_idx = p[train_up_to:]
# split train_idx further into training and validation set
train_up_to = int(train_up_to * 0.7)
train_idx, valid_idx = train_idx[:train_up_to], train_idx[train_up_to:]
print('train count: %s, valid count: %s, test count: %s' % (
len(train_idx), len(valid_idx), len(test_idx)))
from tensorflow.keras.utils import to_categorical
from PIL import Image
def get_data_generator(df, indices, for_training, batch_size=16):
images, labels = [], []
while True:
for i in indices:
r = df.iloc[i]
file, label = r['file'], r['label']
im = Image.open(file)
# im = im.resize((H, W))
im = np.array(im) / 255.0
images.append(np.array(im))
labels.append(np.array([np.array(to_categorical(ord(i), N_LABELS)) for i in label]))
if len(images) >= batch_size:
# print(np.array(images), np.array(labels))
yield np.array(images), np.array(labels)
images, labels = [], []
if not for_training:
break
from tensorflow.keras.layers import Input, Dense, BatchNormalization, Conv2D, MaxPool2D, GlobalMaxPool2D, Dropout
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.models import Model
input_layer = tf.keras.Input(shape=(H, W, C))
x = layers.Conv2D(32, 3, activation='relu')(input_layer)
x = layers.MaxPooling2D((2, 2))(x)
x = layers.Conv2D(64, 3, activation='relu')(x)
x = layers.MaxPooling2D((2, 2))(x)
x = layers.Conv2D(64, 3, activation='relu')(x)
x = layers.MaxPooling2D((2, 2))(x)
x = layers.Flatten()(x)
x = layers.Dense(1024, activation='relu')(x)
# x = layers.Dropout(0.5)(x)
x = layers.Dense(D * N_LABELS, activation='softmax')(x)
x = layers.Reshape((D, N_LABELS))(x)
model = models.Model(inputs=input_layer, outputs=x)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics= ['accuracy'])
model.summary()
from tensorflow.keras.callbacks import ModelCheckpoint
batch_size = 64
valid_batch_size = 64
train_gen = get_data_generator(df, train_idx, for_training=True, batch_size=batch_size)
valid_gen = get_data_generator(df, valid_idx, for_training=True, batch_size=valid_batch_size)
callbacks = [
ModelCheckpoint("./model_checkpoint", monitor='val_loss')
]
history = model.fit(train_gen,
steps_per_epoch=len(train_idx)//batch_size,
epochs=5,
# callbacks=callbacks,
validation_data=valid_gen,
validation_steps=len(valid_idx)//valid_batch_size)
def plot_train_history(history):
fig, axes = plt.subplots(1, 2, figsize=(20, 5))
axes[0].plot(history.history['accuracy'], label='Train accuracy')
axes[0].plot(history.history['val_accuracy'], label='Val accuracy')
axes[0].set_xlabel('Epochs')
axes[0].legend()
axes[1].plot(history.history['loss'], label='Training loss')
axes[1].plot(history.history['val_loss'], label='Validation loss')
axes[1].set_xlabel('Epochs')
axes[1].legend()
plot_train_history(history)
test_gen = get_data_generator(df, test_idx, for_training=False, batch_size=128)
dict(zip(model.metrics_names, model.evaluate(test_gen, steps=len(test_idx)//128)))
test_gen = get_data_generator(df, test_idx, for_training=False, batch_size=128)
x_test, y_test = next(test_gen)
y_pred = model.predict_on_batch(x_test)
y_true = tf.math.argmax(y_test, axis=-1)
y_pred = tf.math.argmax(y_pred, axis=-1)
def format_y(y):
return ''.join(map(lambda x: chr(int(x)), y))
import math
n = 30
random_indices = np.random.permutation(n)
n_cols = 5
n_rows = math.ceil(n / n_cols)
fig, axes = plt.subplots(n_rows, n_cols, figsize=(15, 20))
for i, img_idx in enumerate(random_indices):
ax = axes.flat[i]
ax.imshow(x_test[img_idx])
ax.set_title('pred: %s' % format_y(y_pred[img_idx]))
ax.set_xlabel('true: %s' % format_y(y_true[img_idx]))
ax.set_xticks([])
ax.set_yticks([])
```
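The generator above encodes each character of the label as a one-hot vector over the 256 possible byte values (`N_LABELS`), so a 2-character captcha becomes a `(2, 256)` target array, and `format_y` simply inverts that with `chr(argmax)`. A small standalone illustration (the label `'3f'` is just an example):
```
import numpy as np
from tensorflow.keras.utils import to_categorical

label = '3f'                                   # example 2-character label
N_LABELS = 256
y = np.array([to_categorical(ord(c), N_LABELS) for c in label])
print(y.shape)                                 # (2, 256)
print([chr(int(np.argmax(row))) for row in y]) # ['3', 'f']
```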
```
# Dependencies:
# !pip install sentencepiece
# !pip install jieba
# !pip install regex
# !pip install tensorflow
# !pip install tensorflow-hub
import tensorflow_hub as hub
import tensorflow as tf
from gpt2_tokenizer import GPT2Tokenizer
tokenizer = GPT2Tokenizer(
'CPM-Generate/bpe_3w_new/vocab.json',
'CPM-Generate/bpe_3w_new/merges.txt',
model_file='CPM-Generate/bpe_3w_new/chinese_vocab.model')
gpt = hub.load('./cpm-lm-tf2/')
def sample(tokenizer, gpt, sentence, number=1, length=20):
inputs = tf.constant([tokenizer.encode(sentence)] * number, dtype=tf.int64)
length = tf.constant(length, dtype=tf.int64)
ret = gpt.signatures['serving_default'](inp=inputs, length=length)['output_0']
return [
tokenizer.decode(s).replace(' ', '')
for s in ret.numpy()
]
```
# English Q&A example
```
ret = sample(tokenizer, gpt, '默写英文:\n狗dog\n猫cat\n鸟', 3, 10)
for x in ret:
print(x)
print('-' * 20)
```
# Classical poetry recitation example
```
ret = sample(tokenizer, gpt, '默写古诗:\n白日依山尽,黄河入海流。\n床前明月光,', 3, 10)
for x in ret:
print(x)
print('-' * 20)
```
# Dialogue generation with different characters example
```
ret = sample(tokenizer, gpt, '李大嘴:“各回各家,各找各妈!” \n佟掌柜:“', 3, 20)
for x in ret:
print(x)
print('-' * 20)
ret = sample(tokenizer, gpt, '李大嘴:“各回各家,各找各妈!” \n白展堂:“', 3, 20)
for x in ret:
print(x)
print('-' * 20)
ret = sample(tokenizer, gpt, '李大嘴:“各回各家,各找各妈!” \n莫小贝:“', 3, 20)
for x in ret:
print(x)
print('-' * 20)
ret = sample(tokenizer, gpt, '李大嘴:“各回各家,各找各妈!” \n李白:“', 3, 20)
for x in ret:
print(x)
print('-' * 20)
```
# Question-answering example
```
ret = sample(tokenizer, gpt, '中国的首都是北京\n日本的首都是东京\n美国的首都是', 3, 3)
for x in ret:
print(x)
print('-' * 20)
ret = sample(tokenizer, gpt, '李白所在朝代是唐\n李清照所在朝代是宋\n唐伯虎所在朝代是', 3, 1)
for x in ret:
print(x)
print('-' * 20)
```
# Arithmetic example
```
ret = sample(tokenizer, gpt, '1+1=2\n2+2=4\n3+3=6\n4+4=', 3, 1)
for x in ret:
print(x)
print('-' * 20)
ret = sample(tokenizer, gpt, '1+1=2\n1+2=3\n1+3=4\n1+4=', 3, 1)
for x in ret:
print(x)
print('-' * 20)
```
# ??? example
```
ret = sample(tokenizer, gpt, '''惊雷这通天修为
天塌地陷紫金锤
紫电这玄真火焰
''', 3, 30)
for x in ret:
print(x)
print('-' * 20)
```
# Essay-writing example
```
ret = sample(tokenizer, gpt, '''爱情是''', 3, 50)
for x in ret:
print(x)
print('-' * 20)
ret = sample(tokenizer, gpt, '''一时黛玉进了荣府,下了车。众嬷嬷引着,便往东转弯,穿过一个东西的穿堂,向南大厅之后,仪门内大院落,上面五间大正房,两边厢房鹿顶耳房钻山,四通八达,轩昂壮丽,比贾母处不同。黛玉便知这方是正经正内室,一条大甬路,直接出大门的。''', 3, 200)
for x in ret:
print(x)
print('-' * 20)
```
# Dialogue example
```
ret = sample(tokenizer, gpt, '''A:“今天我想吃火锅”
B:“''', 3, 50)
for x in ret:
print(x)
print('-' * 20)
ret = sample(tokenizer, gpt, '''A:“跟我一起去看电影吧”
B:“''', 3, 50)
for x in ret:
print(x)
print('-' * 20)
```
# Couplet example
```
ret = sample(tokenizer, gpt, '对对联:\n天对地\n雨对风\n大陆对长空\n雷隐隐对雾蒙蒙\n开心大吉对万事亨通\n王老五对', 3, 3)
for x in ret:
print(x)
print('-' * 20)
ret = sample(tokenizer, gpt, '对对联:\n天对地\n雨对风\n大陆对长空\n雷隐隐对雾蒙蒙\n开心大吉对万事亨通\n爱因斯坦对', 3, 4)
for x in ret:
print(x)
print('-' * 20)
ret = sample(tokenizer, gpt, '对对联:\n天对地\n雨对风\n大陆对长空\n雷隐隐对雾蒙蒙\n开心大吉对万事亨通\n李白对', 3, 3)
for x in ret:
print(x)
print('-' * 20)
ret = sample(tokenizer, gpt, '对对联:\n天对地\n雨对风\n大陆对长空\n雷隐隐对雾蒙蒙\n开心大吉对万事亨通\n容嬷嬷对', 3, 3)
for x in ret:
print(x)
print('-' * 20)
ret = sample(tokenizer, gpt, '对对联:\n天对地\n雨对风\n大陆对长空\n雷隐隐对雾蒙蒙\n开心大吉对万事亨通\n孙悟空对', 3, 3)
for x in ret:
print(x)
print('-' * 20)
```
```
import pandas as pd
pd.set_option("display.max_columns", 500)
pd.set_option("display.max_rows", 500)
```
## Table description generation
Enter the following info:
- Table name
- Location
- Separator
- Encoding (optional)
- Decimal mark (optional)
```
table = "DM_RETAIL_CLIE.csv"
location = "../../data/raw"
sep = ','
encoding = 'latin1'
decimal = ','
```
### Take a first look at the dataset to identify the most relevant columns
**Run this if it's a big file**
```
for chunk in pd.read_csv(f"{location}/{table}",
sep=sep,
encoding=encoding,
decimal=decimal,
chunksize=1000000):
df = chunk
break
```
**Run this if it's a relatively small file**
```
df = pd.read_csv(f"{location}/{table}",
sep=sep,
encoding=encoding,
decimal=decimal)
df.head(15)
df.dtypes
df.columns
```
*Based on the last output, fill in this list to mark the most relevant columns*
```
to_use = ['DM_RETA_CLIE_ID', 'CIF_DM_ID', 'PRODUCTO', 'ACA', 'NUM_POL1',
'NUM_END', 'NUM_SECU_POL', 'MCA_ANU_POL', 'COD_DOCUMTO',
'FECHA_VIG_PER', 'FOR_COBRO', 'COD_MOD', 'COD_MARCA', 'PAT_VEH',
'MCA_0KM', 'COD_RAMO_VEH', 'MARCA', 'SUMA_ASEG', 'PRIMA', 'PREMIO',
'CIF_ID', 'CP_RIESGO', 'COD_AGENCIA_SOLIC', 'COD_AGENCIA_EMI',
'COD_PROD', 'PERIODO_FACT', 'TIPO_COMB', 'COD_AGENCIA', 'COD_INICIADOR',
'FECHA_SOLIC', 'COD_AGENCIA_GEST', 'CANAL', 'NOM_AGENCIA',
'ZONA_RETAIL', 'NUM_POL_ORI', 'PRODUCTO_PLAN', 'COBERTURA_PLAN',
'ES_ELEGIBLE', 'APTO_E_MAIL', 'COD_CIA', 'COD_SECC', 'COD_RAMO',
'ZONA_ADMIN', 'CANAL_ORIGEN', 'ORIGEN_DESC', 'PROVEEDOR_EQUIPO',
'DESC_EQUIPO', 'FECHA_INSTAL_EQUIPO', 'MARCA_SIMPLIFICADA', 'MODELO',
'GRUPO_COMBUSTIBLE', 'SCORE_PLAN', 'USR_ULT_ACT', 'F_ULT_ACT',
'USR_CARGA', 'F_CARGA', 'FLAG_ROBO_CONTENIDO', 'FLAG_EBILLING',
'VENC_PRENDA', 'ANTIG_MESES', 'FLAG_COBERTURA_309', 'NEGOCIO',
'PROB_BAJA_1MES', 'PROB_BAJA_3MESES', 'PROB_BAJA_6MESES', 'MCA_BAJA']
```
### Now write the file
**If it was a big file, read it completely with this line**
```
chunks = pd.read_csv(f"{location}/{table}",
sep=sep,
encoding=encoding,
decimal=decimal,
chunksize=1000000)
df = pd.concat(chunks)
f = open(f'../../docs/{table} feature description.csv','w')
f.write('Column;Used;Null Rate; dtype; Unique values; values\n')
for column in df.columns:
null_rate = round(df[column].isna().mean() * 100, 2)
unique_vals = df[column].nunique()
if (column in to_use) and null_rate < 50 and unique_vals > 1:
used = 'X'
else:
used=''
dtype = df[column].dtype
if(dtype == 'object'):
values = df[column].value_counts().head(10)
else:
values = f'[{df[column].min()};{df[column].max()}]'
f.write(f'{column};{used};{null_rate};{dtype};{unique_vals};"{values}"\n')
f.close()
```
#Lab.08 / IBM3202 – Trajectory Analysis using MDanalysis
#Theoretical Aspects
Now that you have generated a molecular dynamics trajectory in the previous tutorial, it is crucial to obtain quantifiable insights about your molecular system. There are a handful of metrics that can be employed to achieve this; here we are going to focus on the most popular ones: RMSD, RMSF, and distances. Due to time constraints we are not going to cover more advanced metrics, but some of them are available in the appendix.
<figure>
<center>
<img src="https://amarolab.ucsd.edu/syncImages/c0b042e1-4fe9-4727-9c0c-f556edb1b4a7sars_cov2_spike_protein.gif"/>
<figcaption>FIGURE 1. MD simulations of glycosylated SARS-CoV-2 spike protein attached to a membrane. Taken from the <a href="https://amarolab.ucsd.edu">Amaro Lab</a> at UCSD.</figcaption></center>
</figure>
⚠️⚠️ The following section is an adapted excerpt from the introduction of the Cpptraj tutorial by Daniel R. Roe, available at [this link](http://ambermd.org/tutorials/analysis/tutorial1/).
## ***Root Mean Square Deviation (RMSD)*** overview
$RMSD$ measures the deviation of a target set of coordinates (i.e. a structure) from a reference set of coordinates, with $RMSD=0$ indicating a perfect overlap.
It follows that, for an MD trajectory, the lower the RMSD the fewer structural changes occur on the time scale studied.
RMSD is defined as:
<center>
<font size="5">
$RMSD = \sqrt{\frac{\sum_{i = 0}^N m_i(X_i - Y_i)^2}{M}}$
</font>
</center>
Where **N** is the number of atoms, $m_{i}$ is the mass of atom $i$, $X_i$ is the coordinate vector for target atom $i$, $Y_i$ is the coordinate vector for reference atom $i$, and $M$ is the total mass. If the $RMSD$ is not mass-weighted, for all $i$, $m_i = 1$, and $M = N$.
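To make the formula concrete, here is a minimal NumPy sketch of this (optionally mass-weighted) RMSD between two coordinate arrays. The function name and the toy coordinates are illustrative assumptions only, and no superposition is performed, exactly as in the bare formula above.

```
import numpy as np

def rmsd(X, Y, masses=None):
    """RMSD between two (N, 3) coordinate arrays, optionally mass-weighted."""
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    if masses is None:
        masses = np.ones(len(X))              # unweighted case: m_i = 1, M = N
    masses = np.asarray(masses, dtype=float)
    sq_dev = np.sum((X - Y) ** 2, axis=1)     # squared deviation per atom
    return np.sqrt(np.sum(masses * sq_dev) / masses.sum())

# Toy example: a 3-atom "structure" rigidly shifted by 1 Å along x
X = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]])
Y = X + np.array([1., 0., 0.])
print(rmsd(X, Y))   # -> 1.0
```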
When calculating $RMSD$ of a target to reference structure, there are two very important requirements as we will see soon in the practical part of this tutorial:
1. The number of atoms in the target must match the number of atoms in the reference.
2. The ordering of atoms in the target must match the ordering of atoms in the reference.
## ***Root Mean Square Fluctuation (RMSF)*** overview
As mentioned in the [MDanalysis user guide ](https://userguide.mdanalysis.org/stable/examples/analysis/alignment_and_rms/rmsf.html):
> The root-mean-square-fluctuation ($RMSF$) of a structure is **the time average of the RMSD**. It is calculated according to the below equation, where $x_i$
is the coordinates of particle $i$ and $⟨x_i⟩$ is the ensemble average position of $i$:
<center>
<font size="5">
$\rho_i = \sqrt{\left\langle \left(\mathbf{x}_i - \langle \mathbf{x}_i \rangle\right)^2 \right\rangle}$
</font>
</center>
> Where the $RMSD$ quantifies how much a structure diverges from a reference over time, the **$RMSF$ can reveal which areas of the system are the most mobile**. While $RMSD$ is frequently calculated to an initial state, the RMSF should be calculated to an average structure of the simulation. An area of the structure with high $RMSF$ values frequently diverges from the average, indicating high mobility. When $RMSF$ analysis is carried out on proteins, it is typically restricted to backbone or alpha-carbon atoms; these are more characteristic of conformational changes than the more flexible side-chains.
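As a rough illustration of this equation (a sketch only; the `positions` array below is made up, and in practice you would use the MDAnalysis `rms.RMSF` class shown later in this notebook):

```
import numpy as np

# Hypothetical trajectory of shape (n_frames, n_atoms, 3)
positions = np.random.rand(100, 5, 3)

mean_pos = positions.mean(axis=0)                      # ensemble average position per atom
sq_fluct = ((positions - mean_pos) ** 2).sum(axis=2)   # squared deviation per frame and atom
rmsf = np.sqrt(sq_fluct.mean(axis=0))                  # time average, then square root
print(rmsf.shape)                                      # one RMSF value per atom -> (5,)
```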
<figure>
<center>
<img src='https://www.frontiersin.org/files/Articles/329304/fphar-09-00492-HTML/image_m/fphar-09-00492-g002.jpg'/>
<figcaption>FIGURE 2. RMSD and RMSF plots of the structural changes occuring due to antagonist binding in the ligand binding pocket of androgen receptor, elucidated through MD simulations.<br>Sugunadevi S et al (2018)<i> Front Pharmacology 9, 492</i> </figcaption></center>
</figure>
### Distances
As you may recall, trajectory files store the positions of each individual atom, so calculating distances throughout an MD simulation is usually straightforward.
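For instance, a minimal MDAnalysis sketch that tracks the distance between the centers of mass of two selections over all frames could look like the following (the file names and selection strings are placeholders, not files used in this tutorial):

```
import numpy as np
import MDAnalysis as mda

u = mda.Universe("topology.gro", "trajectory.xtc")   # placeholder file names
sel1 = u.select_atoms("resid 10 and name CA")        # placeholder selections
sel2 = u.select_atoms("resid 50 and name CA")

distances = []
for ts in u.trajectory:
    r = sel1.center_of_mass() - sel2.center_of_mass()
    distances.append(np.linalg.norm(r))              # distance (Å) in this frame
distances = np.array(distances)
```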
### MDanalysis package overview
As defined in the [documentation](https://docs.mdanalysis.org/stable/documentation_pages/overview.html):
> **MDAnalysis** is a Python package that provides classes to access data in molecular dynamics trajectories. It is object oriented so it treats atoms, groups of atoms, trajectories, etc as different objects. Each object has a number of operations defined on itself (also known as “methods”) and also contains values describing the object (“attributes”). For example, a **AtomGroup** object has a **center_of_mass()** method that returns the center of mass of the group of atoms. It also contains an attribute called residues that lists all the residues that belong to the group. Using methods such as **select_atoms()** (which uses CHARMM-style atom Selection commands) one can create new objects (in this case, another **AtomGroup**).
**Example of MDAnalysis code**
A typical usage pattern is to iterate through a trajectory and analyze coordinates for every frame. In the following example the end-to-end distance of a protein and the radius of gyration of the backbone atoms are calculated:
```
#!pip3 install --upgrade MDAnalysis
#!pip install --upgrade MDAnalysisTests
import MDAnalysis
from MDAnalysis.tests.datafiles import PSF,DCD # test trajectory
import numpy.linalg
u = MDAnalysis.Universe(PSF,DCD) # always start with a Universe
nterm = u.select_atoms('segid 4AKE and name N')[0] # can access structure via segid (s4AKE) and atom name
cterm = u.select_atoms('segid 4AKE and name C')[-1] # ... takes the last atom named 'C'
bb = u.select_atoms('protein and backbone') # a selection (a AtomGroup)
for ts in u.trajectory: # iterate through all frames
    r = cterm.position - nterm.position  # end-to-end vector from atom positions
    d = numpy.linalg.norm(r)  # end-to-end distance
    rgyr = bb.radius_of_gyration()  # method of an AtomGroup; updates with each frame
    print("frame = %d: d = %f Angstroem, Rgyr = %f Angstroem" % (ts.frame, d, rgyr))
```
## Basic concepts of MD analysis
1. Universes and atom groups
2. Selections
**Universe and AtomGroup**
MDAnalysis is object oriented. Molecular systems consist of Atom objects (instances of the class MDAnalysis.core.groups.Atom), which are grouped in AtomGroup instances. You build the AtomGroup of your system by loading a topology (list of atoms and possibly their connectivity) together with a trajectory (coordinate information) into the central data structure, the Universe object:
```
u = MDAnalysis.Universe(PSF, DCD)
print(u)
<Universe with 3341 atoms>
```
**Selections**
MDAnalysis comes with a fairly complete atom selection facility. Primarily, one uses the method select_atoms() of a Universe:
```
>>> CA = u.select_atoms("protein and name CA")
>>> CA
<AtomGroup with 214 atoms>
```
but really any AtomGroup has a select_atoms() method:
```
>>> acidic = CA.select_atoms("resname ASP or resname GLU")
>>> acidic
<AtomGroup with 35 atoms>
>>> list(acidic.residues)
[<Residue GLU, 22>,
<Residue ASP, 33>,
<Residue GLU, 44>,
...
<Residue GLU, 210>]
```
See also: all the selection keywords are described in the documentation.
Numerical ranges can be written as first-last (or equivalently first:last 1), where the range is inclusive. For example, get residues with residue IDs 5 to 100:
```
>>> u.select_atoms("resid 5-100")
<AtomGroup with 1439 atoms>
>>> u.select_atoms("resid 5-100").n_residues
96
```
Selections can be combined with boolean expressions. For example, to select the Cα atoms of all acidic residues [aspartic acid (“ASP”), glutamic acid (“GLU”), and histidines (named “HIS”, “HSD”, or “HSE”, depending on what force field is being used and what protonation state it is in)]:
```
>>> u.select_atoms("(resname ASP or resname GLU or resname HS*) and name CA")
<AtomGroup with 38 atoms>
```
We use `or` to group separate selections by residue name (keyword resname). First either ASP, GLU, or any histidines are selected (we use “stemming” HS* to match any residue name that starts with “HS”). Then only those atoms whose name is “CA” are taken from the first set by an and selection. For convenience, the or in the first part of the selection can be taken implicitly with the shortcut syntax
```
>>> u.select_atoms("resname ASP GLU HS* and name CA")
<AtomGroup with 38 atoms>
```
If you want to dig deeper into the selection syntax of MDanalysis you can read the full documentation [here](https://docs.mdanalysis.org/1.0.0/documentation_pages/selections.html)
#Experimental Aspects
For this tutorial we are going to use an MD trajectory of the DNA-binding domain of HIV-1 integrase. As you can see in the PDB [entry](https://www.rcsb.org/structure/1IHV), this structure was solved using NMR and was found to form a dimer in solution.
Here we are going to analyze 1000 frames of the integrase in its monomeric and dimeric states, compare their RMSD and RMSF, and measure distances.
<figure>
<center>
<img src='https://cdn.rcsb.org/images/structures/ih/1ihv/1ihv_chain-A.jpeg'/>
<figcaption>FIGURE 3. Cartoon representation of the structure of HIV integrase 1 (PDB 1IHV)</figcaption></center>
</figure>
#Part 0 Downloading and Installing the required software
## Installation
We must install the software required to perform this tutorial. Namely:
- **MDAnalysis** for analyzing the data in molecular dynamics trajectories.
- **py3Dmol** for visualization of the protein structure.
```
!pip3 install --upgrade MDAnalysis
# Import MDanalysis
import MDAnalysis as mda
#from MDAnalysis.tests.datafiles import PSF, DCD, DCD2
from MDAnalysis.analysis import gnm
import matplotlib.pyplot as plt
%matplotlib inline
#Installing py3Dmol using pip
!pip install py3Dmol
#Importing py3Dmol for safety
import py3Dmol
!wget http://www.rcsb.org/pdb/files/1IHV.pdb.gz
!gunzip 1IHV.pdb.gz
# We can visualize the dimeric conformation
view=py3Dmol.view()
view.addModel(open('1IHV.pdb', 'r').read(),'pdb')
#Zooming into all visualized structures
view.zoomTo()
#Here we set the background color as white
view.setBackgroundColor('white')
#Here we set the visualization style for chain B and C
view.setStyle({'cartoon': {'color':'purple'}})
#And we finally visualize the structures using the command below
view.show()
```
## Downloading MD trajectories
```
#Here we copy to our Colab instance the trajectory files for the monomer and dimer
!wget https://github.com/pb3lab/ibm3202/raw/master/files/md_files/1ihv_dimer_protPBC.xtc
!wget https://github.com/pb3lab/ibm3202/raw/master/files/md_files/1ihv_mon_protPBC.xtc
!wget https://github.com/pb3lab/ibm3202/raw/master/files/md_files/1ihv_mon_protPBC.gro
!wget https://github.com/pb3lab/ibm3202/raw/master/files/md_files/1ihv_dimer_protPBC.gro
```
#Part I – Calculating $RMSD$ and $RMSF$
## I.1 - RMSD
> **Note:** the following cells are adapted from the MDAnalysis tutorial available [here](https://userguide.mdanalysis.org/stable/examples/analysis/alignment_and_rms/aligning_trajectory_to_frame.html).
>
> © Copyright 2019-2020, Lily Wang, Irfan Alibay, Rocco Meli, Mieczyslaw Torchala, Yuxuan Zhuang, Richard J. Gowers, and Oliver Beckstein.
### I.1A - Calculating $RMSD$ against a reference frame
```
import MDAnalysis as mda
from MDAnalysis.analysis import align, rms
```
First we need to load our trajectory files onto MDanalysis. This is done by creating an instance of an **Universe** object.
```
#Here we create two Universes each containing the same trajectory of the monomeric trajectory,
#one called mobile and the other ref which will be used as reference
mobile = mda.Universe("/content/1ihv_mon_protPBC.gro", "/content/1ihv_mon_protPBC.xtc")
ref = mda.Universe("/content/1ihv_mon_protPBC.gro", "/content/1ihv_mon_protPBC.xtc")
```
While `align.alignto` aligns single structures, or a frame of a trajectory, `align.AlignTraj` efficiently aligns an entire trajectory to a reference.
We first check the $RMSD$ of our unaligned trajectory so we can compare results later. The code below sets the `mobile` trajectory to the last frame by indexing the last timestep, `ref` to the first frame by indexing the first timestep, and computes the root mean squared deviation between the $\alpha$-carbon positions.
```
mobile.trajectory[-1] # set mobile trajectory to last frame
ref.trajectory[0] # set reference trajectory to first frame
mobile_ca = mobile.select_atoms('name CA')
ref_ca = ref.select_atoms('name CA')
rms.rmsd(mobile_ca.positions, ref_ca.positions, superposition=False)
```
Now we can align the trajectory. We have already set ref to the first frame. In the cell below, we load the positions of the trajectory into memory so we can modify the trajectory in Python.
```
aligner = align.AlignTraj(mobile, ref, select='name CA', in_memory=True).run()
mobile.trajectory[-1] # set mobile trajectory to last frame
ref.trajectory[0] # set reference trajectory to first frame
mobile_ca = mobile.select_atoms('name CA')
ref_ca = ref.select_atoms('name CA')
rms.rmsd(mobile_ca.positions, ref_ca.positions, superposition=False)
```
**QUESTION❓:** How do the RMSD values before and after alignment compare?
### I.1B - RMSD of a Universe with multiple selections over time
It is more efficient to use the MDAnalysis.analysis.rms.RMSD class to calculate the $RMSD$ of an entire trajectory to a single reference point.
The rms.RMSD class first performs a rotational and translational alignment of the target trajectory to the reference universe at ref_frame, using the atoms in select to determine the transformation. Then, without further alignment, the RMSD of each group in the `groupselections` argument is calculated.
[Source](https://userguide.mdanalysis.org/stable/examples/analysis/alignment_and_rms/rmsd.html)
```
#Here we create two Universes each containing the same trajectory of the monomeric trajectory,
#one called mobile and the other ref which will be used as reference
monomer_mobile = mda.Universe("/content/1ihv_mon_protPBC.gro", "/content/1ihv_mon_protPBC.xtc")
monomer_ref = mda.Universe("/content/1ihv_mon_protPBC.gro", "/content/1ihv_mon_protPBC.xtc")
rms.rmsd(monomer_mobile.select_atoms('backbone').positions, # coordinates to align
monomer_ref.select_atoms('backbone').positions, # reference coordinates
center=True, # subtract the center of geometry
superposition=True) # superimpose coordinates
# Here we define two loop selections
Loop1 = 'backbone and resid 227-240'
Loop2 = 'backbone and resid 252-257'
#Here we calculate the RMSD
R_rmsd = rms.RMSD(mobile, # universe to align
ref, # reference universe or atomgroup
select='backbone', # group to superimpose and calculate RMSD
groupselections=[Loop1, Loop2], # groups for RMSD
ref_frame=0) # frame index of the reference
R_rmsd.run()
```
The data is saved in R_rmsd.results.rmsd as an array. We can check the dimensions of the array using the *shape* attribute.
```
R_rmsd.results.rmsd.shape
```
The variable `R_rmsd.results.rmsd` has a row for each timestep. The first two columns of each row are the frame index of the time step, and the time (which is guessed in trajectory formats without timesteps). The third column is $RMSD$ of the `select` argument. The last few columns are the $RMSD$ of the groups in `groupselections`.
#### Plotting the data
We can easily plot this data using the common data analysis package pandas. We turn the `R_rmsd.results.rmsd` array into a DataFrame and label each column below.
```
import pandas as pd
df_rmsd_mono = pd.DataFrame(R_rmsd.results.rmsd,
columns=['Frame', 'Time (ns)','Backbone','Loop1','Loop2'])
df_rmsd_mono
```
Here we use Plotly to easily create an interactive plot
```
import plotly.graph_objects as go
import plotly.express as px
fig = px.line(df_rmsd_mono, x="Frame", y="Backbone",
line_shape="spline", render_mode="svg",
labels={ "Backbone": "RMSD(Å)" })
fig.add_scatter(x=df_rmsd_mono["Frame"], y=df_rmsd_mono["Loop1"], name="Loop 1", showlegend=True )
fig.add_scatter(x=df_rmsd_mono["Frame"], y=df_rmsd_mono["Loop2"], name="Loop 2")
fig.add_scatter(x=df_rmsd_mono["Frame"], y=df_rmsd_mono["Backbone"], name="Backbone" )
fig.show()
```
**QUESTION:** What is the range (in angstroms) of the RMSD fluctuations?
### I.1C - Now it is your turn to calculate the RMSD of the dimer
```
#Here we create two Universes, each containing the same dimer trajectory,
#one called mobile and the other ref, which will be used as reference
dimer_mobile = mda.Universe("/content/1ihv_dimer_protPBC.gro", "/content/1ihv_dimer_protPBC.xtc")
dimer_ref = mda.Universe("/content/1ihv_dimer_protPBC.gro", "/content/1ihv_dimer_protPBC.xtc")
rms.rmsd(dimer_mobile.select_atoms('backbone').positions, # coordinates to align
dimer_ref.select_atoms('backbone').positions, # reference coordinates
center=True, # subtract the center of geometry
superposition=True) # superimpose coordinates
Loop1A = 'backbone and resid 227-240'
Loop2A = 'backbone and resid 252-257'
R_rmsd_dimer = rms.RMSD(dimer_mobile, # universe to align
dimer_ref, # reference universe or atomgroup
select='backbone', # group to superimpose and calculate RMSD
groupselections=[Loop1A, Loop2A], # groups for RMSD
ref_frame=0) # frame index of the reference
R_rmsd_dimer.run()
```
The data is saved in `R_rmsd.results.rmsd` as an array. We can check the dimensions of the array using the `shape` attribute.
```
R_rmsd_dimer.results.rmsd.shape
import pandas as pd
#Here we create the pandas dataframe from the R_rmsd_dimer.rmsd object
df_rmsd_dimer = pd.DataFrame(R_rmsd_dimer.results.rmsd,
columns=['Frame', 'Time (ns)','Backbone','Loop1A','Loop2A'])
df_rmsd_dimer
```
Let's plot the $RMSD$ over time for the monomer and the dimer
```
import plotly.express as px
fig = px.line(df_rmsd_mono, x="Frame", y="Backbone",
line_shape="spline", render_mode="svg",
labels={ "Backbone": "RMSD(Å)" })
fig.add_scatter(x=df_rmsd_mono["Frame"], y=df_rmsd_mono["Backbone"], name="Backbone Monomer" )
fig.add_scatter(x=df_rmsd_dimer["Frame"], y=df_rmsd_dimer["Backbone"], name="Backbone Dimer AVG" )
fig.show()
```
Now it is your turn to explore whether there are any changes in the RMSD of the loops over time (one possible solution is sketched after the hints below).
```
#Hints:
fig.add_scatter(x=df_rmsd_mono["Frame"], y=df_rmsd_mono["Loop1"], name="Loop 1", showlegend=True )
fig.add_scatter(x=df_rmsd_mono["Frame"], y=df_rmsd_mono["Loop2"], name="Loop 2")
```
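One possible solution (a sketch, assuming the `df_rmsd_mono` and `df_rmsd_dimer` DataFrames built above, with their `Loop1`/`Loop2` and `Loop1A`/`Loop2A` columns):

```
import plotly.express as px

fig = px.line(df_rmsd_dimer, x="Frame", y="Backbone",
              line_shape="spline", render_mode="svg",
              labels={"Backbone": "RMSD(Å)"})
fig.add_scatter(x=df_rmsd_mono["Frame"], y=df_rmsd_mono["Loop1"], name="Loop 1 (monomer)")
fig.add_scatter(x=df_rmsd_mono["Frame"], y=df_rmsd_mono["Loop2"], name="Loop 2 (monomer)")
fig.add_scatter(x=df_rmsd_dimer["Frame"], y=df_rmsd_dimer["Loop1A"], name="Loop 1 (dimer)")
fig.add_scatter(x=df_rmsd_dimer["Frame"], y=df_rmsd_dimer["Loop2A"], name="Loop 2 (dimer)")
fig.show()
```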
##I.2 - RMSF
Now, we want to assess the average atomic fluctuations during the MD trajectories for both the monomeric and dimeric states of the integrase.
```
#First we need to make sure that our universes are properly aligned
aligner = align.AlignTraj(monomer_mobile, monomer_ref, select='name CA', in_memory=True).run()
aligner = align.AlignTraj(dimer_mobile, dimer_ref, select='name CA', in_memory=True).run()
#Here we create a selection of the previously aligned trajectory
c_alphas_monomer = monomer_mobile.select_atoms('protein and name CA')
c_alphas_dimer = dimer_mobile.select_atoms('protein and name CA')
R_rmsf_mono = rms.RMSF(c_alphas_monomer).run()
R_rmsf_dimer = rms.RMSF(c_alphas_dimer).run()
rms.RMSF
c_alphas_dimer.resids
import pandas as pd
#Here we create a pandas dataframe
df_rmsf_mono = pd.DataFrame(R_rmsf_mono.results.rmsf,
columns=['BackboneRMSF'])
df_rmsf_mono = df_rmsf_mono.assign(Residue = c_alphas_monomer.resids)
df_rmsf_dimer = pd.DataFrame(R_rmsf_dimer.results.rmsf,
columns=['BackboneRMSF'])
df_rmsf_dimer = df_rmsf_dimer.assign(Residue = c_alphas_dimer.resids)
df_rmsf_dimer_A = df_rmsf_dimer.head(52)
df_rmsf_dimer_B = df_rmsf_dimer.tail(52)
import plotly.express as px
fig = px.line(df_rmsf_mono, x="Residue", y="BackboneRMSF",
line_shape="linear", render_mode="svg",
labels={ "BackboneRMSF": "RMSF(Å)" , "Residue":"Residue Number"}, color=None)
fig.add_scatter(x=df_rmsf_dimer_A["Residue"], y=df_rmsf_dimer_A["BackboneRMSF"], name="Dimer Chain A", line_shape="linear")
fig.add_scatter(x=df_rmsf_dimer_B["Residue"], y=df_rmsf_dimer_B["BackboneRMSF"], name="Dimer Chain B", line_shape="linear")
fig.add_scatter(x=df_rmsf_mono["Residue"], y=df_rmsf_mono["BackboneRMSF"], name="Monomer", line_shape="linear")
fig.show()
```
**QUESTIONS❓**
1. Is there any difference in local fluctuations between the monomeric and dimeric states?
2. Which regions exhibit high atomic fluctuations throughout the trajectory?
3. What are the structural features of these regions?
```
#Visualize using Py3Dmol the monomeric state
```
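A minimal sketch of one way to do this, assuming the aligned `monomer_mobile` Universe from above (the output PDB file name is arbitrary): write a single frame out with MDAnalysis and display it with py3Dmol.

```
import py3Dmol

# Write the protein atoms of the first frame to a PDB file (arbitrary name)
monomer_mobile.trajectory[0]
monomer_mobile.select_atoms('protein').write('monomer_frame0.pdb')

view = py3Dmol.view()
view.addModel(open('monomer_frame0.pdb', 'r').read(), 'pdb')
view.setStyle({'cartoon': {'color': 'spectrum'}})
view.setBackgroundColor('white')
view.zoomTo()
view.show()
```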
## I.3 - Pairwise RMSD
```
#Import modules
import MDAnalysis as mda
from MDAnalysis.analysis import diffusionmap, align
```
Pairwise RMSDs are an effective way to quickly view similarities and differences in conformations (as measured by RMSD) across an entire trajectory, and not only in comparison to just one reference frame.
We are going to use the previously aligned trajectories **monomer_mobile** and **dimer_mobile**
We can then calculate a pairwise $RMSD$ matrix with the `diffusionmap.DistanceMatrix` class, using the default rms.rmsd metric.
```
#Monomer distance matrix calculation
matrix1 = diffusionmap.DistanceMatrix(monomer_mobile, select='name CA').run()
#Dimer distance matrix calculation
matrix2 = diffusionmap.DistanceMatrix(dimer_mobile, select='name CA').run()
```
The results array is in `matrix.results.dist_matrix` as a square array with the shape (n_frames, n_frames).
```
print(matrix1.results.dist_matrix.shape)
print(matrix2.results.dist_matrix.shape)
```
We can use the common plotting package matplotlib to create a heatmap from this array.
```
#Here we plot the Monomer RMSD matrix
plt.imshow(matrix1.results.dist_matrix, cmap='viridis')
plt.xlabel('Frame')
plt.ylabel('Frame')
plt.colorbar(label='RMSD (Angstrom)')
plt.show()
#Here we plot the Dimer RMSD matrix in a separate figure
plt.imshow(matrix2.results.dist_matrix, cmap='viridis')
plt.xlabel('Frame')
plt.ylabel('Frame')
plt.colorbar(label='RMSD (Angstrom)')
plt.show()
```
#Appendix A - Normal Mode and Principal Component Analysis
## I - Normal Mode analysis long range contacts
```
monomer_mobile = mda.Universe("/content/1ihv_mon_protPBC.gro", "/content/1ihv_mon_protPBC.xtc")
dimer_mobile = mda.Universe("/content/1ihv_dimer_protPBC.gro", "/content/1ihv_dimer_protPBC.xtc")
nma1 = gnm.GNMAnalysis(monomer_mobile, select='protein and name CA', cutoff=7.0)
nma1.run()
nma2 = gnm.GNMAnalysis(dimer_mobile,
select='protein and name CA',
cutoff=7.0)
nma2.run()
len(nma2.results)
%matplotlib inline
#sns.set_context('notebook')
%config InlineBackend.figure_format = 'retina'
## We plot the distribution of eigenvalues; the dominant conformational state corresponds to the main peak of the distribution.
eigenvalues1 = [res[1] for res in nma1.results]
eigenvalues2 = [res[1] for res in nma2.results]
histfig, histax = plt.subplots(nrows=2, sharex=True, sharey=True)
histax[0].hist(eigenvalues1)
histax[1].hist(eigenvalues2)
histax[1].set_xlabel('Eigenvalue')
histax[0].set_ylabel('Frequency (Monomer)')
histax[1].set_ylabel('Frequency (Dimer)');
plt.show()
import pandas as pd
import numpy as np
##Create Panda Dataframe Files
eu = pd.DataFrame({'Monomer': eigenvalues1, 'Dimer': eigenvalues2})
#Save Panda DataFrame
eu.to_csv('./DF_1.csv')
#inspect Dataframe
eu.head()
time1 = [res[0] for res in nma1.results]
time2 = [res[0] for res in nma2.results]
linefig, lineax = plt.subplots()
plt.plot(time1, eigenvalues1, label='Monomer')
plt.plot(time2, eigenvalues2, label='Dimer')
lineax.set_xlabel('Time (ps)')
lineax.set_ylabel('Eigenvalue')
plt.legend();
plt.show()
```
## II -PCA
```
#Import things
import MDAnalysis as mda
from MDAnalysis.analysis import diffusionmap, align
import matplotlib.pyplot as plt
%matplotlib inline
```
**WARNING!!!**
For best results, your trajectory should be aligned on your atom group selection before you run the analysis. Setting align=True will not give correct results in the PCA.
```
# Align the trajectory
aligner1 = align.AlignTraj(monomer_mobile, monomer_mobile, select='backbone', in_memory=True).run()
aligner2 = align.AlignTraj(dimer_mobile, dimer_mobile, select='backbone', in_memory=True).run()
```
###Overview of the method
**Principal component analysis (PCA)** is a statistical technique that decomposes a system of observations into linearly uncorrelated variables called principal components. These components are ordered so that the first principal component accounts for the largest variance in the data, and each following component accounts for lower and lower variance. PCA is often applied to molecular dynamics trajectories to **extract the large-scale conformational motions or “essential dynamics” of a protein**. The frame-by-frame conformational fluctuation can be considered a linear combination of the essential dynamics yielded by the PCA.
In MDAnalysis, the method is as follows:
> 1. Optionally align each frame in your trajectory to the first frame.
> 2. Construct a 3N x 3N covariance matrix for the N atoms in your trajectory. Optionally, you can provide a mean; otherwise the covariance is computed with respect to the averaged structure over the trajectory.
> 3. Diagonalise the covariance matrix. The eigenvectors are the principal components, and their eigenvalues are the associated variance.
> 4. Sort the eigenvalues so that the principal components are ordered by variance.
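To illustrate the idea behind these steps (this is not the MDAnalysis implementation, just a bare-bones NumPy sketch on a hypothetical coordinate array):

```
import numpy as np

# Hypothetical trajectory of shape (n_frames, n_atoms, 3)
coords = np.random.rand(200, 10, 3)
n_frames = coords.shape[0]

X = coords.reshape(n_frames, -1)         # flatten each frame to a 3N-dimensional vector
X = X - X.mean(axis=0)                   # subtract the mean (average) structure
cov = np.cov(X, rowvar=False)            # 3N x 3N covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # diagonalise the symmetric covariance matrix
order = np.argsort(eigvals)[::-1]        # sort components by variance, largest first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
print(eigvals[:3] / eigvals.sum())       # fraction of total variance per component
```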
```
import MDAnalysis as mda
import MDAnalysis.analysis.pca as pca
from MDAnalysis.coordinates.base import Timestep
import numpy as np
import os
import glob
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.cm
import matplotlib.ticker as ticker
%matplotlib inline
```
### Call the PCA function
You can choose how many principal components to save from the analysis with n_components. The default value is None, which saves all of them. You can also pass a mean reference structure to be used in calculating the covariance matrix. With the default value of None, the covariance uses the mean coordinates of the trajectory.
```
pcu1 = pca.PCA(monomer_mobile, select='protein and backbone',
align=False, mean=None,
n_components=None).run()
pcu2 = pca.PCA(dimer_mobile, select='protein and backbone',
align=False, mean=None,
n_components=None).run()
```
### The principal components are saved in pc.p_components.
If you kept all the components, you should have an array of shape (natoms×3,natoms×3)
```
backbone1 = monomer_mobile.select_atoms('protein and backbone')
n_bb1 = len(backbone1)
print('There are {} backbone atoms in the analysis'.format(n_bb1))
print(pcu1.p_components.shape)
backbone2 = dimer_mobile.select_atoms('protein and backbone')
n_bb2 = len(backbone2)
print('There are {} backbone atoms in the analysis'.format(n_bb2))
print(pcu2.p_components.shape)
```
### Get the variance of the first component
```
pcu1.variance[0],pcu2.variance[0]
```
This variance is somewhat meaningless by itself. It is much more intuitive to consider the variance of a principal component as a percentage of the total variance in the data. MDAnalysis also tracks the percentage cumulative variance in pc.cumulated_variance. As shown below, the first principal component contains 90.3% of the total trajectory variance. The first three components combined account for 96.4% of the total variance.
```
print(pcu1.cumulated_variance[0])
print(pcu1.cumulated_variance[2])
print(pcu2.cumulated_variance[0])
print(pcu2.cumulated_variance[2])
plt.plot(pcu1.cumulated_variance[:10])
plt.xlabel('Principal component')
plt.ylabel('Cumulative variance');
plt.plot(pcu2.cumulated_variance[:10])
plt.xlabel('Principal component')
plt.ylabel('Cumulative variance');
```
### Visualising projections into a reduced dimensional space
The pc.transform() method transforms a given atom group into weights $w_i$ over each principal component $i$:

$w_i(t) = (\mathbf{r}(t) - \bar{\mathbf{r}}) \cdot \mathbf{u}_i$

where $\mathbf{r}(t)$ are the atom group coordinates at time $t$, $\bar{\mathbf{r}}$ are the mean coordinates used in the PCA, and $\mathbf{u}_i$ is the $i$-th principal component eigenvector.
While the given atom group must have the same number of atoms that the principal components were calculated over, it does not have to be the same group.
Again, passing n_components=None will transform your atom group over every component. Below, we limit the output to projections over 5 principal components only.
```
transformed1 = pcu1.transform(backbone1, n_components=5)
transformed2 = pcu2.transform(backbone2, n_components=5)
transformed1.shape, transformed2.shape
```
The output has the shape (n_frames, n_components). For easier analysis and plotting we can turn the array into a DataFrame.
```
df1 = pd.DataFrame(transformed1,
columns=['PC{}'.format(i+1) for i in range(5)])
df1['Time (ns)'] = df1.index * monomer_mobile.trajectory.dt
df1.head()
df2 = pd.DataFrame(transformed2,
columns=['PC{}'.format(i+1) for i in range(5)])
df2['Time (ns)'] = df2.index * dimer_mobile.trajectory.dt
df2.head()
```
There are several ways we can visualise the data. Using the Seaborn’s PairGrid tool is the quickest and easiest way, if you have seaborn already installed.
```
import seaborn as sns
g1 = sns.PairGrid(df1, hue='Time (ns)',
palette=sns.color_palette('Oranges_d',
n_colors=len(df1)))
g1.map(plt.scatter, marker='.')
g2 = sns.PairGrid(df2, hue='Time (ns)',
palette=sns.color_palette('Oranges_d',
n_colors=len(df2)))
g2.map(plt.scatter, marker='.')
```
Another way to investigate the essential motions of the trajectory is to project the original trajectory onto each of the principal components, to visualise the motion along that principal component. The product of the weights $w_i(t)$ for principal component $i$ with the eigenvector $\mathbf{u}_i$ describes fluctuations around the mean on that axis, so the projected trajectory $\mathbf{r}_i(t)$ is simply these fluctuations added onto the mean positions $\bar{\mathbf{r}}$:

$\mathbf{r}_i(t) = w_i(t) \times \mathbf{u}_i + \bar{\mathbf{r}}$

Below, we generate the projected coordinates of the first principal component. The mean positions are stored at pc.mean.
```
pc1u1 = pcu1.p_components[:, 0]
trans1u1 = transformed1[:, 0]
projectedu1 = np.outer(trans1u1, pc1u1)
coordinatesu1 = projectedu1.reshape(len(trans1u1), -1, 3) + pcu1.mean
pc1u2 = pcu2.p_components[:, 0]
trans1u2 = transformed2[:, 0]
projectedu2 = np.outer(trans1u2, pc1u2)
coordinatesu2 = projectedu2.reshape(len(trans1u2), -1, 3) + pcu2.mean  # add back the mean positions (as for the monomer above)
```
We can create a new universe from this to visualise the movement over the first principal component.
```
!pip3 install nglview
import nglview as nv
proj1 = mda.Merge(backbone2)
proj1.load_new(coordinatesu2, order="fac")
view = nv.show_mdanalysis(proj1.atoms)
view
from google.colab import output
output.enable_custom_widget_manager()
from google.colab import output
output.disable_custom_widget_manager()
```
If you have nglview installed, you can view the trajectory in the notebook. Otherwise, you can write the trajectory out to a file and use another program such as VMD. Below, we create a movie of the component.
```
!pip install moviepy==0.2.2.11
!pip install imageio==1.6
from nglview.contrib.movie import MovieMaker
movie = MovieMaker(view, output='pc1u1.gif', in_memory=True)
movie.make()
```
```
from nilearn.image import resample_to_img, smooth_img
from nilearn.plotting import plot_stat_map
import numpy as np
import nibabel as nb
import pylab as plt
from scipy.ndimage.filters import maximum_filter
from skimage.feature import peak_local_max
%matplotlib inline
import pyneurovault
from pyneurovault import api
# Get a collection
collection = api.get_collections(pks=1804)
plt.imshow(new_nii.get_data()[:,:,80])
slice = new_nii.get_data()[:,:,80]
slice[slice < 3] = 0
plt.imshow(slice)
from glob import glob
import os
maps = glob("D:/data/hcp_statmaps/*.nii.gz")
vetted = [v.split("_")[-1][:-4] for v in glob("D:/drive/workspace/atlas_analysis/vetted_thumbnails/*")]
maps = [map for map in maps if os.path.split(map)[-1][:-7] in vetted]
maps
os.path.split(map)[-1][:-7]
import png
from scipy.misc import imsave, imread
from scipy.signal import resample
imread("D:/data/pix2pix-hcp/train/B/100307_EMOTION.png")[:,20]
(((slice[:,20]+10)/20)*np.iinfo(np.uint16).max).astype(np.uint16)
import numpy as np
import tensorflow as tf
def np_to_tfrecords(X, Y, file_path_prefix, verbose=True):
"""
Converts a Numpy array (or two Numpy arrays) into a tfrecord file.
For supervised learning, feed training inputs to X and training labels to Y.
For unsupervised learning, only feed training inputs to X, and feed None to Y.
The length of the first dimensions of X and Y should be the number of samples.
Parameters
----------
X : numpy.ndarray of rank 2
Numpy array for training inputs. Its dtype should be float32, float64, or int64.
If X has a higher rank, it should be reshaped before being fed to this function.
Y : numpy.ndarray of rank 2 or None
Numpy array for training labels. Its dtype should be float32, float64, or int64.
None if there is no label array.
file_path_prefix : str
The path and name of the resulting tfrecord file to be generated, without '.tfrecords'
verbose : bool
If true, progress is reported.
Raises
------
ValueError
If input type is not float (64 or 32) or int.
"""
def _dtype_feature(ndarray):
"""match appropriate tf.train.Feature class with dtype of ndarray. """
assert isinstance(ndarray, np.ndarray)
dtype_ = ndarray.dtype
if dtype_ == np.float64 or dtype_ == np.float32:
return lambda array: tf.train.Feature(float_list=tf.train.FloatList(value=array))
elif dtype_ == np.int64:
return lambda array: tf.train.Feature(int64_list=tf.train.Int64List(value=array))
else:
raise ValueError("The input should be numpy ndarray. \
Instead got {}".format(ndarray.dtype))
assert isinstance(X, np.ndarray)
assert len(X.shape) == 2 # If X has a higher rank,
    # it should be reshaped before being fed to this function.
assert isinstance(Y, np.ndarray) or Y is None
# load appropriate tf.train.Feature class depending on dtype
dtype_feature_x = _dtype_feature(X)
if Y is not None:
assert X.shape[0] == Y.shape[0]
assert len(Y.shape) == 2
dtype_feature_y = _dtype_feature(Y)
# Generate tfrecord writer
result_tf_file = file_path_prefix + '.tfrecords'
writer = tf.python_io.TFRecordWriter(result_tf_file)
if verbose:
print("Serializing {:d} examples into {}".format(X.shape[0], result_tf_file))
# iterate over each sample,
# and serialize it as ProtoBuf.
for idx in range(X.shape[0]):
x = X[idx]
if Y is not None:
y = Y[idx]
d_feature = {}
d_feature['X'] = dtype_feature_x(x)
if Y is not None:
d_feature['Y'] = dtype_feature_y(y)
features = tf.train.Features(feature=d_feature)
example = tf.train.Example(features=features)
serialized = example.SerializeToString()
writer.write(serialized)
if verbose:
print("Writing {} done!".format(result_tf_file))
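# Example usage of np_to_tfrecords (hypothetical shapes, for illustration only):
# X_demo = np.random.rand(8, 16).astype(np.float32)   # 8 samples, 16 features
# np_to_tfrecords(X_demo, None, "demo_unsupervised")   # writes demo_unsupervised.tfrecords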
from glob import glob
import os
maps = glob("D:/data/hcp_statmaps/*.nii.gz")
from scipy.ndimage.interpolation import zoom
def _bytes_feature(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _int64_feature(value):
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
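# For each statistical map: smooth it, downsample to a 64x64x64 grid, rescale
# negative and positive values separately, detect local maxima (peaks), and
# serialize the (peaks, data) volume pair into one TFRecord file per map.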
for map in maps:
old_nii = nb.load(map)
new_nii = smooth_img(old_nii, 6)
data = new_nii.get_data()
data = zoom(data, (64.0/data.shape[0],64.0/data.shape[1],64.0/data.shape[2]), order=0)
zeros = data == 0
#slice_mask = imresize(slice == 0, (512,512), interp="nearest")
#slice = imresize(slice, (512,512), interp="nearest")
#slice[slice_mask == 255] = 0
#slice = np.lib.pad(slice, [(slice.shape[1]-slice.shape[0]+292,0), (292,0)], 'constant', constant_values=(0, 0))
#plt.figure(figsize=(12,9))
#plt.subplot(2,2,1)
#zeros = np.logical_or(slice == slice[0,0], np.isnan(slice))
#slice = (slice - slice.min())/(slice.max()-slice.min())
#slice[zeros] = 0
data[data < 0] = (data[data < 0]/(-data[data < 0].min()))
data[data > 0] = (data[data > 0]/data[data > 0].max())
data = (((data+0.5)/1.0)).astype(np.float32)
peaks = peak_local_max(data, indices=False, min_distance=5, threshold_rel=0.85).astype(np.float32)
if peaks.sum():
#print(slice[:,20])
#plt.imshow(slice)
#plt.colorbar()
#imsave("D:/data/pix2pix-hcp/train/B/" + os.path.split(map)[-1][:-7] + ".png", slice)
#plt.subplot(2,2,2)
#plt.imshow(peaks)
#imsave("D:/data/pix2pix-hcp/train/A/" + os.path.split(map)[-1][:-7] + ".png", peaks)
# Create a feature
writer = tf.python_io.TFRecordWriter("D:/data/pix2pix-hcp/train/combined3d_tf/" + os.path.split(map)[-1][:-7] +".tfrecords")
example = tf.train.Example(features=tf.train.Features(feature={
'x': _int64_feature(data.shape[0]),
'y': _int64_feature(data.shape[1]),
'z': _int64_feature(data.shape[2]),
'imageA_raw': _bytes_feature(peaks.tostring()),
'imageB_raw': _bytes_feature(data.tostring())}))
# Serialize to string and write on the file
writer.write(example.SerializeToString())
writer.close()
data = new_nii.get_data()
peaks.astype(np.int16)
plt.imshow(zeros)
plt.imshow(slice_mask)
resample_to_img?
png.from_array?
plt.imshow(peaks)
slice.shape
512-219
slice.shape
from scipy.misc import imresize
s = imresize(slice, (512,512), interp="nearest")
s
```
# Forecasting with an RNN
## Setup
```
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
keras = tf.keras
def plot_series(time, series, format="-", start=0, end=None, label=None):
plt.plot(time[start:end], series[start:end], format, label=label)
plt.xlabel("Time")
plt.ylabel("Value")
if label:
plt.legend(fontsize=14)
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def white_noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
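# window_dataset: slice the series into overlapping windows of length
# window_size + 1, shuffle them, and split each window into
# (first window_size values, last value) input/target pairs.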
def window_dataset(series, window_size, batch_size=32,
shuffle_buffer=1000):
dataset = tf.data.Dataset.from_tensor_slices(series)
dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
dataset = dataset.shuffle(shuffle_buffer)
dataset = dataset.map(lambda window: (window[:-1], window[-1]))
dataset = dataset.batch(batch_size).prefetch(1)
return dataset
def model_forecast(model, series, window_size):
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size))
ds = ds.batch(32).prefetch(1)
forecast = model.predict(ds)
return forecast
time = np.arange(4 * 365 + 1)
slope = 0.05
baseline = 10
amplitude = 40
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
noise_level = 5
noise = white_noise(time, noise_level, seed=42)
series += noise
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
```
## Simple RNN Forecasting
```
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
window_size = 30
train_set = window_dataset(x_train, window_size, batch_size=128)
model = keras.models.Sequential([
keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
input_shape=[None]),
keras.layers.SimpleRNN(100, return_sequences=True),
keras.layers.SimpleRNN(100),
keras.layers.Dense(1),
keras.layers.Lambda(lambda x: x * 200.0)
])
lr_schedule = keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-7 * 10**(epoch / 20))
optimizer = keras.optimizers.SGD(lr=1e-7, momentum=0.9)
model.compile(loss=keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set, epochs=100, callbacks=[lr_schedule])
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-7, 1e-4, 0, 30])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
window_size = 30
train_set = window_dataset(x_train, window_size, batch_size=128)
valid_set = window_dataset(x_valid, window_size, batch_size=128)
model = keras.models.Sequential([
keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
input_shape=[None]),
keras.layers.SimpleRNN(100, return_sequences=True),
keras.layers.SimpleRNN(100),
keras.layers.Dense(1),
keras.layers.Lambda(lambda x: x * 200.0)
])
optimizer = keras.optimizers.SGD(lr=1.5e-6, momentum=0.9)
model.compile(loss=keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
early_stopping = keras.callbacks.EarlyStopping(patience=50)
model_checkpoint = keras.callbacks.ModelCheckpoint(
"my_checkpoint", save_best_only=True)
model.fit(train_set, epochs=500,
validation_data=valid_set,
callbacks=[early_stopping, model_checkpoint])
model = keras.models.load_model("my_checkpoint")
rnn_forecast = model_forecast(
model,
series[split_time - window_size:-1],
window_size)[:, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, rnn_forecast)
keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy()
```
## Sequence-to-Sequence Forecasting
```
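# seq2seq_window_dataset: like window_dataset above, but the target is the
# input window shifted by one step, so the model is trained to predict the
# next value at every time step of the sequence.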
def seq2seq_window_dataset(series, window_size, batch_size=32,
shuffle_buffer=1000):
series = tf.expand_dims(series, axis=-1)
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size + 1))
ds = ds.shuffle(shuffle_buffer)
ds = ds.map(lambda w: (w[:-1], w[1:]))
return ds.batch(batch_size).prefetch(1)
for X_batch, Y_batch in seq2seq_window_dataset(tf.range(10), 3,
batch_size=1):
print("X:", X_batch.numpy())
print("Y:", Y_batch.numpy())
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
window_size = 30
train_set = seq2seq_window_dataset(x_train, window_size,
batch_size=128)
model = keras.models.Sequential([
keras.layers.SimpleRNN(100, return_sequences=True,
input_shape=[None, 1]),
keras.layers.SimpleRNN(100, return_sequences=True),
keras.layers.Dense(1),
keras.layers.Lambda(lambda x: x * 200)
])
lr_schedule = keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-7 * 10**(epoch / 30))
optimizer = keras.optimizers.SGD(lr=1e-7, momentum=0.9)
model.compile(loss=keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set, epochs=100, callbacks=[lr_schedule])
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-7, 1e-4, 0, 30])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
window_size = 30
train_set = seq2seq_window_dataset(x_train, window_size,
batch_size=128)
valid_set = seq2seq_window_dataset(x_valid, window_size,
batch_size=128)
model = keras.models.Sequential([
keras.layers.SimpleRNN(100, return_sequences=True,
input_shape=[None, 1]),
keras.layers.SimpleRNN(100, return_sequences=True),
keras.layers.Dense(1),
keras.layers.Lambda(lambda x: x * 200.0)
])
optimizer = keras.optimizers.SGD(lr=1e-6, momentum=0.9)
model.compile(loss=keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
early_stopping = keras.callbacks.EarlyStopping(patience=10)
model.fit(train_set, epochs=500,
validation_data=valid_set,
callbacks=[early_stopping])
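# model_forecast returns one predicted sequence per window; keep only the
# last time step of each window and align the result with the validation period.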
rnn_forecast = model_forecast(model, series[..., np.newaxis], window_size)
rnn_forecast = rnn_forecast[split_time - window_size:-1, -1, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, rnn_forecast)
keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy()
```
# KMeans Clustering
This application lets users cluster data stored on [Geoscience ANALYST](https://mirageoscience.com/mining-industry-software/geoscience-analyst) objects using the [Scikit-Learn.KMeans](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html?highlight=kmeans#sklearn.cluster.KMeans)
clustering algorithm. Leveraging [Plotly](https://plotly.com/) visualization tools, users are able to assess the clustering
results using histogram, box, scatter, inertia and cross-correlation plots.
<img align="right" width="50%" src="./images/clustering_app.gif">
New user? Visit the [Getting Started](../installation.rst) page.
## Application
The following sections provide details on the different parameters controlling the application. Interactive widgets shown below are for demonstration purposes only.
```
from geoapps.processing import Clustering
app = Clustering(h5file=r"../../../assets/FlinFlon.geoh5")
app.main
```
## Project Selection
Select and connect to an existing **geoh5** project file containing data.
```
app.project_panel
```
See the [Project Panel](base_application.ipynb#Project-Panel) page for more details.
## Object and Data Selection
List of objects available from the target `geoh5` project. Only the selected data channels are used in the clustering routine.
```
app.data_panel
```
## Clustering
Select the number of clusters (groups) desired.
```
app.n_clusters
```
By default, the application will run KMeans for 2, 4, 8, 16 and 32 groups in order to draw a meaningful [Inertia Curve](#Inertia).
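The same sweep can be reproduced outside the application with scikit-learn directly. A minimal sketch on stand-in data (the array `X` and its shape are assumptions, not data from the app):

```
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for the selected, normalized data channels (n_samples x n_channels)
X = np.random.rand(200, 4)

inertias = {}
for k in [2, 4, 8, 16, 32]:
    km = KMeans(n_clusters=k, random_state=0).fit(X)
    inertias[k] = km.inertia_  # sum of squared distances to the assigned centers
print(inertias)
```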
### Refresh
Re-run the clustering after changing the list of input data or [Population Downsampling](#Population-Downsampling).
```
app.refresh_clusters
```
## Clusters Color
Assign a specific color to a given cluster group.
```
app.clusters_panel
```
## Analytics
Plotting options to analyze the selected data and KMeans clusters.
```
app.plotting_options
```
### Crossplot
See the [Scatter Plot](#Scatter-Plot) documentation for details.
```
app.figure
```
By default, the color values displayed correspond to the cluster groups.
### Statistics
Display statistics for the chosen data channels using [pandas.DataFrame.describe](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.describe.html).
```
app.dataframe.describe(percentiles=None, include=None, exclude=None)
```
### Confusion Matrix
Display the confusion matrix for the chosen data channels.
```
app.plotting_options.value = "Confusion Matrix" # Emulate button click
app.heatmap_fig
```
### Histograms
Display histograms for each data field.
```
app.plotting_options.value = "Histogram" # Emulate button click
app.histo_plots["Al2O3"]
```
By default, all fields are normalized between [0, 1].
#### Scale
Option to increase the weight of a specific data field.
```
app.scalings["Al2O3"]
```
#### Upper Bound
Upper bound (maximum) value used for the KMeans clustering.
```
app.upper_bounds["Al2O3"]
```
#### Lower Bound
Lower bound (minimum) value used for the KMeans clustering.
```
app.lower_bounds["Al2O3"]
```
### Inertia
Display the clusters inertia, or sum squares of distances between each sample
to the center of its cluster group. The optimal number of clusters is
generally thought to be at the point of maximum curvature.
```
app.plotting_options.value = "Inertia" # Emulate button click
app.inertia_plot
```
### Boxplot
Display boxplots describing the range of values within each cluster for a chosen data field.
```
app.plotting_options.value = "Boxplot" # Emulate button click
app.box_plots["Al2O3"]
```
## Output panel
Clusters can be exported directly to the target object by clicking on the export button. This can yield two possible outcomes:
- If no cluster data with the same name exists on the object, a new data field is created.
- If a data field with the same name is found on the target object, its values are replaced. This allows users to quickly experiment with different numbers of clusters without having to delete previous trials.
```
app.output_panel
```
### (Optional) GA Pro - Live link
See the [Output Panel](base_application.ipynb#Output-Panel) section of the base application.
```
app.channels_plot_options.value = "V"
app.box_plots["V"].write_image("images/cluster_thumbnail.png")
app.plotting_options.value = "Crossplot" # Emulate button click
```
Need help? Contact us at support@mirageoscience.com
# DataPipe Typing System
The DataPipe typing system is introduced to make the graph of DataPipes more reliable and to provide type inference for users. The typing system gives users the flexibility to decide at which level(s) to enable type enforcement, balanced against the risk of false-positive errors.
```
from torch.utils.data import IterDataPipe
from typing import Any, Iterator, List, Tuple, TypeVar, Set, Union
T_co = TypeVar('T_co', covariant=True)
# Hide traceback of Error
import functools
ipython = get_ipython()
ipython.showtraceback = functools.partial(ipython.showtraceback, exception_only=True)
```
## Compile-time
Compile-time typing is enabled by default for now, and it generates a `type` attribute for each DataPipe. If no type hint is specified, the DataPipe is set to the default type `Any`.
### Invalid Typing
- Return type hint of `__iter__` is not `Iterator`
```
class InvalidDP1(IterDataPipe[int]):
def __iter__(self) -> str:
pass
```
- Return type hint of `__iter__` is neither the declared type hint nor a subtype of it
```
class InvalidDP2(IterDataPipe[int]):
def __iter__(self) -> Iterator[str]:
pass
```
### Valid Typing
- It's allowed that return type is a subtype of class type annotation
```
class DP(IterDataPipe[Tuple]):
def __iter__(self) -> Iterator[Tuple[int, str]]:
pass
class DP(IterDataPipe):
def __iter__(self) -> Iterator[int]:
pass
```
- Default Typing (Any) with/without return hint for `__iter__`
```
class DP(IterDataPipe):
def __iter__(self):
pass
print(DP.type)
class DP(IterDataPipe):
def __iter__(self) -> Iterator:
pass
print(DP.type)
class DP(IterDataPipe):
def __iter__(self) -> Iterator[T_co]:
pass
print(DP.type)
```
- Matched type hints (including equal but not same types)
```
class DP(IterDataPipe[Tuple[T_co, str]]):
def __iter__(self) -> Iterator[Tuple[T_co, str]]:
pass
print(DP.type)
T = TypeVar('T', int, str) # equals to Union[int, str]
class DP(IterDataPipe[Tuple[T, str]]):
def __iter__(self) -> Iterator[Tuple[Union[int, str], str]]:
pass
print(DP.type)
```
### Attribute `type`
The attribute `type` is added into each DataPipe class.
```
def print_helper(cls, obj):
print("DataPipe[{}]\nInstance type: {}"
.format(cls.type, obj.type))
class DP(IterDataPipe[List[int]]):
def __iter__(self) -> Iterator[List[int]]:
pass
print_helper(DP, DP())
class DP(IterDataPipe[Any]):
def __iter__(self) -> Iterator[Any]:
pass
print_helper(DP, DP())
class DP(IterDataPipe[tuple]):
def __iter__(self) -> Iterator[tuple]:
pass
print_helper(DP, DP())
```
## Construct-time
Construct-time type checking can be enabled with the `argument_validation` decorator. Users can opt in by attaching the decorator to the `__init__` function; the input `DataPipe`(s) are then validated against the annotated type hints when the DataPipe is constructed.
```
from torch.utils.data import argument_validation
class DP(IterDataPipe):
@argument_validation
def __init__(self, dp: IterDataPipe[Union[int, tuple]]):
self.dp = dp
def __iter__(self):
for d in self.dp:
yield d
dp = DP(range(10))
```
- When any input is annotated by `IterDataPipe` with detailed typing hints, the `type` of the input instance must be a subtype of the hint.
```
class Temp(IterDataPipe[str]):
def __iter__(self):
pass
dp = DP(Temp())
```
- Example of valid input `DataPipe`
```
class Temp(IterDataPipe[Tuple[int, T_co]]):
def __iter__(self):
pass
dp = DP(Temp())
```
## Runtime
Runtime type checking is enabled by the `runtime_validation` decorator. Users can opt in by attaching the decorator to `__iter__`; it checks that each yielded element is an instance of (a subtype of) the DataPipe's `type` attribute.
Note: this decorator is only allowed to be attached to `__iter__` for now. It can be extended to `__getitem__` and further `nonblocking` functions.
`runtime_validation_disabled` is a context manager that turns off type validation during runtime. It is useful for the DataLoader to disable runtime validation after the first epoch is finished, for better performance. Note: runtime validation is enabled by default.
```
from torch.utils.data import runtime_validation, runtime_validation_disabled
class DP(IterDataPipe[Tuple[int, T_co]]):
def __init__(self, datasource):
self.ds = datasource
@runtime_validation
def __iter__(self):
for d in self.ds:
yield d
```
A `RuntimeError` is raised when a yielded element is not of the declared subtype:
- `str` is not subtype of `int`
```
dp = DP([(1, 1), (2, 2), ('3', 3)])
for d in dp:
print(d)
```
- Context manager to disable the runtime validation
```
with runtime_validation_disabled():
print(list(dp))
```
- `List` is not subtype of `Tuple`
```
dp = DP([(1, 1), (2, 2), [3, 3]])
for d in dp:
print(d)
```
- Context manager to disable the runtime validation
```
with runtime_validation_disabled():
print(list(dp))
```
- No error will be raised when all data pass the validation
```
dp = DP([(1, 1), (2, '2'), (3, 3.)])
for d in dp:
print(d)
```
# Optimal Portfolio Selection I
<img style="float: right; margin: 0px 0px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Separation_theorem_of_MPT.svg/2000px-Separation_theorem_of_MPT.svg.png" width="400px" height="400px" />
In the last class we saw that:
- The LAC (capital allocation line) describes the possible risk-return choices between a risk-free asset and a risky asset.
- Its slope equals the Sharpe ratio of the risky asset.
- The optimal capital allocation for any investor is the tangency point of the investor's indifference curve with the LAC (it depends on individual preferences - risk aversion).
For all of the above, we assumed we already had the optimal (risky) portfolio.
In the following analysis:
**Objectives:**
- What is the optimal portfolio of risky assets?
- What is the best portfolio of risky assets?
- It is a mean-variance efficient portfolio.
- Problem: given a set of risky assets, how do we construct the best combination?
*Reference:*
- Lecture notes from the course "Portfolio Selection and Risk Management", Rice University, available on Coursera.
## 1. Maximizing the Sharpe ratio
### What happens if we have two risky assets?
When we have two or more risky assets, different LACs are available. What do their slopes mean?
<font color=blue> See the board.</font>
Question:
- What is it that we actually want?
**Conclusion:**
- The best portfolio of risky assets does not depend on individual preferences, and therefore it will be the same for everyone.
- That best portfolio maximizes the Sharpe ratio.
- We will call this portfolio the mean-variance efficient (EMV) portfolio.
**Main idea: the optimal portfolio of risky assets is independent of the investor's preferences.**
- The EMV portfolio determines the optimal portfolio of risky assets.
- We will all hold the same portfolio of risky assets (EMV), and we will combine it with the risk-free asset according to our individual preferences (risk aversion).
- The LAC combining the risk-free asset and the EMV portfolio becomes the set of efficient portfolios.
Therefore, the following steps must be taken:
1. Build the minimum-variance frontier.
2. Find the portfolio that maximizes the Sharpe ratio (the EMV portfolio).
3. Build the efficient frontier (LAC) from the point $(0,r_f)$ to the point $(\sigma_s,E[r_s])$ of the EMV portfolio.
4. Combine according to your preferences.
## 2. Analytical solution of the EMV portfolio: the two-asset case.
We want to solve the following problem:
\begin{align}
\max_{w_1,w_2} &\quad \frac{E[r_p]-r_f}{\sigma_p}\\
\text{s.t.} &\quad E[r_p]=w_1E[r_1]+w_2E[r_2]\\
&\quad \sigma_p=\sqrt{w_1^2\sigma_1^2+w_2^2\sigma_2^2+2w_1w_2\rho_{12}\sigma_1\sigma_2}\\
&\quad w_1+w_2=1, \quad w_1,w_2\geq0
\end{align}
which is equivalent to
\begin{align}
\max_{w_1} &\quad \frac{w_1E[r_1]+(1-w_1)E[r_2]-r_f}{\sqrt{w_1^2\sigma_1^2+(1-w_1)^2\sigma_2^2+2w_1(1-w_1)\rho_{12}\sigma_1\sigma_2}}\\
\text{s.t.} &\quad 0\leq w_1\leq1
\end{align}
**Activity.**
The above is a problem of maximizing a function of a single variable over a closed domain. It should not be difficult.
Find the analytical solution to this problem.
Whoever solves it first, and explains it on the board, gets a homework or quiz grade raised to 100.
You should arrive at:
$$w_{1,EMV}=\frac{(E[r_1]-r_f)\sigma_2^2-(E[r_2]-r_f)\sigma_{12}}{(E[r_2]-r_f)\sigma_1^2+(E[r_1]-r_f)\sigma_2^2-((E[r_1]-r_f)+(E[r_2]-r_f))\sigma_{12}}.$$
If nobody has done it within 30 minutes, I will do it myself.
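For reference, a sketch of the first-order condition behind this result (not a full derivation): writing $\Delta_i=E[r_i]-r_f$ and setting the derivative of the Sharpe ratio with respect to $w_1$ equal to zero gives
$$(\Delta_1-\Delta_2)\,\sigma_p^2=\big(w_1\Delta_1+(1-w_1)\Delta_2\big)\big(w_1\sigma_1^2-(1-w_1)\sigma_2^2+(1-2w_1)\sigma_{12}\big),$$
and solving this equation for $w_1$ yields the expression above.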
**Note:**
- Just as we obtained an expression for the weight of the minimum-variance portfolio with two assets, we obtain an expression for the weight of the mean-variance efficient portfolio.
- These activities are certainly a good exercise, and they can be replicated using multivariable techniques (Lagrange multipliers) when there are more than two assets.
- However, the complexity of the problem grows considerably with the number of variables, and the analytical solution stops being viable once we recall that a well-diversified portfolio consists of roughly 50-60 assets.
- In those cases, this problem is solved with numerical routines that carry out the optimization for us.
- That is why I show you how to solve this problem with numerical optimizers: they are a viable solution that scales to more variables.
## 3. Illustrative example.
We return to the example of the stock markets of the $G5$ member countries: the US, the UK, France, Germany, and Japan (labeled 'EU', 'RU', 'Francia', 'Alemania', and 'Japon' in the data).
```
# Importamos pandas y numpy
import pandas as pd
import numpy as np
# Resumen en base anual de rendimientos esperados y volatilidades
annual_ret_summ = pd.DataFrame(columns=['EU', 'RU', 'Francia', 'Alemania', 'Japon'], index=['Media', 'Volatilidad'])
annual_ret_summ.loc['Media'] = np.array([0.1355, 0.1589, 0.1519, 0.1435, 0.1497])
annual_ret_summ.loc['Volatilidad'] = np.array([0.1535, 0.2430, 0.2324, 0.2038, 0.2298])
annual_ret_summ.round(4)
# Matriz de correlación
corr = pd.DataFrame(data= np.array([[1.0000, 0.5003, 0.4398, 0.3681, 0.2663],
[0.5003, 1.0000, 0.5420, 0.4265, 0.3581],
[0.4398, 0.5420, 1.0000, 0.6032, 0.3923],
[0.3681, 0.4265, 0.6032, 1.0000, 0.3663],
[0.2663, 0.3581, 0.3923, 0.3663, 1.0000]]),
columns=annual_ret_summ.columns, index=annual_ret_summ.columns)
corr.round(4)
```
We will also assume that the risk-free rate is $r_f=5\%$.
```
# Tasa libre de riesgo
rf = 0.05
```
Then, we will assume that the available assets are those corresponding to the US ('EU') and Japanese ('Japon') stock markets, plus the risk-free asset.
#### 1. Build the minimum-variance frontier
```
# Vector de w variando entre 0 y 1 con n pasos
N = 101
w = np.linspace(0, 1, N)
# Rendimientos esperados individuales
# Activo1: EU, Activo2:Japon
E1 = annual_ret_summ.loc['Media', 'EU']
E2 = annual_ret_summ.loc['Media', 'Japon']
# Volatilidades individuales
s1 = annual_ret_summ.loc['Volatilidad', 'EU']
s2 = annual_ret_summ.loc['Volatilidad', 'Japon']
# Correlacion
r12 = corr.loc['EU', 'Japon']
# Covarianza
s12 = s1 * s2 * r12
E1, E2, s1, s2, r12, s12
# DataFrame de portafolios:
# 1. Índice: i
# 2. Columnas 1-2: w, 1-w
# 3. Columnas 3-4: E[r], sigma
# 4. Columna 5: Sharpe ratio
portafolios = pd.DataFrame(index=range(N),
data={'w': w,
'1-w': 1 - w,
'Media': w * E1 + (1 - w) * E2,
'Vol': ((w * s1) **2 + ((1 - w) * s2)**2 + 2 * w *(1 - w) * s1 * s2 * r12)**0.5
}
)
portafolios['RS'] = (portafolios['Media'] - rf) / portafolios['Vol']
portafolios
# Importar librerías de gráficos
from matplotlib import pyplot as plt
%matplotlib inline
# Gráfica de dispersión de puntos coloreando
# de acuerdo a SR
plt.figure(figsize=(6, 4))
plt.scatter(portafolios['Vol'], portafolios['Media'], c=portafolios['RS'], cmap='RdYlBu')
plt.colorbar()
plt.xlabel("Volatilidad $\sigma$")
plt.ylabel("Rendimiento esperado $E[r]$")
plt.grid()
```
#### 2. Find the portfolio that maximizes the Sharpe ratio (EMV)
First, we find this portfolio with the formula we obtained:
$$w_{1,EMV}=\frac{(E[r_1]-r_f)\sigma_2^2-(E[r_2]-r_f)\sigma_{12}}{(E[r_2]-r_f)\sigma_1^2+(E[r_1]-r_f)\sigma_2^2-((E[r_1]-r_f)+(E[r_2]-r_f))\sigma_{12}}.$$
```
# Fórmula que obtuvimos
w1EMV = ((E1 - rf) * s2**2 - (E2 - rf) * s12) / ((E2 - rf) * s1**2 + (E1 - rf) * s2**2 - (E1 - rf + E2 - rf) * s12)
w2EMV = 1 - w1EMV
w1EMV, w2EMV
```
Now, let's do it with the scipy.optimize.minimize function
```
# Importar el módulo optimize
from scipy.optimize import minimize
# Función objetivo (-SR)
def minus_SR(w, E1, E2, s1, s2, s12, rf):
Es = w * E1 + (1 - w) * E2
ss = ((w * s1)**2 + ((1 - w) * s2)**2 + 2 * w * (1 - w) * s12)**0.5
SR = (Es - rf) / ss
return -SR
# Dato inicial
w0 = 0.5
# Cotas de las variables
bnds = ((0, 1),)
# Optimización numérica
res = minimize(fun=minus_SR, x0=w0, args=(E1, E2, s1, s2, s12, rf), bounds=bnds)
# Resultado
res
```
With the above, we can obtain the expected return and volatility of the EMV portfolio
```
# Rendimiento esperado y volatilidad del portafolio EMV
w1EMV_opt = res.x
w2EMV_opt = 1 - w1EMV_opt
w1EMV_opt, w2EMV_opt, w1EMV, w2EMV
E_EMV = w1EMV * E1 + w2EMV * E2
s_EMV = ((w1EMV * s1)**2 + (w2EMV * s2)**2 + 2 * w1EMV * w2EMV * s12)**0.5
E_EMV, s_EMV
# Gráfica de dispersión de puntos coloreando
# de acuerdo a SR, y portafolio EMV
plt.figure(figsize=(6, 4))
plt.scatter(portafolios['Vol'], portafolios['Media'], c=portafolios['RS'], cmap='RdYlBu', label='Front. Min. Var.')
plt.plot(s_EMV, E_EMV, 'or', ms=7, label='Portafolio EMV')
plt.legend()
plt.colorbar()
plt.xlabel("Volatilidad $\sigma$")
plt.ylabel("Rendimiento esperado $E[r]$")
plt.grid()
```
#### 3. Build the LAC
Now, we draw the LAC, combining the EMV portfolio with the risk-free asset:
```
# Vector de wp variando entre 0 y 1.5 con n pasos
N = 50
wp = np.linspace(0, 1.5, N)
# DataFrame de CAL:
# 1. Índice: i
# 2. Columnas 1-2: wp, wrf
# 3. Columnas 3-4: E[r], sigma
# 4. Columna 5: Sharpe ratio
LAC = pd.DataFrame(index=range(N), data={'wp': wp,
'wrf': 1 - wp,
'Media': wp * E_EMV + (1 - wp) * rf,
'Vol': wp * s_EMV
})
LAC['RS'] = (LAC['Media'] - rf) / LAC['Vol']
LAC.head()
# Gráfica de dispersión de puntos coloreando
# de acuerdo a SR, portafolio EMV y LAC
plt.figure(figsize=(6, 4))
plt.scatter(portafolios['Vol'], portafolios['Media'], c=portafolios['RS'], cmap='RdYlBu', label='Front. Min. Var.')
plt.plot(s_EMV, E_EMV, 'or', ms=7, label='Portafolio EMV')
plt.plot(LAC['Vol'], LAC['Media'], lw=3, label='LAC')
plt.legend()
plt.colorbar()
plt.xlabel("Volatilidad $\sigma$")
plt.ylabel("Rendimiento esperado $E[r]$")
plt.grid()
plt.axis([0.1, 0.2, 0.12, 0.18])
```
#### 4. Optimal combination according to preferences
With the above data, and the characterization of risk aversion, the optimal combination of the EMV portfolio and the risk-free asset is chosen according to:
$$w^\ast=\frac{E[r_s-r_f]}{\gamma\sigma_s^2}.$$
```
# Para gamma=7
g = 7
w_a = (E_EMV - rf) / (g * s_EMV**2)
w_a, 1 - w_a
w_a * w1EMV, w_a * w2EMV, 1 - w_a
```
Created with Jupyter by Esteban Jiménez Rodríguez.
# Perceptron
We will now implement the perceptron classifier. We put ourselves in a setting where we have access to training examples $\boldsymbol{x}_i$, each associated with a target $y\in\{-1,1\}$.
The perceptron classifier is a simple model that consists of a single neuron with a step activation function (also known as a [Heaviside step function](https://en.wikipedia.org/wiki/Heaviside_step_function)).
One can visualize it as follows:

If you have questions or comments : charlotte[dot]laclau[at]univ-grenoble-alpes[dot]fr
## Forward propagation
Given an n-dimensional input $x\in\mathbb{R}^n$ and taking into account the bias unit, one moves from the input to the output with two steps:
- $s= \sum\limits_{i=0}^n w_ix_i$, where for consistency assume that the bias unit is $x_0$
- Application of the step function on s, that is
$a = \begin{cases}
1 & s\geq 0 \\
-1 & s < 0
\end{cases}
$
## Error propagation
Repeat until max_epochs (maximum number of iterations) or convergence is reached:
For each example in the training set:
1. Forward pass to calculate $a$
2. If the example is misclassified, update each weight $w_i$ with: $w_i^{(t+1)} = w_i^{(t)} + \eta\, x_i y_i$
## Why these update equations?
We are in the standard supervised classification setting, where one has $k$ examples $\vec{X}=\{\vec{x}_1, \ldots, \vec{x}_k\}$, with $\vec{x}_k \in \mathbb{R}^n$. Each $\vec{x}_k \in \vec{X}$ is associated with a category $y_k \in \mathbb{Y}$, from a pre-defined set of categories. In the binary case $\mathbb{Y}=\{-1, +1\}$.
We then want to learn a vector $\vec{w} \in \mathbb{R}^{n+1}$ to perform the classification step described above. For the weight vector $\vec{w}$ we moved from $n$ to $n+1$ dimensions to account for the bias unit.
Using the perceptron algorithm we want to minimize the number of examples we misclassify and, if the examples are linearly separable, eventually misclassify nothing. Hence, one can define a simple loss function over the misclassified examples:
$\mathbb{L} = -\sum\limits_{k \in \text{misclassified}} y_k(\vec{w}\cdot\vec{x}_k)$
In the online case, where one updates the weights for a single misclassified instance $k$, this becomes:
$\mathbb{L} = - y_k(\vec{w}\cdot\vec{x}_k)$
To change the direction of $\vec{w}$ when we misclassify, we take the gradient of this loss:
$\nabla \mathbb{L} = \frac{\partial \mathbb{L}}{\partial \vec{w}}= - y_k \vec{x}_k$
We scale the update using the learning rate $\eta$ and take a step in the direction of the negative gradient:
$\vec{w}^{(t+1)} = \vec{w}^{(t)} + \eta\, y_k \vec{x}_k$
## The next steps
TBD:
1. create a simple training set like the OR function
2. Learn the OR function
Pseudo-code :
`input: X, Y, eta, w, n
for i in 1:n
pick an example randomly
result <- w*x
if (result<0)
result=0
else
result =1
error <- Y - result
w <- w + eta*error*x`
(hint: the if statement could be defined beforehand as a step function)
3. We are now going to work on the sonar.txt dataset.
1. Download the dataset using the read_table function of the pandas library. This will create a dataframe (similar to R). Before starting, you can check the size of the data or other basic statistics (head, shape, etc.). The last column contains the class of each object.
2. Separate the features from the target (see loc); Transform the target into numeric values.
3. Split the dataset into train and test files (see train_test_split)
4. Use the perceptron classifier available in sckit-learn. The function presents a lot of options that you should explore.
5. Repeat the same operations but using K-folds instead of a single train/test split (see the sketch at the end of this notebook).
4. If you feel comfortable with Python, code the perceptron classifier
You may want to check: `np.array`, `numpy.random.rand`, `numpy.dot`, `random.choice`
```
import random, numpy as np, matplotlib.pyplot as plt, time
%matplotlib inline
# Training data for the first question
training_data = [
(np.array([0,0,1]), 0),
(np.array([0,1,1]), 1),
(np.array([1,0,1]), 1),
(np.array([1,1,1]), 1),
]
def unit_step(value):
if value < 0:
return 0
else:
return 1
# or unit_step = lambda x: 0 if x <0 else 1
n = 20
eta = 0.2
errors = []
w = np.random.rand(3)
for i in range(n):
x, expected = random.choice(training_data)
result = np.dot(w,x)
error = expected - unit_step(result)
w += eta*error*x
errors.append(error)
for x, _ in training_data:
result = np.dot(x, w)
print("{}: {} -> {}".format(x[:2], result, unit_step(result)))
# Part 1
import sklearn as sk
from sklearn.linear_model import Perceptron
import pandas as pd
# Load dataset
sonar = pd.read_table('sonar.txt', header = None, delimiter=',')
sonar.head()
# In case of missing values, you can remove NaN elements using dropna function
# sonar = sonar.dropna(how="any", axis=0)
```
### Part 2: pre-processing
Before splitting the data into the test and the train set, you should check if you need to normalise the data. Usually, it is necessary if the scales of the variables are too different from one another. For the sonar data, all variables have values between 0 and 1, so it's fine!
```
# Import the function for splitting the data from sklearn
from sklearn.model_selection import train_test_split
#Separate data from label. To access elements of a dataframe using the position, use .loc
x = sonar.loc[:,range(60)]
target = sonar.loc[:,60]
# Use unique function to describe the possible labels
print("Unique labels: {0}".format(np.unique(target)))
```
Split the data: random_state allows you to control the randomness of your training and test sets. Here I set it to 0 (it could be any integer of your choice).
By doing so, every time I run the following lines with random_state set to 0, I will create the same train and test sets.
```
random_state = 0
# test_size indicates the proportion of the instances which are used of the test set
x_train, x_test, y_train, y_test = train_test_split(x, target, test_size=0.20, random_state=random_state)
```
### Part 3: Finally the perceptron!
Create the perceptron instance. This simply creates the model structure with the given hyper-parameters (options).
I set some of the options of the perceptron: the maximum number of iterations and the learning rate.
You also have options for regularisation to avoid overfitting (check the perceptron documentation).
```
# Options - hyperparameters
max_iter = 10
eta0= 0.1
# Create the perceptron instance. Again the random state (controls the random initialisation of weights)
clf = Perceptron(max_iter = max_iter, eta0=eta0, random_state=random_state)
```
Train the perceptron on the training instances. In this case, the model uses both the example and the label to learn the weights. This is crucial but will not give any information about the generalization capacity of your model.
```
# Training
clf.fit(x_train, y_train)
```
Step 2: to evaluate the true generalization performance of the classifier, I will use it to predict the labels of the test set, which was not used when the model was trained.
```
# Make prediction
y_pred = clf.predict(x_test)
```
Step 3: evaluate the performance in terms of accuracy. The accuracy is simply the proportion of labels that are correctly predicted by your model.
```
# Measure the performance using the accuracy score
from sklearn.metrics import accuracy_score
print("accuracy: {0:.2f}%".format(accuracy_score(y_test, y_pred)*100))
```
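Step 5 of the exercise asks to repeat the evaluation with K-folds. A minimal sketch using `cross_val_score` (the choice of 5 folds is arbitrary; `x` and `target` are the features and labels defined above):
```
# K-fold cross-validation of the perceptron (5 folds chosen arbitrarily)
from sklearn.model_selection import cross_val_score

clf_cv = Perceptron(max_iter=max_iter, eta0=eta0, random_state=random_state)
scores = cross_val_score(clf_cv, x, target, cv=5, scoring="accuracy")
print("fold accuracies:", scores)
print("mean accuracy: {0:.2f}%".format(scores.mean() * 100))
```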
---
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
from google.colab import drive
import os
```
**Mount Drive**
```
drive.mount('/content/drive', force_remount=True)
%cd /content/drive/My\ Drive/
os.chdir('/content/drive/My Drive/Colab Notebooks/Machine Learning/linear regression')
# Load the CSV file into a pandas dataframe
path ="Data"
housedata = pd.read_csv(path+"/house.csv")
housedata.head(5)
```
Convert the text features to numeric values.
```
def convert_text(text):
if type(text) == pd.core.series.Series:
new_dict ={}
value = 1
for key in text.unique():
new_dict[key] = value
value = value + 1
text = text.apply(lambda s: new_dict[s])
return new_dict, text
nbhdfeature , housedata['nbhd'] = convert_text(housedata['nbhd'])
brickfeature , housedata['brick'] = convert_text(housedata['brick'])
housedata.head(5)
```
Now I will split the dataset into a train set and a test set according to the given train-set size.
```
def split(data, size):
train = data.sample(frac = size)
test = data.drop(train.index)
return train , test
```
Now we will solve the problem using three different techniques \
1. Gaussian Elimination
2. Cholesky Decomposition
3. QR Decomposition
**Gaussian Elimination Solver**
```
def gauss_method(a,b):
augmentedMatrix = np.hstack((a,b)) * 1.0
n = augmentedMatrix.shape[0]
for i in range(0, n):
"""Set default pivot value as diagonal matrix """
pivot = augmentedMatrix[i][i]
pivotRow = i
"""Check for a bigger pivot value"""
for j in range(i+1, n):
if abs(augmentedMatrix[j][i]) > abs(pivot):
pivot = augmentedMatrix[j][i]
pivotRow = j
"""If pivot has changed. Swap the rows"""
if pivotRow != i:
for j in range(0, n+1):
augmentedMatrix[pivotRow][j], augmentedMatrix[i][j] = augmentedMatrix[i][j], augmentedMatrix[pivotRow][j]
"""Make all the column values below pivot as zero by performing matrix row operations"""
for j in range(i+1, n):
op = -1 * (augmentedMatrix[j][i]/augmentedMatrix[i][i])
for k in range(0, n+1):
augmentedMatrix[j][k] = augmentedMatrix[j][k] + ( op * augmentedMatrix[i][k] )
beta = np.zeros(n)
for i in range(n - 1, -1,-1):
diff = 0
for k in range (i + 1, n):
diff = diff + (beta[k] * augmentedMatrix[i][k])
beta[i] = (augmentedMatrix[i][n] - diff)/augmentedMatrix[i][i]
return beta
```
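Before applying the solver to the housing data, a quick sanity check on a tiny hand-made system (illustrative values only) confirms it agrees with NumPy:
```
# Tiny 2x2 system: 2x + y = 3, x + 3y = 5  ->  expected solution [0.8, 1.4]
A = np.array([[2.0, 1.0], [1.0, 3.0]])
c = np.array([[3.0], [5.0]])
print(gauss_method(A, c))
print(np.linalg.solve(A, c).ravel())
```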
**Cholesky Decomposition**
```
def cholesky_method(a,b):
    a = a * 1.0
    b = np.asarray(b, dtype=float).reshape(-1)
    n = a.shape[0]
    # Create zero matrix for L (a must be square)
    if a.shape[0] != a.shape[1]:
        raise ValueError("Matrix a must be square")
    L = np.zeros(shape = a.shape)
    # Perform the Cholesky decomposition: a = L @ L.T
    for i in range(0,n):
        for k in range(0, i + 1):
            if i == k:
                # Diagonal entry: square root of the remaining pivot
                diag = a[i][i]
                for m in range(0,k):
                    diag = diag - (L[i][m] * L[i][m])
                L[i][k] = math.sqrt(diag)
            else:
                # Off-diagonal entry
                base = 0
                for m in range(0,k):
                    base = base + (L[i][m] * L[k][m])
                L[i][k] = (a[i][k] - base)/L[k][k]
    # Forward substitution: solve L y = b
    y = np.zeros(n)
    for i in range(0,n):
        sub = 0
        for m in range(0,i):
            sub = sub + (y[m] * L[i][m])
        y[i] = (b[i] - sub)/L[i][i]
    # Backward substitution: solve L.T beta = y
    beta = np.zeros(n)
    u = np.transpose(L)
    for i in range(n-1,-1,-1):
        sub = 0
        for m in range(i+1,n):
            sub = sub + (beta[m] * u[i][m])
        beta[i] = (y[i] - sub)/u[i][i]
    return beta
```
**QR Decomposition**
```
def QR_solver(a,b):
    a = np.array(a, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1)
    n = a.shape[1]
    copy = np.array(a, copy=True)
    # Gram-Schmidt: orthogonalise each column against the previous columns
    for i in range(1, n):
        sub = 0
        for k in range(i-1,-1,-1):
            sub = sub + (np.dot(copy[:,i],copy[:,k])/np.dot(copy[:,k],copy[:,k]))*copy[:,k]
        copy[:,i] = copy[:,i] - sub
    # Normalise each column to unit length
    for i in range(0,n):
        copy[:,i] = copy[:,i]/np.sqrt(np.sum(np.square(copy[:,i])))
    Q = copy
    R = np.dot(np.transpose(Q),a)
    b = np.dot(np.transpose(Q),b)
    # Back substitution: solve R beta = Q.T b
    beta = np.zeros(n)
    for i in range(n-1,-1,-1):
        sub = 0
        for m in range(i+1,n):
            sub = sub + (beta[m] * R[i][m])
        beta[i] = (b[i] - sub)/R[i][i]
    return beta
```
**Calculate the RMSE**
```
def rmse(y_true, y_predict):
    n = y_true.shape[0]
    return np.sqrt(np.sum(np.square(y_true - y_predict)) / n)
```
**Calculate Linear Regression prediction**
```
def linear_predict(X, M):
X = np.insert(X,0,1,axis=1)
return np.dot(X, np.transpose(M))
```
**Calculate Normal Equation**
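As a brief reminder (standard least-squares algebra, not specific to this notebook), the coefficient vector $\beta$ satisfies the normal equations
$$X^{T}X\,\beta = X^{T}y$$
so each solver above only has to handle the square system $A\beta = c$ with $A = X^{T}X$ and $c = X^{T}y$, which is what the function below builds before dispatching to the chosen solver.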
```
def normal_equationSolver(X,Y,S=gauss_method):
if isinstance(X,np.ndarray) and isinstance(Y,np.ndarray):
if X.shape[0] != Y.shape[0]:
raise ValueError("The shape of X and Y is inconsistant")
X = np.insert(X, 0, 1, axis=1)
Xtranspose = X.T
XtX = np.dot(Xtranspose,X)
XtY = np.dot(Xtranspose,Y)
return S(XtX, XtY)
#Split the dataset into train and test sets and then the subsets xtrain, ytrain, xtest, ytest
train, test = split(housedata,0.8)
ytrain = pd.DataFrame(train,columns=['price']).to_numpy()
ytest = pd.DataFrame(test,columns=['price']).to_numpy()
print(ytrain.shape)
print(ytest.shape)
xtrain = pd.DataFrame(train,columns = ['sqft','bedrooms','bathrooms','brick','nbhd','offers']).to_numpy()
xtest = pd.DataFrame(test,columns = ['sqft','bedrooms','bathrooms','brick','nbhd','offers']).to_numpy()
gausiansolver = normal_equationSolver(xtrain,ytrain)
ypredictGausian = linear_predict(xtest,gausiansolver)
print("RMSE", (rmse(ytest.flatten(),ypredictGausian)))
print("Average Residual", (ytest.flatten() - ypredictGausian).mean())
#Plotting the Graph
plt.plot(ypredictGausian - ytest.flatten(), ytest,"ro",label="ytest - ybar vs ytest")
plt.title("Plot for gaussian solver")
plt.xlabel("ytest - ybar")
plt.ylabel("ytest")
plt.show()
```
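As an optional cross-check (a sketch that assumes the same `xtrain`, `ytrain` and `gausiansolver` from the cell above), NumPy's least-squares routine should return coefficients close to the Gaussian-elimination solution:
```
# Compare the hand-rolled solver against NumPy's least-squares reference
X1 = np.insert(xtrain, 0, 1, axis=1)          # add the intercept column, as in linear_predict
ls_beta, *_ = np.linalg.lstsq(X1, ytrain, rcond=None)
print(ls_beta.ravel())
print(gausiansolver)
```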
**Solving for Cholesky Decomposition**
```
#Split the dataset into train and test sets and then the subsets xtrain, ytrain, xtest, ytest
train, test = split(housedata,0.8)
ytrain = pd.DataFrame(train,columns=['price']).to_numpy()
ytest = pd.DataFrame(test,columns=['price']).to_numpy()
print(ytrain.shape)
print(ytest.shape)
xtrain = pd.DataFrame(train,columns = ['sqft','bedrooms','bathrooms','brick','nbhd','offers']).to_numpy()
xtest = pd.DataFrame(test,columns = ['sqft','bedrooms','bathrooms','brick','nbhd','offers']).to_numpy()
Choleskysolver = normal_equationSolver(xtrain,ytrain,S=cholesky_method)
ypredictCholesky = linear_predict(xtest,Choleskysolver)
print("RMSE", (rmse(ytest.flatten(),ypredictCholesky)))
print("Average Residual", (ytest.flatten() - ypredictCholesky).mean())
#Plotting the graph
plt.plot(ypredictCholesky - ytest.flatten(), ytest,"ro",label="ytest - ybar vs ytest")
plt.title("Plot for Cholesky solver")
plt.xlabel("ytest - ybar")
plt.ylabel("ytest")
plt.show()
```
**Solving for QR Decomposition**
```
#Split the dataset into train and test sets and then the subsets xtrain, ytrain, xtest, ytest
train, test = split(housedata,0.8)
ytrain = pd.DataFrame(train,columns=['price']).to_numpy()
ytest = pd.DataFrame(test,columns=['price']).to_numpy()
print(ytrain.shape)
print(ytest.shape)
xtrain = pd.DataFrame(train,columns = ['sqft','bedrooms','bathrooms','brick','nbhd','offers']).to_numpy()
xtest = pd.DataFrame(test,columns = ['sqft','bedrooms','bathrooms','brick','nbhd','offers']).to_numpy()
QRsolver = normal_equationSolver(xtrain,ytrain,S=QR_solver)
ypredictQR = linear_predict(xtest,QRsolver)
print("RMSE", (rmse(ytest.flatten(),ypredictQR)))
print("Average Residual", (ytest.flatten() - ypredictQR).mean())
#Plotting the graph
plt.plot(ypredictQR - ytest.flatten(), ytest,"ro",label="ytest - ybar vs ytest")
plt.title("Plot for QR solver")
plt.xlabel("ytest - ybar")
plt.ylabel("ytest")
plt.show()
```
---
# Trial 1
```
import pandas as pd
import numpy as np
import os
from numpy.random import randn as rn
```
# 1. Get the two df's
```
link = "C:\\Users\\MAHE\\Desktop\\Data Science\\Projects\\data\\banknifty"
os.chdir(link)
bn = pd.read_csv("all_here.csv", sep = ",")
bn.tail()
os.chdir("C:\\Users\\MAHE\\Desktop\\Data Science\\Projects\\data")
df = pd.read_csv("data.csv")
df.tail()
```
### Setting the "Date" as the index for the df of data
```
df = df.set_index("Date")
```
# 2. Merge both the df's:
```
merge_df= pd.merge(df,bn,on = "Date")
merge_df.head()
```
### drop extra columns:
```
merge_df = merge_df.drop(["Turnover in Lacs","Premium Turnover in Lacs","Open Int","Change in OI","Underlying Value","Shares Traded","Turnover (Rs. Cr)","Symbol","Option Type","Strike Price","No. of contracts"],axis = 1)
merge_df.head()
```
### Set "Expiry" as Index
Expiry: one expiry per week
Week: 5 working dates
Since we will be using multi-level indexing, it is more logical to put "Expiry" as the outer index and "Date" as the inner index.
```
merge_df = merge_df.set_index("Expiry")
merge_df.head()
close_val = merge_df["Close_x"].loc["10-Jan-2019"].mean()
close_val
```
# 3. Trial for "10-Jan-2019":
```
d = merge_df.loc["10-Jan-2019"]
d.head()
```
## Working with MultiLevel Indexing:
### Let the Outer-Index be "Expiry" and the inner index be "Date":
Expiry maps to 5 Dates in a week
```
inside = [] #Date
for i in d["Date"]:
inside.append(i)
inside
outside = list(np.tile("10-Jan-2019",632)) #Expiry
outside
#632 is used because the inside list holds 632 entries:
len(inside)
hier_index = list(zip(outside,inside))
hier_index
hier_index = pd.MultiIndex.from_tuples(hier_index)
hier_index
```
## Create a blueprint of the required final dataset:
### Use the random concept of numpy:
```
np.random.seed(101)
df1 = pd.DataFrame(data = np.round(rn(632,9),2),index = hier_index,columns = ['x','a','b','c','d','e','f','g','h'])
df1.head()
```
### Remove the duplicates in inside:
```
dates = set(inside)
dates = list(dates)
dates
dates.sort()
dates
```
## Get the rounded close values that belong to the 2nd dataframe "df" :
### Hence, take a temporary copy of the dataframe "d" and set "Date" as index:
```
temp_d = d
temp_d = temp_d.set_index("Date")
temp_d.head()
close_list = []
for i in dates:
temp = np.round(temp_d["Close_x"].loc[i].mean(),decimals = -2)
close_list.append(temp)
close_list
```
### Set the close values to column "x" of the blueprint:
```
k = 0
for i in dates:
df1.loc["10-Jan-2019"].loc[[i],["x"]] = close_list[k]
k+=1
df1
```
### Set a,b,c,d,e,f,g,h appropriately:
### Make a function to:
### 1. set the values a,b,c,d,e,f,g,h to x+100, x+200, x+300, x+400 and x-100, x-200, x-300, x-400 as shown
```
def addsubround(x,a,b,c,d,e,f,g,h):
a = x+100
b = x+200
c = x+300
d = x+400
e = x-100
f = x-200
g = x-300
h = x-400
return (a,b,c,d,e,f,g,h)
k = 0
for i in dates:
tt = int((df1.loc["10-Jan-2019"].loc[[i],["x"]]).mean())
l = []
a,b,c,d,e,f,g,h = 0,0,0,0,0,0,0,0
l = addsubround(tt,a,b,c,d,e,f,g,h)
df1.loc["10-Jan-2019"].loc[[i],["a"]] = l[0]
df1.loc["10-Jan-2019"].loc[[i],["b"]] = l[1]
df1.loc["10-Jan-2019"].loc[[i],["c"]] = l[2]
df1.loc["10-Jan-2019"].loc[[i],["d"]] = l[3]
df1.loc["10-Jan-2019"].loc[[i],["e"]] = l[4]
df1.loc["10-Jan-2019"].loc[[i],["f"]] = l[5]
df1.loc["10-Jan-2019"].loc[[i],["g"]] = l[6]
df1.loc["10-Jan-2019"].loc[[i],["h"]] = l[7]
k+=1
df1
```
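For reference, a more concise alternative (a sketch that assumes `df1['x']` has already been filled as above) derives every offset column in one pass instead of looping over dates:
```
# Compute each offset column directly from "x"
offsets = {'a': 100, 'b': 200, 'c': 300, 'd': 400,
           'e': -100, 'f': -200, 'g': -300, 'h': -400}
for col, off in offsets.items():
    df1[col] = df1['x'] + off
df1.head()
```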
---
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
import tensorflow as tf
from malaya.train.model.bigbird import modeling, utils
bert_config = {
'attention_probs_dropout_prob': 0.1,
'hidden_act': 'gelu',
'hidden_dropout_prob': 0.1,
'hidden_size': 256,
'initializer_range': 0.02,
'intermediate_size': 1024,
'max_position_embeddings': 2048,
'max_encoder_length': 1024,
'max_decoder_length': 1024,
'num_attention_heads': 4,
'num_hidden_layers': 2,
'type_vocab_size': 2,
'scope': 'bert',
'use_bias': True,
'rescale_embedding': False,
'vocab_model_file': None,
'attention_type': 'block_sparse',
'block_size': 16,
'num_rand_blocks': 3,
'vocab_size': 32000,
'couple_encoder_decoder': False,
'beam_size': 1,
'alpha': 0.0,
'label_smoothing': 0.1,
'norm_type': 'postnorm',
}
import sentencepiece as spm
vocab = 'sp10m.cased.translation.model'
sp = spm.SentencePieceProcessor()
sp.Load(vocab)
class Encoder:
def __init__(self, sp):
self.sp = sp
def encode(self, s):
return self.sp.EncodeAsIds(s) + [1]
def decode(self, ids, strip_extraneous=False):
return self.sp.DecodeIds(list(ids))
encoder = Encoder(sp)
model = modeling.TransformerModel(bert_config)
X = tf.placeholder(tf.int32, [None, None])
r = model(X, training = False)
r
logits = tf.identity(r[0][2], name = 'logits')
logits
ckpt_path = tf.train.latest_checkpoint('bigbird-small-en-ms')
ckpt_path
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
saver.restore(sess, ckpt_path)
pad_sequences = tf.keras.preprocessing.sequence.pad_sequences
import re
from unidecode import unidecode
def cleaning(string):
return re.sub(r'[ ]+', ' ', unidecode(string.replace('\n', ' '))).strip()
string = """
Amongst the wide-ranging initiatives proposed are a sustainable food labelling framework, a reformulation of processed foods, and a sustainability chapter in all EU bilateral trade agreements. The EU also plans to publish a proposal for a legislative framework for sustainable food systems by 2023 to ensure all foods on the EU market become increasingly sustainable.
"""
cleaning(string)
encoded = encoder.encode(f'{cleaning(string)}') + [1]
s = pad_sequences([encoded], padding='post', maxlen = 1024)
%%time
l = sess.run(r[0][2], feed_dict = {X: s})
encoder.decode([i for i in l[0].tolist() if i > 0])
# !wget https://f000.backblazeb2.com/file/malay-dataset/test-en-ms.tar.gz
# !tar -zxf test-en-ms.tar.gz
batch_size = 24
path = 'test-en'
with open(os.path.join(path, 'left.txt')) as fopen:
left = fopen.read().split('\n')
with open(os.path.join(path, 'right.txt')) as fopen:
right = fopen.read().split('\n')
len(left), len(right)
%%time
encoded = encoder.encode(left[0]) + [1]
s = pad_sequences([encoded], padding='post', maxlen = 1024)
%%time
p = sess.run(logits, feed_dict = {X: s}).tolist()
results = []
for row in p:
results.append([i for i in row if i not in [0, 1]])
results
from tensor2tensor.utils import bleu_hook
bleu_hook.compute_bleu(reference_corpus = [encoder.encode(right[0])],
translation_corpus = results)
from tqdm import tqdm
results = []
for i in tqdm(range(0, len(left), batch_size)):
index = min(i + batch_size, len(left))
x = left[i: index]
encoded = [encoder.encode(l) + [1] for l in x]
batch_x = pad_sequences(encoded, padding='post', maxlen = 1024)
p = sess.run(logits, feed_dict = {X: batch_x}).tolist()
result = []
for row in p:
result.append([i for i in row if i not in [0, 1]])
results.extend(result)
rights = [encoder.encode(r) for r in right[:len(results)]]
bleu_hook.compute_bleu(reference_corpus = rights,
translation_corpus = results)
saver = tf.train.Saver(tf.trainable_variables())
saver.save(sess, 'output/model.ckpt')
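# Collect the names of the graph nodes to keep when freezing: variables, the input
# placeholder and the logits output, while excluding optimizer-related nodes.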
strings = ','.join(
[
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('Variable' in n.op
or 'Placeholder' in n.name
or 'logits' in n.name
or 'alphas' in n.name
or 'self/Softmax' in n.name)
and 'adam' not in n.name
and 'beta' not in n.name
and 'global_step' not in n.name
and 'gradients' not in n.name
]
)
strings.split(',')
def freeze_graph(model_dir, output_node_names):
if not tf.gfile.Exists(model_dir):
raise AssertionError(
"Export directory doesn't exists. Please specify an export "
'directory: %s' % model_dir
)
checkpoint = tf.train.get_checkpoint_state(model_dir)
input_checkpoint = checkpoint.model_checkpoint_path
absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
output_graph = absolute_model_dir + '/frozen_model.pb'
clear_devices = True
with tf.Session(graph = tf.Graph()) as sess:
saver = tf.train.import_meta_graph(
input_checkpoint + '.meta', clear_devices = clear_devices
)
saver.restore(sess, input_checkpoint)
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
tf.get_default_graph().as_graph_def(),
output_node_names.split(','),
)
with tf.gfile.GFile(output_graph, 'wb') as f:
f.write(output_graph_def.SerializeToString())
print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('output', strings)
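# Apply graph transforms (including weight quantization) to the frozen model
# to shrink the exported .pb file.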
from tensorflow.tools.graph_transforms import TransformGraph
transforms = ['add_default_attributes',
'remove_nodes(op=Identity, op=CheckNumerics, op=Dropout)',
'fold_batch_norms',
'fold_old_batch_norms',
'quantize_weights(fallback_min=-10, fallback_max=10)',
'strip_unused_nodes',
'sort_by_execution_order']
pb = 'output/frozen_model.pb'
input_graph_def = tf.GraphDef()
with tf.gfile.FastGFile(pb, 'rb') as f:
input_graph_def.ParseFromString(f.read())
inputs = ['Placeholder']
transformed_graph_def = TransformGraph(input_graph_def,
inputs,
['logits'], transforms)
with tf.gfile.GFile(f'{pb}.quantized', 'wb') as f:
f.write(transformed_graph_def.SerializeToString())
def load_graph(frozen_graph_filename, **kwargs):
with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
# https://github.com/onnx/tensorflow-onnx/issues/77#issuecomment-445066091
# to fix import T5
for node in graph_def.node:
if node.op == 'RefSwitch':
node.op = 'Switch'
            for index in range(len(node.input)):
if 'moving_' in node.input[index]:
node.input[index] = node.input[index] + '/read'
elif node.op == 'AssignSub':
node.op = 'Sub'
if 'use_locking' in node.attr:
del node.attr['use_locking']
elif node.op == 'AssignAdd':
node.op = 'Add'
if 'use_locking' in node.attr:
del node.attr['use_locking']
elif node.op == 'Assign':
node.op = 'Identity'
if 'use_locking' in node.attr:
del node.attr['use_locking']
if 'validate_shape' in node.attr:
del node.attr['validate_shape']
if len(node.input) == 2:
node.input[0] = node.input[1]
del node.input[1]
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def)
return graph
g = load_graph('output/frozen_model.pb')
x = g.get_tensor_by_name('import/Placeholder:0')
logits = g.get_tensor_by_name('import/logits:0')
test_sess = tf.InteractiveSession(graph = g)
%%time
l = test_sess.run(logits, feed_dict = {x: s})
encoder.decode([i for i in l[0].tolist() if i > 0])
g = load_graph('output/frozen_model.pb.quantized')
x = g.get_tensor_by_name('import/Placeholder:0')
logits = g.get_tensor_by_name('import/logits:0')
test_sess = tf.InteractiveSession(graph = g)
%%time
l = test_sess.run(logits, feed_dict = {x: s})
encoder.decode([i for i in l[0].tolist() if i > 0])
```
---
## The min-max normalization using .transform()
A very common operation is the min-max normalization. It consists of rescaling our value of interest by subtracting the minimum value and dividing the result by the difference between the maximum and the minimum value. For example, to rescale students' weight data spanning from 160 pounds to 200 pounds, you subtract 160 from each student's weight and divide the result by 40 (200 - 160).
You're going to define and apply the min-max normalization to all the numerical variables in the restaurant data. You will first group the entries by the time the meal took place (Lunch or Dinner) and then apply the normalization to each group separately.
Instructions
1. Define the min-max normalization using the lambda method.
2. Group the data according to the time the meal took place.
3. Apply the transformation to the grouped data.
```
# Import pandas and NumPy
import pandas as pd
import numpy as np
# Import dataset
restaurant_data = pd.read_csv('restaurant.csv')
# Define the min-max transformation
min_max_tr = lambda x: (x - x.min()) / (x.max() - x.min())
# Group the data according to the time
restaurant_grouped = restaurant_data.groupby('time')
# Apply the transformation
restaurant_min_max_group = restaurant_grouped.transform(min_max_tr)
restaurant_min_max_group.head()
```
## Transforming values to probabilities
In this exercise, we will apply a probability distribution function to a pandas DataFrame with group-related parameters by transforming the tip variable to probabilities.
The transformation will be an exponential transformation. The exponential distribution is defined as

where λ (lambda) is the mean of the group that the observation x belongs to.
You're going to apply the exponential distribution transformation to the tip left at each table in the dataset, after grouping the data according to the time of day the meal took place. Remember to use each group's mean for the value of λ.
In Python, you can use the exponential as `np.exp()` from the NumPy library and the mean value as `.mean()`.
Instructions
1. Define the exponential distribution transformation `exp_tr`.
2. Group the data according to the time the meal took place.
3. Apply the transformation to the grouped data.
```
# Define the exponential transformation
exp_tr = lambda x: np.exp(-x.mean()*x) * x.mean()
# Group the data according to the time
restaurant_grouped = restaurant_data.groupby('time')
# Apply the transformation
restaurant_exp_group = restaurant_grouped['tip'].transform(exp_tr)
restaurant_exp_group.head()
```
## Validation of normalization
For this exercise, we will perform a z-score normalization and verify that it was performed correctly.
A distinct characteristic of normalized values is that they have a mean equal to zero and standard deviation equal to one.
After you apply the normalization transformation, you can group again on the same variable, and then check the mean and the standard deviation of each group.
You will apply the normalization transformation to every numeric variable in the `poker_grouped` dataset, which is the `poker_hands` dataset grouped by `Class`.
Instructions
1. Apply the normalization transformation to the grouped object `poker_grouped`.
2. Group `poker_trans` by class and print the mean and standard deviation to validate the normalization was done correctly.
```
# Import dataset
poker_hands = pd.read_csv('poker_hands.csv')
poker_grouped = poker_hands.groupby('Class')
zscore = lambda x: (x - x.mean()) / x.std()
# Apply the transformation
poker_trans = poker_grouped.transform(zscore)
poker_trans.head()
# Re-group the grouped object
poker_regrouped = poker_trans.groupby(poker_hands['Class'])
# Print each group's means and standard deviation
print(np.round(poker_regrouped.mean(), 3), '\n')
print(poker_regrouped.std())
```
## When to use transform()?
The `.transform()` function applies a function to all members of each group. Which of the following transformations would produce the same results in the whole dataset regardless of groupings?
`lambda x: np.random.randint(0,10)`
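That lambda ignores the group values entirely, so grouping cannot change its output. Any function that does use the group values, such as a group mean, depends on the grouping; a tiny sketch on hypothetical data shows how `.transform()` broadcasts each group-level result back to the original shape:
```
# Each row receives its own group's mean; the output keeps the original index
demo = pd.DataFrame({'group': ['a', 'a', 'b'], 'value': [1, 2, 10]})
print(demo.groupby('group')['value'].transform('mean'))
```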
## Identifying missing values
The first step before missing value imputation is to identify if there are missing values in our data, and if so, from which group they arise.
For the same `restaurant_data` data you encountered in the lesson, an employee erased by mistake the tips left at 65 tables. The question at stake is how many missing entries came from tables where smokers were present vs tables with no smokers present.
Your task is to group both datasets according to the `smoker` variable, count the number of present values and then calculate the difference.
**We're imputing tips to get you to practice the concepts taught in the lesson. From an ethical standpoint, you should not impute financial data in real life, as it could be considered fraud.**
Instructions
1. Group the data according to smoking status.
2. Calculate the number of non-missing values in each group.
3. Print the number of missing values in each group.
```
# Import dataset
restaurant_nan = pd.read_csv('restaurant_nan.csv')
# Group both objects according to smoke condition
restaurant_nan_grouped = restaurant_nan.groupby('smoker')
# Store the number of present values
restaurant_nan_nval = restaurant_nan_grouped['tip'].count()
# Print the group-wise missing entries
restaurant_nan_grouped['total_bill'].count() - restaurant_nan_nval
```
## Missing value imputation
As the majority of real-world data contains missing entries, replacing these entries with sensible values can increase the insight you can get from your data.
In the restaurant dataset, the "total_bill" column has some missing entries, meaning that you have not recorded how much some tables have paid. Your task in this exercise is to replace the missing entries with the **median** value of the amount paid, according to whether the entry was recorded on lunch or dinner (time variable).
Instructions
1. Define the lambda function that fills missing values with the median.
2. Group the data according to the time of each entry.
3. Apply and print the pre-defined transformation to impute the missing values in the `restaurant_data` dataset.
```
# Define the lambda function
missing_trans = lambda x: x.fillna(x.median())
# Group the data according to time
restaurant_grouped = restaurant_data.groupby('time')
# Apply the transformation
restaurant_impute = restaurant_grouped.transform(missing_trans)
restaurant_impute.head()
```
## When to use filtration?
When applying the `filter()` function on a grouped object, what **can** you use as a criterion for filtering? (A short sketch follows the list below.)
- The number of missing values of a feature.
- The numerical mean of a feature.
- The numerical mean of more than one feature.
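Unlike `.transform()`, which keeps every row, `.filter()` drops whole groups that fail the criterion. A minimal sketch on hypothetical data:
```
# Group 'a' (mean 1.5) is dropped entirely; group 'b' (mean 10) is kept in full
demo = pd.DataFrame({'group': ['a', 'a', 'b'], 'value': [1, 2, 10]})
print(demo.groupby('group').filter(lambda g: g['value'].mean() > 5))
```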
## Data filtration
As you noticed in the video lesson, you may need to filter your data for various reasons.
In this exercise, you will use filtering to select a specific part of our DataFrame:
- by the number of entries recorded in each day of the week
- by the mean amount of money the customers paid to the restaurant each day of the week
Instructions
1. Create a new DataFrame containing **only** the days when the count of `total_bill` is greater than 40.
2. From the `total_bill_40` DataFrame, select only the entries that have a mean `total_bill` greater than $20, grouped by day.
3. After applying the `.filter()` operation on `total_bill_20` in Step 2 in the Console, how many entries (rows) does the last DataFrame you created (`total_bill_20`) have?
```
# Filter the days where the count of total_bill entries is greater than 40
total_bill_40 = restaurant_data.groupby('day').filter(lambda x: x['total_bill'].count() > 40)
# Print the number of entries kept after filtering on the count
print('Number of entries on days with more than 40 recorded bills:', total_bill_40.shape[0])
# Select only the entries that have a mean total_bill greater than $20
total_bill_20 = total_bill_40.groupby('day').filter(lambda x : x['total_bill'].mean() > 20)
# Print days of the week that have a mean total_bill greater than $20
print('Days of the week that have a mean total_bill greater than $20:', total_bill_20.day.unique())
total_bill_20.shape
```
---
# Fix Pack Upgrade of the Db2 Data Management Console
Fix Pack updates are regularly available for the Db2 Data Management Console. These fix packs include both code fixes as well as new capabilities. Click [What's New in 3.1.1](https://www.ibm.com/support/pages/ibm-db2-data-management-console-version-31x-releases-new-features-and-enhancements) to see a list of the new features.
This notebook will walk you through an update of the IBM Db2 Data Management Console in this virtual machine from 3.1 to 3.1.1. You can find the full instructions in the [Db2 Data Management Console Knowledge Center](https://www.ibm.com/support/knowledgecenter/SS5Q8A_3.1.x/com.ibm.datatools.dsweb.ots.upgrade.doc/topics/upgrade_dmctolatestver.html)
The Db2 Console installed in this demonstration platform, is at the 3.1 level. While most notebooks in this Hands-On lab work with both 3.1 and 3.1.1 some require that you upgrade to 3.1.1 before running the lab.
Let's start by checking the version you are using now. You may want to arrange these instructions on the page so you can follow along as you move from the browser to the Db2 Console to an OS terminal and back.
1. Click http://localhost:11080/console and log in:
- Userid: db2inst1
- Password: db2inst1
2. Click the **D** icon at the very top right of the Db2 Console
3. Select **About**. You should see Version 3.1.0.0.
## Download the Latest Db2 Data Management Console 3.1.1
The first step is to download the latest fixpack from IBM Fix Central.
1. Click https://www.ibm.com/support/fixcentral/
2. Enter **Db2 Data Management Console** in the **Product Selector** field
3. Click **IBM Db2 Data Management Console** in the search list
4. Select **3.1.1.0** in the list of available versions
5. Select **Linux** in the list of Platforms
6. Click **Continue**
7. Click **Browse for fixes**
8. Click **Continue** to search for available images
9. Click the checkbox beside **3.1.1.0-ibm-datamgtconsole-linux** to select the download image
10. Click **Continue**
11. If you are not already logged into IBM, log in using your IBMid or create a new IBMid
12. Select **Download using your browser (HTTPS)**
13. Click **Continue**
14. Click **3.1.1.0-ibm-datamgtconsole-linux.tgz** to start the download. Depending on your network speed, this should take anywhere from 1 to 15 minutes.
## Stop the Db2 Data Management Console
To install the fix pack update you must first stop the Db2 Console.
1. Click the **Terminal** icon at the bottom left of the Linux screen
2. Enter **cd dmc** to navigate to the Db2 Console install directory
3. Enter **./bin/stop.sh** to stop the Db2 Console service
4. Enter **./bin/status.sh** to check the Db2 Console service status

## Extract the Db2 Console 3.1.1 Code
Now that the Db2 Console is stopped you can extract the new code into the install directory.
1. Click the **Files** icon at the bottom left of the screen
2. Select **Downloads**
3. You should see the original Db2 Data Management Console install file as well as the 3.1.1.0 fix pack you just downloaded.
4. Double click the **3.1.1.0-ibm-datamgtconsole-linux.tgz** file. The Archive Manager opens.
5. Double click the file icon in the Name column
6. Double click **ibm-datamgmtconsole**
7. Select all the files in the **ibm-datamgmtconsole** directory

8. Select **Extract** at the top left of the Archive Manager.
9. Click **Home**
10. Click **dmc**

11. Make sure that **Selected files** is checked under the Extract options
12. Make sure that **Keep directory structure** is also checked
13. Click **Extract** at the top right of the Archive Manager.
14. Click the **Terminal** icon at the bottom left of the Linux screen
15. Make sure you are in the **dmc** directory
16. Enter **ls -l** to see the creation time of the files in the directory. The setup.sh should have a date of **Jan 20**.
## Restart the Db2 Console
Now that the files have been extracted you can run the setup.sh script to update the Db2 Console and restart it.
1. Click the Terminal icon at the bottom left of the Linux screen.
2. Make sure you are in the dmc directory
3. Enter **./setup.sh**
4. Enter **1** to accept the License Terms. The installation program will add new tables, views and aliases to the repository database as required. Only new tables are added with 3.1.1.

There is a pause between when the dsweb server is starting and when it is ready.

To check the status of the Db2 Console service, run the **./bin/status.sh** command from the command line.

## Check the 3.1.1 Console Operation
Now that the code is updated and the Db2 Console service is restarted you can explore some of the new function.
#### First, clear the browser cache:
1. Click the Chrome web browser icon at the bottom left of the screen.
2. Close any existing Db2 Console tabs
3. Click the ellipsis menu at the top right of Chrome
4. Select **More tools**
5. Select **Clear browsing data**
6. Select **All time**
7. Select **Clear data**
8. Close the **Settings** tab
#### Now re-open the Db2 Console:
1. Click http://localhost:11080/console or paste the URL into the web browser and log in:
- Userid: db2inst1
- Password: db2inst1
2. Click the **D** icon at the very top right of the Db2 Console
3. Select **About**. You should see Version 3.1.1.0
4. Click the **Bell** icon at the top right side of the Db2 Console to see the new notification center.
5. Click the **Gear** icon at the top right of the Db2 Console to see the new SNMP and Email alert options.
6. Click the **?** icon to see the new help option
For a full list of what is new, click [What's New in 3.1.1](https://www.ibm.com/support/pages/ibm-db2-data-management-console-version-31x-releases-new-features-and-enhancements).
## Update the 3.1.1 Console to Support IFrame Embedding
One of the great features of the Db2 Console is the ability to use parts of the user interface as a microservice. In the 3.1 Db2 Console there are no restrictions on this capability; however, this left the console open to a possible clickjacking attack.
### Clickjacking
Clickjacking is a kind of security attack where a hacker uses an embedded IFrame to lure users into clicking on a button or link on another page when they were intending to click on the top-level page. Thus, the attacker is “hijacking” clicks meant for their page and routing them to another page, most likely owned by another application, domain, or both.
To avoid any possibility of this, the 3.1.1 Db2 Console turns off IFrame embedding by default.
You can still share links to webpages and save bookmarks to specific pages unchanged from 3.1.
### Safely allowing IFrame Embedding
If you want to use IFrame embedding in your own Jupyter notebooks you can turn off the IFrame restrictions. We only recommend this if you are securely running the Db2 Console and the Jupyter notebooks in a secure environment or as part of the Db2 Data Management Console Hands on Lab in the supplied Virtual Machine.
### Updating the WebSphere Liberty dsweb settings
To continue using IFrame embedding in the hands-on lab, you need to update the bootstrap settings in the WebSphere Liberty component of the Db2 Console:
#### Stop the Db2 Console
1. Run **./bin/stop.sh** in the Terminal from the **dmc** directory
#### Update the dsweb properties
1. Click the **Files** icon at the bottom left of the screen
2. Select **Home**
3. Select **dmc**, **wlp**, **usr**, **servers**, **dsweb**
4. Double-click **bootstrap.properties** to open the text editor
5. Remove **ui.http.response.append.header=Content-Security-Policy:DENY**
6. Click **Save**
#### Restart the Db2 Console
1. Run **./bin/startup.sh** in the Terminal from the **dmc** directory
#### Clear the browser cache:
1. Select **Clear browsing data** from the Chrome **More tools** menu
#### Credits: IBM 2019, 2020, Peter Kohlmann [kohlmann@ca.ibm.com]
|
github_jupyter
|
# Fix Pack Upgrade of the Db2 Data Management Console
Fix Pack updates are regularly available for the Db2 Data Management Console. These fix packs include both code fixes as well as new capabilities. Click [What's New in 3.1.1](https://www.ibm.com/support/pages/ibm-db2-data-management-console-version-31x-releases-new-features-and-enhancements) to see a list of the new features.
This notebook will walk you through an update of the IBM Db2 Data Management Console in this virtual machine from 3.1 to 3.1.1. You can find the full instructions in the [Db2 Data Management Console Knowledge Center](https://www.ibm.com/support/knowledgecenter/SS5Q8A_3.1.x/com.ibm.datatools.dsweb.ots.upgrade.doc/topics/upgrade_dmctolatestver.html)
The Db2 Console installed in this demonstration platform, is at the 3.1 level. While most notebooks in this Hands-On lab work with both 3.1 and 3.1.1 some require that you upgrade to 3.1.1 before running the lab.
Let's start by checking the Version you are using now. You may want to arrange these instructions on the page so you can follow along as you move from the browser, to the Db2 Console to a OS Terminal and back.
1. Click http://localhost:11080/console and log in:
- Userid: db2inst1
- Password: db2inst1
2. Click the **D** icon at the very top right of the Db2 Console
3. Select **About**. You should see Version 3.1.0.0.
## Download the Lastest Db2 Data Management Console 3.1.1
The first step is to download the latest fixpack from IBM Fix Central.
1. Click https://www.ibm.com/support/fixcentral/
2. Enter **Db2 Data Management Console** in the **Product Selector** field
3. Click **IBM Db2 Data Management Console** in the search list
4. Select **3.1.1.0** in the list of available versions
5. Select **Linux** in the list of Platforms
6. Click **Continue**
7. Click **Browse for fixes**
8. Click **Continue** to search for available images
9. Click the checkbox beside **3.1.1.0-ibm-datamgtconsole-linux** to select the download image
10. Click **Continue**
11. If you are not already logged into IBM, log in using your IBMid or create a new IBMid
12. Select **Download using your browser (HTTPS)**
13. Click **Continue**
14. Click **3.1.1.0-ibm-datamgtconsole-linux.tgz** to start the download. Depending on your network speed, this should take anywhere from 1 to 15 minutes.
## Stop the Db2 Data Management Console
To install the fix pack update you must first stop the Db2 Console.
1. Click the **Terminal** icon at the bottom left of the Linux screen
2. Enter **cd dmc** to navigate to the Db2 Console install directory
3. Enter **./bin/stop.sh** to stop the Db2 Console service
4. Enter **./bin/status.sh** to check the Db2 Console service status

## Extract the Db2 Console 3.1.1 Code
Now that the Db2 Console is stopped you can extract the new code into the install directory.
1. Click the **Files** icon at the bottom left of the screen
2. Select **Downloads**. You should see the original Db2 Data Management Consile install file as well as the 3.1.1.0 fixpack you just downloaded.
3. Double-click the **3.1.1.0-ibm-datamgtconsole-linux.tgz** file. The Archive Manager opens.
4. Double-click the file icon in the Name column
5. Double-click **ibm-datamgmtconsole**
6. Select all the files in the **ibm-datamgmtconsole** directory

7. Select **Extract** at the top left of the Archive Manager.
8. Click **Home**
9. Click **dmc**

10. Make sure that **Selected files** is checked under the Extract options
11. Make sure that **Keep directory structure** is also checked
12. Click **Extract** at the top right of the Archive Manager.
13. Click the **Terminal** icon at the bottom left of the Linux screen
14. Make sure you are in the **dmc** directory
15. Enter **ls -l** to see the creation time of the files in the directory. The setup.sh file should have a date of **Jan 20**.
## Restart the Db2 Console
Now that the files have been extracted you can run the setup.sh script to update the Db2 Console and restart it.
1. Click the Terminal icon at the bottom left of the Linux screen.
2. Make sure you are in the dmc directory
3. Enter **./setup.sh**
4. Enter **1** to accept the License Terms. The installation program will add new tables, views and aliases to the repository database as required. Only new tables are added with 3.1.1.

There is a pause between when the dsweb server starts and when it is ready.

To check the status of the Db2 Console service, run the **./bin/status.sh** command from the command line.

## Check the 3.1.1 Console Operation
Now that the code is updated and the Db2 Console service is restarted you can explore some of the new function.
#### First, clear the browser cache:
1. Click the Chrome web browser icon at the bottom left of the screen.
2. Close any existing Db2 Console tabs
3. Click the ellipsis menu at the top right of Chrome
4. Select **More tools**
5. Select **Clear browsing data**
6. Select **All time**
7. Select **Clear data**
8. Close the **Settings** tab
#### Now re-open the Db2 Console:
1. Click http://localhost:11080/console or paste the URL into the web browser and log in:
- Userid: db2inst1
- Password: db2inst1
2. Click the **D** icon at the very top right of the Db2 Console
3. Select **About**. You should see Version 3.1.1.0
4. Click the **Bell** icon at the top right side of the Db2 Console to see the new notification center.
5. Click the **Gear** icon at the top right of the Db2 Console to see the new SNMP and Email alert options.
6. Click the **?** icon to see the new help option
For a full list of what is new, click [What's New in 3.1.1](https://www.ibm.com/support/pages/ibm-db2-data-management-console-version-31x-releases-new-features-and-enhancements).
## Update the 3.1.1 Console to Support IFrame Embedding
One of the great features of the Db2 Console is the ability to use parts of the user interface as a microservice. In the 3.1 Db2 Console there are no restrictions on this capability. However, this left the console open to a possible clickjacking attack.
### Clickjacking
Clickjacking is a kind of security attack where a hacker uses an embedded IFrame to lure users into clicking a button or link on another page when they intended to click on the top-level page. The attacker thus “hijacks” clicks meant for one page and routes them to another page, most likely owned by another application, domain, or both.
To avoid any possibility of this, the 3.1.1 Db2 Console turns off IFrame embedding by default.
You can still share links to webpages and save bookmarks to specific pages unchanged from 3.1.
### Safely allowing IFrame Embedding
If you want to use IFrame embedding in your own Jupyter notebooks, you can turn off the IFrame restrictions. We only recommend this if you are running the Db2 Console and the Jupyter notebooks in a secure environment, or as part of the Db2 Data Management Console hands-on lab in the supplied virtual machine.
### Updating the WebSphere Liberty dsweb settings
To continue using IFrame embedding in the hands-on lab, you need to update the bootstrap settings in the WebSphere Liberty component of the Db2 Console:
#### Stop the Db2 Console
1. Run **./bin/stop.sh** in the Terminal from the **dmc** directory
#### Update the dsweb properties
1. Click the **Files** icon at the bottom left of the screen
2. Select **Home**
3. Select **dmc**, **wlp**, **usr**, **servers**, **dsweb**
4. Double-click **bootstrap.properties** to open the text editor
5. Remove **ui.http.response.append.header=Content-Security-Policy:DENY**
6. Click **Save**
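If you prefer to script this change instead of using the text editor, a small Python sketch like the following could remove the header setting. This is only a sketch; it assumes the Db2 Console is installed under `~/dmc`, as in this lab.
```python
from pathlib import Path

# Assumed install location for this lab; adjust if your Db2 Console lives elsewhere
props = Path.home() / "dmc" / "wlp" / "usr" / "servers" / "dsweb" / "bootstrap.properties"

# Drop the Content-Security-Policy header line that blocks IFrame embedding
lines = props.read_text().splitlines()
kept = [line for line in lines
        if not line.startswith("ui.http.response.append.header=Content-Security-Policy")]
props.write_text("\n".join(kept) + "\n")
```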
#### Restart the Db2 Console
1. Run **./bin/startup.sh** in the Terminal from the **dmc** directory
#### Clear the browser cache:
1. Select **Clear browsing data** from the Chrome **More tools** menu
#### Credits: IBM 2019, 2020, Peter Kohlmann [kohlmann@ca.ibm.com]
| 0.68637 | 0.700383 |
# Memory caching
In the [Workflow notebook](basic_worflow.ipynb) you learnt about ``Workflows``, which specify processing as an execution graph and offer efficient recomputing. However, sometimes you might want to use ``Interfaces``, which give better control over the execution of each step and can easily be combined with any Python code. Unfortunately, ``Interfaces`` do not offer any caching, so you always recompute your task from scratch.
A solution to this problem is the ``caching`` mechanism supported by Nipype. Nipype caching relies on the ``Memory`` class and creates an execution context that is bound to a disk cache.
When you instantiate the class you should provide a ``base_dir``; an additional subdirectory called ``nipype_mem`` will be created inside it automatically.
```
from nipype.caching import Memory
mem = Memory(base_dir='/output/workingdir')
```
If we want to enable caching for the ``BET`` interface, we can use the ``cache`` method, which takes an interface class as an argument.
```
from nipype.interfaces import fsl
bet_mem = mem.cache(fsl.BET)
```
Now ``bet_mem`` can be called as a function, with the inputs of the ``BET`` interface as the function arguments. Those inputs are given as keyword arguments, bearing the same names as in the input spec of the interface.
```
bet_mem(in_file="/data/ds000114/sub-02/ses-test/anat/sub-02_ses-test_T1w.nii.gz",
out_file="/output/sub-02_T1w_brain.nii.gz",
mask=True)
```
As you can see, the ``bet`` command was run as expected. We can now check the content of the cache directory:
```
! ls -l /output/workingdir/nipype_mem
```
A special subdirectory for our interface has been created. Let's try to run this command again:
```
bet_mem(in_file="/data/ds000114/sub-02/ses-test/anat/sub-02_ses-test_T1w.nii.gz",
out_file="/output/sub-02_T1w_brain.nii.gz",
mask=True)
```
Now, the ``bet`` command was not run, but precomputed outputs were collected!
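If you also want to use the generated files programmatically, the call returns a results object whose ``outputs`` attribute lists them. A minimal sketch, assuming the same inputs as above:
```
result = bet_mem(in_file="/data/ds000114/sub-02/ses-test/anat/sub-02_ses-test_T1w.nii.gz",
                 out_file="/output/sub-02_T1w_brain.nii.gz",
                 mask=True)

# Paths of the (possibly precomputed) output files
print(result.outputs)
```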
If you created cached results that you're not going to reuse, you can use [Memory.clear_runs_since()](http://nipy.org/nipype/0.10.0/users/caching_tutorial.html#nipype.caching.Memory.clear_runs_since) to flush the cache. Note that if you call the method without any arguments, it removes results last used before the current date, so it will keep the results we've just calculated. Let's check:
```
mem.clear_runs_since()
bet_mem(in_file="/data/ds000114/sub-02/ses-test/anat/sub-02_ses-test_T1w.nii.gz",
out_file="/output/sub-02_T1w_brain.nii.gz",
mask=True)
```
As you can see, Nipype again collected the old results. If we want to remove everything, we have to pass a future date:
```
mem.clear_runs_since(year=2020, month=1, day=1)
```
You can also check [Memory.clear_runs_since()](http://nipy.org/nipype/0.10.0/users/caching_tutorial.html#nipype.caching.Memory.clear_runs_since).
|
github_jupyter
|
from nipype.caching import Memory
mem = Memory(base_dir='/output/workingdir')
from nipype.interfaces import fsl
bet_mem = mem.cache(fsl.BET)
bet_mem(in_file="/data/ds000114/sub-02/ses-test/anat/sub-02_ses-test_T1w.nii.gz",
out_file="/output/sub-02_T1w_brain.nii.gz",
mask=True)
! ls -l /output/workingdir/nipype_mem
bet_mem(in_file="/data/ds000114/sub-02/ses-test/anat/sub-02_ses-test_T1w.nii.gz",
out_file="/output/sub-02_T1w_brain.nii.gz",
mask=True)
mem.clear_runs_since()
bet_mem(in_file="/data/ds000114/sub-02/ses-test/anat/sub-02_ses-test_T1w.nii.gz",
out_file="/output/sub-02_T1w_brain.nii.gz",
mask=True)
mem.clear_runs_since(year=2020, month=1, day=1)
| 0.310067 | 0.92079 |
To run this example locally, execute: `ploomber examples -n spec-api-python`.
To start a free, hosted JupyterLab: [](https://mybinder.org/v2/gh/ploomber/binder-env/main?urlpath=git-pull%3Frepo%3Dhttps%253A%252F%252Fgithub.com%252Fploomber%252Fprojects%26urlpath%3Dlab%252Ftree%252Fprojects%252Fspec-api-python%252FREADME.ipynb%26branch%3Dmaster)
Found an issue? [Let us know.](https://github.com/ploomber/projects/issues/new?title=spec-api-python%20issue)
Have questions? [Ask us anything on Slack.](http://community.ploomber.io/)
# Your first Python pipeline
<!-- start description -->
Introductory tutorial to learn the basics of Ploomber.
<!-- end description -->
**Note:** This is intended for a quick and interactive experience. If you want
to learn about Ploomber's core concepts and design rationale, go to the
[the next tutorial](https://ploomber.readthedocs.io/en/stable/get-started/basic-concepts.html).
## Introduction
Ploomber allows you to build modular and maintainable pipelines. A pipeline (or **DAG**) is simply a group of tasks with a particular execution order, where subsequent (or **downstream**) tasks use previous (or **upstream**) tasks as inputs. This example pipeline contains three tasks: the first task, `raw.py`, gets some data, `clean.py` cleans it, and `plot.py` generates a visualization:
```
%%bash
ls *.py
```
**Note:** These tasks are Python scripts, but you can use functions, notebooks,
and SQL scripts. An upcoming guide explains how other types of tasks work.
## Integration with Jupyter
Ploomber integrates with Jupyter. If you open the scripts inside the
`jupyter notebook` app, they will render as notebooks. If you're using `jupyter lab`, you need to right click -> open with -> Notebook as depicted below:

Along with the `*.py` files, there is a `pipeline.yaml` file where we declare which files we use as tasks:
```yaml
# Content of pipeline.yaml
tasks:
- source: raw.py
product:
nb: output/raw.ipynb
data: output/data.csv
- source: clean.py
product:
nb: output/clean.ipynb
data: output/clean.csv
- source: plot.py
product: output/plot.ipynb
```
**Note:** The `pipeline.yaml` file is optional, but it gives you more flexibility.
[Click here](https://github.com/ploomber/projects/tree/master/templates/spec-api-directory) to see an example without a `pipeline.yaml` file.
Let's plot the pipeline:
```
%%bash
ploomber plot
from IPython.display import Image
Image(filename='pipeline.png')
```
You can see that our pipeline has a defined execution order: `raw` -> `clean` -> `plot`.
Let's now execute the `status` command, which gives us an overview of the pipeline:
```
%%bash
ploomber status
```
We can see a summary of each task: last execution date, if it's outdated (i.e., source code changed since previous execution), product (output files), documentation (if any), and the source code location.
## How is execution order determined?
Ploomber infers the pipeline structure from your code. For example, to
clean the data, we must get it first; hence, we declare the following in `clean.py`:
~~~python
# this tells Ploomber to execute 'raw' task before 'clean'
upstream = ['raw']
~~~
Once we finish cleaning the data, we must save it somewhere (an output is known
as a **product**). Products can be files or SQL relations. Our current example
only generates files.
To specify where to save the output of each task, we use the `product`
key. For example, the `raw` task definition looks like this:
~~~yaml
- source: raw.py
product:
nb: output/raw.ipynb
data: output/data.csv
~~~
Scripts automatically generate a copy of themselves in Jupyter
notebook format (`.ipynb`). That's why we see a notebook in the `product`
dictionary (under the `nb` key). Generating a copy on each execution allows us to create standalone reports for each task, no need to write extra code to save our charts! Notebooks as outputs are an essential concept: `raw.py` is part of the pipeline's
source code, but `output/raw.ipynb` is not (it's an artifact generated by the source code).
If you don't want to generate output notebooks, you can use Python functions
as tasks. Our upcoming tutorial goes deeper into the different types of tasks.
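For reference, a function-based task is just a Python function that receives a `product` argument (and `upstream` when it depends on other tasks) and writes its output to that location. A minimal sketch follows; the `tasks.py` module is hypothetical and not part of this example project, and `pipeline.yaml` would reference it as `source: tasks.raw`:
~~~python
# tasks.py (hypothetical module, shown only as a sketch)
import pandas as pd

def raw(product):
    """Get some data and save it to the product path declared in pipeline.yaml."""
    df = pd.DataFrame({'x': range(10)})
    df.to_csv(str(product), index=False)
~~~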
## Building the pipeline
Let's build the pipeline:
```
%%bash
# takes a few seconds to finish
mkdir output
ploomber build
```
This pipeline saves all the output in the `output/` directory; we have a few
data files:
```
%%bash
ls output/*.csv
```
And a notebook for each script:
```
%%bash
ls output/*.ipynb
```
## Updating the pipeline
Quick experimentation is essential to analyze data. Ploomber allows
you to iterate faster and run more experiments.
Say you found a problematic column and need to add a few more lines to your `clean.py` script. Since `raw.py` does not depend on `clean.py`, we don't have to rerun it. However, if we modify `clean.py` and want to bring our results up-to-date, we must run `clean.py`, and then `plot.py`, in that order. To save you valuable time, Ploomber keeps track of those dependencies and only reruns outdated tasks.
To see how it works, make some changes to the `clean.py` script, then build again:
```
%%bash
# takes a few seconds to finish
ploomber build
```
You'll see that `raw.py` didn't run because it was not affected by the change!
Incremental builds are a powerful feature: you can open any of the `.py` files in Jupyter, edit them interactively (as if they were notebooks), then call `ploomber build` to quickly get your results up-to-date.
## Where to go from here
That's it; this concludes our first tutorial. This tutorial shows a bit of what Ploomber can do for you. However, there are many other features to discover: task parallelization, parametrization, execution in the cloud, among others.
Want to dig deeper into Ploomber's core concepts and design rationale? Check out [the upcoming
tutorial](https://ploomber.readthedocs.io/en/stable/get-started/basic-concepts.html).
Have questions? [Ask us anything on Slack](http://community.ploomber.io/) or [open an issue](https://github.com/ploomber/ploomber/issues/new?title=Question) on GitHub.
Do you like our project? Show your support with a [star on GitHub](https://github.com/ploomber/ploomber)!
|
github_jupyter
|
%%bash
ls *.py
# Content of pipeline.yaml
tasks:
- source: raw.py
product:
nb: output/raw.ipynb
data: output/data.csv
- source: clean.py
product:
nb: output/clean.ipynb
data: output/clean.csv
- source: plot.py
product: output/plot.ipynb
%%bash
ploomber plot
from IPython.display import Image
Image(filename='pipeline.png')
%%bash
ploomber status
%%bash
# takes a few seconds to finish
mkdir output
ploomber build
%%bash
ls output/*.csv
%%bash
ls output/*.ipynb
%%bash
# takes a few seconds to finish
ploomber build
| 0.386763 | 0.945601 |
# Main thesis regressions
Be aware that this will not run unless you have the data stored in the right place. If you are interested, please contact the author.
```
from collections import OrderedDict
from pathlib import Path
from pprint import pprint
import warnings
import linearmodels
import numpy as np
import pandas as pd
import plotly_express as px
import statsmodels.api as sm
from scipy.stats import anderson_ksamp
from tqdm.notebook import tqdm
from load_daily_data import load_frag_data, load_market_quality_statistics, load_copustat
warnings.filterwarnings('ignore')
pd.set_option('display.max_columns', None)
```
# Load data
## Fragmentation data
```
frag = load_frag_data()
# filter
print(frag.shape)
print("First date: \t", frag.index.min())
print("Last date: \t", frag.index.max())
frag.set_index("isin", append=True, inplace=True)
```
## Compustat data
```
compustat = load_copustat()
```
## Market quality data
```
filename = "20200129_09-02-47_liquidity_stats.csv"
filepath = Path(f"../statistics/daily_liquidity/{filename}")
assert filepath.is_file()
# load stats
daily_stats = load_market_quality_statistics(filepath=filepath)
# append "isin" to index
daily_stats.set_index("isin", append=True, inplace=True)
print(daily_stats.shape)
print("First date: \t", daily_stats.index.get_level_values("date").min())
print("Last date: \t", daily_stats.index.get_level_values("date").max())
daily_stats.rename(columns={"num_transactions": "num_orders_aggr"}, inplace=True)
daily_stats.rename(columns={"num_orders_total": "num_orders_passive"}, inplace=True)
daily_stats["quoted_rel_spread_bps_time_weighted"] *= 100
daily_stats["eff_rel_spread_bps_weighted"] *= 100
```
## Combine the three dataframes into one
```
# combine
stats = daily_stats.join(frag, how="left", lsuffix="_IMI", sort=False)
stats = stats.join(compustat, how="left", rsuffix="_compu", sort=False)
# first level of index needs to be entity variable
stats = stats.reset_index("date").set_index("date", append=True)
print("First date: \t", stats.index.get_level_values("date").min())
print("Last date: \t", stats.index.get_level_values("date").max())
print(stats.shape)
```
# Create quartiles
### By turnover
```
# condition = stats.index.get_level_values("date") < pd.Timestamp("2019-07-01")
turnover_stats = stats["turnover"].reset_index("isin").groupby("isin").median()
lower_quartile = turnover_stats["turnover"].quantile(0.25)
median = turnover_stats["turnover"].median()
upper_quartile = turnover_stats["turnover"].quantile(0.75)
conditions = {"3 bottom turnover": turnover_stats["turnover"] < lower_quartile,
"2 low turnover": (lower_quartile <= turnover_stats["turnover"]) & (turnover_stats["turnover"] < median),
"1 high turnover": (median <= turnover_stats["turnover"]) & (turnover_stats["turnover"] < upper_quartile),
"0 top turnover": upper_quartile <= turnover_stats["turnover"]
}
stats.reset_index("date", inplace=True)
for quartile, condition in conditions.items():
isins = turnover_stats[condition].index
stats.loc[isins, "turnover_category"] = quartile
stats.set_index("date", append=True, inplace=True)
num_stocks = stats["turnover_category"].reset_index().groupby("turnover_category")["isin"].nunique()
print(f"Total number of stocks {num_stocks.sum()}")
num_stocks
```
### Excluding low turnover stocks?
```
# exclude bottom turnover from sample?
stats = stats[~stats["turnover_category"].isin(["3 bottom turnover", "2 low turnover"])]
num_stocks = stats["turnover_category"].reset_index().groupby("turnover_category")["isin"].nunique()
print(f"Total number of stocks {num_stocks.sum()}")
num_stocks
relevant_isins = stats.index.get_level_values("isin").unique()
relevant_isins = relevant_isins.to_frame().reset_index(drop=True)
# # Export isins to csv?
# relevant_isins.to_csv("relevant_isins.csv", index=False)
```
### Market share quartiles
```
frag_measure = "market_share" # "non_fragmentation_index"
frag_per_isin = stats.groupby(["after_nonequivalence", "isin"])[frag_measure].quantile(0.5)
frag_per_isin = frag_per_isin.unstack("after_nonequivalence")
frag_per_isin[frag_measure] = frag_per_isin[True] - frag_per_isin[False]
frag_per_isin.drop(columns=[False, True], inplace=True)
condition = stats.index.get_level_values("date") < pd.Timestamp("2019-07-01")
frag_per_isin = stats.loc[condition, [frag_measure]].reset_index("isin")
frag_per_isin = frag_per_isin.groupby(["isin"]).quantile(0.50)
# # Option 1: simple
# # a stock is not fragmented, if on more than 50% of all trading days, there was no trading on other venues (see cell above)
# nonfragmentation = frag_per_isin[frag_measure] == 1
# frag_per_isin.loc[nonfragmentation, "fragmentation"] = "not fragmented"
# frag_per_isin.loc[~nonfragmentation, "fragmentation"] = "fragmented"
# Option 2: by quartiles
lower_quartile = frag_per_isin[frag_measure].quantile(0.25)
median = frag_per_isin[frag_measure].median()
upper_quartile = frag_per_isin[frag_measure].quantile(0.75)
conditions = {
"Q1": frag_per_isin[frag_measure] < lower_quartile,
"Q2": (lower_quartile <= frag_per_isin[frag_measure]) & (frag_per_isin[frag_measure] < median),
"Q3": (median <= frag_per_isin[frag_measure]) & (frag_per_isin[frag_measure] < upper_quartile),
"Q4": upper_quartile <= frag_per_isin[frag_measure],
}
for fragmentation, condition in conditions.items():
frag_per_isin.loc[condition, "fragmentation"] = fragmentation
frag_per_isin["fragmentation"].value_counts()
# left join to stats
stats = stats.join(frag_per_isin["fragmentation"], on="isin")
# showing those isin's that did not have 375 observations
num_dates = stats.reset_index().groupby(["fragmentation", "isin"])["date"].nunique()
num_dates[num_dates != 375]
condition = stats.index.get_level_values("date") < pd.Timestamp("2019-07-01")
num_stocks = stats.reset_index().groupby(["fragmentation"])[["isin"]].nunique() # .describe()
print(f"Total number of stocks {num_stocks['isin'].sum()}")
num_stocks
# remember: groups can change over time, that's why there are more stocks than total above
stats.reset_index().groupby(["group", "fragmentation"])[["isin"]].nunique()
stats.reset_index().groupby(["fragmentation", "turnover_category", "group"])[["isin"]].nunique()
stats[condition].reset_index().groupby(["fragmentation"])[[frag_measure]].describe()
stats.reset_index().groupby(["after_nonequivalence"])[["isin"]].describe()
```
## Market Cap variable
```
stats["market_cap"] = stats["shares_outstanding"] * stats["price_close"]
market_cap_average_log = np.log(stats.groupby("isin")["market_cap"].mean())
market_cap_average_log.name = "market_cap_average_log"
stats = stats.join(market_cap_average_log)
(stats.reset_index().groupby(["fragmentation"])[["market_cap_average_log"]].describe()).round(2)
```
## Fragmentation table
```
table = list()
for measure in ("market_share", "lit_frag", "market_cap", "turnover"):
descriptive = stats.reset_index().groupby(["fragmentation"])[[measure]].describe()
if measure == "market_cap":
descriptive /= 1e6
descriptive = descriptive.applymap("{:.0f}".format)
elif measure == "turnover":
descriptive /= 1e6
descriptive = descriptive.applymap("{:.1f}".format)
else:
descriptive = descriptive.applymap("{:.2f}".format)
descriptive = descriptive.loc[:, pd.IndexSlice[: , ["mean", "50%", "std"]]]
table.append(descriptive)
table = pd.concat(table, axis=1)
table.rename(
columns={
"market_share": "SIX market share",
"lit_frag": "LitFrag",
"market_cap": "Market Cap",
"turnover": "Turnover",
"mean": "Mean",
"std": "StDev",
"50%": "Median"
},
inplace=True,
)
table = table.T.reindex(["Mean", "Median", "StDev"], level=1).T
num_stocks = stats.reset_index().groupby("fragmentation")["isin"].nunique()
num_stocks = num_stocks.rename("Num stocks").to_frame()
num_stocks.columns = pd.MultiIndex.from_product([num_stocks.columns, ['']])
table = table.join(num_stocks)
for idx in range(4):
idx += 1
table.loc[f"Q{idx}", "Fragmentation"] = f"Quartile {idx}"
table.set_index("Fragmentation", inplace=True)
table = table[["Num stocks", "SIX market share", "LitFrag", "Turnover", "Market Cap"]]
table
print(table.to_latex())
```
## Time variables & dummies
```
# stats.loc[stats["fragmentation"].isin(["3_little_fragmented", "4_not_fragmented"]), "frag_dummy"] = 0
# stats["frag_dummy"].fillna(value=1, inplace=True)
# stats["frag_dummy"] = stats["frag_dummy"].astype(int)
# stats.reset_index().groupby(["frag_dummy"])[["isin"]].describe()
# stats[stats["frag_dummy"] == 1].index.get_level_values("isin").unique().to_frame().reset_index(drop=True).to_csv("frag_isins.csv", index=False)
dates = stats.index.get_level_values("date")
stats.loc[7 <= dates.month, "half_year"] = "H2"
stats["half_year"].fillna(value="H1", inplace=True)
stats["semester"] = dates.year.astype("str") + "_" + stats["half_year"]
stats["dummy_2019"] = dates.year == 2019
```
## Calculate daily returns & Amihud 2002
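As a quick reference, the daily Amihud (2002) illiquidity proxy computed in the cell below is the absolute close-to-close return divided by the day's turnover, rescaled for readability:

$$\text{Amihud}_{i,t} = \frac{\lvert r_{i,t} \rvert}{\text{turnover}_{i,t}} \times 10^{9}$$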
```
stats.sort_index(inplace=True)
stats["abs_simple_returns"] = np.abs(stats["price_close"] / stats["price_close"].groupby("isin").shift(1) - 1)
stats["amihud"] = stats["abs_simple_returns"] / stats["turnover"] * 1e9 # _simple_simple
stats[["amihud", "semester", "fragmentation"]].groupby(["fragmentation", "semester"]).mean()
# plot single measure for a quartile
measure = "eff_rel_spread_bps_weighted"
plot_data = stats.loc[stats["fragmentation"] == "Q4", measure].reset_index().dropna()
# px.scatter(plot_data, x="date", y=measure, color="isin")
isin = "CH0012549785"
# measures = ["price_mean", "price_close", "price_log", "price_reciprocal"]
measures = ["quoted_rel_spread_bps_time_weighted", "eff_rel_spread_bps_weighted", "min_tick_size"]
# measures = ["market_cap", "market_cap_average_log", "price_close", "shares_outstanding"]
plot_data = stats.loc[isin, measures]
plot_data = plot_data.stack().reset_index().rename(columns={"level_1": "measure", 0: "value"})
# px.scatter(plot_data, x="date", y="value", color="measure")
```
# Panel Regressions
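In compact form, the specification estimated in this section is roughly

$$y_{i,t} = \alpha_i + \beta\,\text{After}_t + \gamma' X_{i,t} + \varepsilon_{i,t},$$

where $y_{i,t}$ is a market-quality measure (log-transformed for count, depth, and value measures), $\alpha_i$ are stock fixed effects, $\text{After}_t$ is the `after_nonequivalence` dummy, and $X_{i,t}$ collects the controls (`VSMI`, `min_tick_size`, `price_log`). Standard errors are clustered by both stock and date.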
## Define regressions
```
def run_panel_regression(
data: pd.DataFrame,
measures: list,
control_variables: list,
entity_effects: bool,
time_effects: bool
):
detailed_results = OrderedDict()
for idx, measure in enumerate(measures):
if measure.startswith(("time", "depth", "num", "message_counts", "value")) and not measure.endswith("percent"):
dependent = np.log(data[measure])
# measure = measure + "_log"
else:
dependent = data[measure]
if measure == "amihud":
control_variables = [var for var in exog_vars if var not in ["log_turnover", "RV_slow"]]
elif measure == "RV_slow" or measure == "VSMI":
control_variables = [var for var in exog_vars if var not in ["VSMI", "RV_slow"]]
elif measure in exog_vars:
control_variables = [var for var in exog_vars if var != measure]
else:
control_variables = exog_vars
exogenous = sm.add_constant(data[control_variables])
model = linearmodels.PanelOLS(dependent=dependent,
exog=exogenous,
entity_effects=entity_effects,
time_effects=time_effects,
)
try:
result = model.fit(cov_type='clustered',
cluster_entity=True,
cluster_time=True,
)
except Exception as exception:
print(measure)
print(exception)
continue
# store the result
detailed_results[measure] = result
return detailed_results
def deep_dive_coef(detailed_results, variable: str):
coef_results = pd.DataFrame(columns=["param", "lower", "upper", "tstat", "pvalue"]) # , "lower", "upper"
for measure, result in detailed_results.items():
param = result.params[variable]
lower, upper = result.conf_int().loc[variable]
tstat = result.tstats[variable]
pvalue = result.pvalues[variable]
coef_results.loc[measure] = (param, lower, upper, tstat, pvalue) # , lower, upper
return coef_results
def run_ols(data, measures, exog_vars):
detailed_results = OrderedDict()
for idx, measure in enumerate(measures):
if measure == "amihud":
control_variables = [var for var in exog_vars if var not in ["log_turnover", "RV_slow"]]
elif measure == "RV_slow" or measure == "VSMI":
control_variables = [var for var in exog_vars if var not in ["VSMI", "RV_slow"]]
elif measure in exog_vars:
control_variables = [var for var in exog_vars if var != measure]
else:
control_variables = exog_vars
exog = sm.add_constant(data[control_variables])
if measure.startswith(("time", "depth", "num", "message_counts", "value")) and not measure.endswith("percent"):
endog = np.log(data[measure])
else:
endog = data[measure]
model = linearmodels.PooledOLS(endog, exog)
result = model.fit(
cov_type='clustered',
cluster_entity=True,
cluster_time=True,
)
# store the result
detailed_results[measure] = result
return detailed_results
def highlight_lower_than(pvalue):
if pvalue < 0.01:
color = "navajowhite" # "darkgrey"
# output = "{:.3f} *".format(value)
elif pvalue < 0.05:
color = "blanchedalmond" # "silver"
elif pvalue < 0.1:
color = "cornsilk" # "gainsboro"
else:
color = None
return f"background-color: {color}"
def highlight_significance(data, pvalues):
background_colors = pvalues.applymap(highlight_lower_than)
return background_colors
def font_color(value):
color = 'red' if value < 0 else 'black'
return f"color: {color}"
def display_results(combined_results):
params = combined_results["param"]
pvalues = combined_results["pvalue"]
styled = params.round(3).style.applymap(font_color).apply(highlight_significance, pvalues=pvalues, axis=None)
return styled
def convert_to_significance(pvalue):
if pvalue < 0.01:
return "***"
elif pvalue < 0.05:
return "**"
    elif pvalue < 0.1:
return "*"
else:
return ""
def format_pvalues(series):
return series.apply(lambda val: val.apply(convert_to_significance))
def format_stars(table, precision=3):
lower = table[["lower"]].round(precision).astype(str)
lower.columns = lower.columns.droplevel()
upper = table[["upper"]].round(precision).astype(str)
upper.columns = upper.columns.droplevel()
confidence = "[" + lower + ", " + upper + "]"
confidence.columns = pd.MultiIndex.from_product([['conf'], confidence.columns])
format_num = "{:." + f"{precision}" + "f}"
params = table["param"].applymap(lambda num: format_num.format(num))
pvalues = table["pvalue"]
tstats = table[["tstat"]].applymap(lambda num: "(" + format_num.format(num) + ")")
params = pvalues.applymap(convert_to_significance) + params
params.columns = pd.MultiIndex.from_product([['coef'], params.columns])
formatted = pd.concat([params, tstats, confidence])
formatted.columns.rename("coef_type", level=0, inplace=True)
formatted = formatted.stack("coef_type")
formatted.columns.rename("frag_quartile", inplace=True)
formatted = formatted.reindex(sorted(formatted.columns), axis=1)
formatted.sort_values(by=["measure", "coef_type"], ascending=True, inplace=True)
return formatted
liquidity_measures = [
'quoted_rel_spread_bps_time_weighted',
'eff_rel_spread_bps_weighted',
'depth_time_weighted_average',
]
amihud_turnover_measures = ["log_turnover", "RV_slow", "amihud"]
counts_measures = measures = [
'AT_proxy',
'num_orders_aggr',
'num_orders_passive',
'num_orders_deleted',
'num_orders_filled',
'value_entered_mean',
'value_entered_median',
'value_entered_total',
'value_filled_total',
]
all_measures = liquidity_measures + amihud_turnover_measures + counts_measures
measures = all_measures
control_vars = [
# "RV_slow",
"VSMI", # Riordan & Storkenmaier 2012 JFM, p.427, quotes Hendershott & Moulton 2011 JFM, p.583
"min_tick_size",
"price_log",
]
explaining_variable = "after_nonequivalence" # "dummy_2019"
exog_vars = [explaining_variable] + control_vars
exog_vars
```
## Run the regression
```
detailed_results = dict()
coef_results = dict()
conditions = {
"": pd.Series(True, index=stats.index), # all_
# "2019_only_": stats.index.get_level_values("date").year == 2019,
# "H2_only_": stats["half_year"] == "H2",
# "before_": stats.index.get_level_values("date") < pd.Timestamp("2019-07-01")
}
for condition_name, condition in conditions.items():
subset = stats[condition]
# # Full sample
# regression_name = f"{condition_name}Full sample"
# detailed_result = run_panel_regression(subset, measures, exog_vars, entity_effects=True, time_effects=False)
# detailed_results[regression_name] = detailed_result
# coef_result = deep_dive_coef(detailed_result, explaining_variable)
# coef_results[regression_name] = coef_result
# Per fragmentation quartile
for frag_dummy, data in tqdm(subset.groupby("fragmentation")):
regression_name = f"{condition_name}{frag_dummy}"
detailed_result = run_panel_regression(data, measures, exog_vars, entity_effects=True, time_effects=False)
detailed_results[regression_name] = detailed_result
coef_result = deep_dive_coef(detailed_result, explaining_variable)
coef_results[regression_name] = coef_result
```
### Create the tables
```
combined = pd.concat(coef_results)
combined.index.set_names(["fragmentation", "measure"], inplace=True)
combined = combined.unstack("fragmentation")
combined.columns.set_names(["coef_type", "fragmentation"], inplace=True)
combined = combined.reindex(combined.columns.sortlevel(level="fragmentation")[0], axis=1)
# Define here which variables we'd like to see
subset = liquidity_measures + amihud_turnover_measures # counts_measures / liquidity_measures / amihud_turnover_measures
subset = combined.loc[subset].copy()
export_this = format_stars(subset, precision=2)
export_this.reset_index("coef_type", inplace=True)
export_this["coef_type"] = export_this["coef_type"].astype("category")
export_this["coef_type"] = export_this["coef_type"].cat.reorder_categories(["coef", "tstat", "conf"], ordered=True)
export_this = export_this.sort_values(["measure", "coef_type"]).drop(columns="coef_type")
export_this.rename(
index={
"quoted_rel_spread_bps_time_weighted": "QSpread",
"eff_rel_spread_bps_weighted": "ESpread",
"depth_time_weighted_average": "lnDepth",
"AT_proxy": "AT_proxy",
"num_orders_aggr":"Num aggressive Orders",
"num_orders_deleted": "Num deleted Orders",
"num_orders_filled": "Num filled Orders",
"num_orders_passive": "Num passive Orders",
"value_entered_total": "Log Volume Entered",
"value_filled_total": "Log Volume Filled",
},
columns={col: "Quartile " + col[-1] for col in export_this.columns},
inplace=True,
)
export_this
print(export_this.to_latex())
display_results(combined)
measure = measures[0]
pprint(measures)
print(f"\nSelected: {measure}")
samples = combined.columns.get_level_values("fragmentation").unique().tolist()
regr_table = linearmodels.panel.compare([detailed_results.get(sample).get(measure) for sample in samples], precision="pvalues")
regr_table
```
# OLS with stock-level controls
Analogous to Riordan & Storkenmaier (2012) and Hendershott & Moulton (2011).
This gives similar results to the panel regressions above.
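The pooled specification is roughly

$$y_{i,t} = \alpha + \beta\,\text{After}_t + \gamma' X_{i,t} + \delta\,\overline{\ln \text{MarketCap}}_i + \varepsilon_{i,t},$$

where the time-invariant average log market cap replaces the stock fixed effects; standard errors are again clustered by stock and date.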
```
if "market_cap_average_log" not in control_vars:
control_vars += ["market_cap_average_log"]
exog_vars = [explaining_variable] + control_vars
exog_vars
detailed_results = dict()
coef_results = dict()
conditions = {
"": pd.Series(True, index=stats.index), # all_
# "2019_only_": stats.index.get_level_values("date").year == 2019,
# "H2_only_": stats["half_year"] == "H2",
# "before": stats.index.get_level_values("date") < pd.Timestamp("2019-07-01")
}
for condition_name, condition in tqdm(conditions.items()):
subset = stats[condition]
# # Full sample
# regression_name = f"{condition_name}Full sample"
# detailed_result = run_panel_regression(subset, measures, exog_vars, entity_effects=True, time_effects=False)
# detailed_results[regression_name] = detailed_result
# coef_result = deep_dive_coef(detailed_result, explaining_variable[0])
# coef_results[regression_name] = coef_result
# Per fragmentation quartile
for frag_dummy, data in subset.groupby("fragmentation"):
regression_name = f"{condition_name}{frag_dummy}"
detailed_result = run_ols(data, measures, exog_vars)
detailed_results[regression_name] = detailed_result
coef_result = deep_dive_coef(detailed_result, explaining_variable)
coef_results[regression_name] = coef_result
combined = pd.concat(coef_results)
combined.index.set_names(["fragmentation", "measure"], inplace=True)
combined = combined.unstack("fragmentation")
combined.columns.set_names(["coef_type", "fragmentation"], inplace=True)
combined = combined.reindex(combined.columns.sortlevel(level="fragmentation")[0], axis=1)
export_this = format_stars(combined, precision=3)
# print(export_this.to_latex(sparsify=True))
export_this
display_results(combined)
pprint(measures)
measure = measures[0]
print(f"\nSelected: {measure}")
samples = combined.columns.get_level_values("fragmentation").unique().tolist()
linearmodels.panel.compare([detailed_results.get(sample).get(measure) for sample in samples], precision="pvalues")
```
|
github_jupyter
|
from collections import OrderedDict
from pathlib import Path
from pprint import pprint
import warnings
import linearmodels
import numpy as np
import pandas as pd
import plotly_express as px
import statsmodels.api as sm
from scipy.stats import anderson_ksamp
from tqdm.notebook import tqdm
from load_daily_data import load_frag_data, load_market_quality_statistics, load_copustat
warnings.filterwarnings('ignore')
pd.set_option('display.max_columns', None)
frag = load_frag_data()
# filter
print(frag.shape)
print("First date: \t", frag.index.min())
print("Last date: \t", frag.index.max())
frag.set_index("isin", append=True, inplace=True)
compustat = load_copustat()
filename = "20200129_09-02-47_liquidity_stats.csv"
filepath = Path(f"../statistics/daily_liquidity/{filename}")
assert filepath.is_file()
# load stats
daily_stats = load_market_quality_statistics(filepath=filepath)
# append "isin" to index
daily_stats.set_index("isin", append=True, inplace=True)
print(daily_stats.shape)
print("First date: \t", daily_stats.index.get_level_values("date").min())
print("Last date: \t", daily_stats.index.get_level_values("date").max())
daily_stats.rename(columns={"num_transactions": "num_orders_aggr"}, inplace=True)
daily_stats.rename(columns={"num_orders_total": "num_orders_passive"}, inplace=True)
daily_stats["quoted_rel_spread_bps_time_weighted"] *= 100
daily_stats["eff_rel_spread_bps_weighted"] *= 100
# combine
stats = daily_stats.join(frag, how="left", lsuffix="_IMI", sort=False)
stats = stats.join(compustat, how="left", rsuffix="_compu", sort=False)
# first level of index needs to be entity variable
stats = stats.reset_index("date").set_index("date", append=True)
print("First date: \t", stats.index.get_level_values("date").min())
print("Last date: \t", stats.index.get_level_values("date").max())
print(stats.shape)
# condition = stats.index.get_level_values("date") < pd.Timestamp("2019-07-01")
turnover_stats = stats["turnover"].reset_index("isin").groupby("isin").median()
lower_quartile = turnover_stats["turnover"].quantile(0.25)
median = turnover_stats["turnover"].median()
upper_quartile = turnover_stats["turnover"].quantile(0.75)
conditions = {"3 bottom turnover": turnover_stats["turnover"] < lower_quartile,
"2 low turnover": (lower_quartile <= turnover_stats["turnover"]) & (turnover_stats["turnover"] < median),
"1 high turnover": (median <= turnover_stats["turnover"]) & (turnover_stats["turnover"] < upper_quartile),
"0 top turnover": upper_quartile <= turnover_stats["turnover"]
}
stats.reset_index("date", inplace=True)
for quartile, condition in conditions.items():
isins = turnover_stats[condition].index
stats.loc[isins, "turnover_category"] = quartile
stats.set_index("date", append=True, inplace=True)
num_stocks = stats["turnover_category"].reset_index().groupby("turnover_category")["isin"].nunique()
print(f"Total number of stocks {num_stocks.sum()}")
num_stocks
# exclude bottom turnover from sample?
stats = stats[~stats["turnover_category"].isin(["3 bottom turnover", "2 low turnover"])]
num_stocks = stats["turnover_category"].reset_index().groupby("turnover_category")["isin"].nunique()
print(f"Total number of stocks {num_stocks.sum()}")
num_stocks
relevant_isins = stats.index.get_level_values("isin").unique()
relevant_isins = relevant_isins.to_frame().reset_index(drop=True)
# # Export isins to csv?
# relevant_isins.to_csv("relevant_isins.csv", index=False)
frag_measure = "market_share" # "non_fragmentation_index"
frag_per_isin = stats.groupby(["after_nonequivalence", "isin"])[frag_measure].quantile(0.5)
frag_per_isin = frag_per_isin.unstack("after_nonequivalence")
frag_per_isin[frag_measure] = frag_per_isin[True] - frag_per_isin[False]
frag_per_isin.drop(columns=[False, True], inplace=True)
condition = stats.index.get_level_values("date") < pd.Timestamp("2019-07-01")
frag_per_isin = stats.loc[condition, [frag_measure]].reset_index("isin")
frag_per_isin = frag_per_isin.groupby(["isin"]).quantile(0.50)
# # Option 1: simple
# # a stock is not fragmented, if on more than 50% of all trading days, there was no trading on other venues (see cell above)
# nonfragmentation = frag_per_isin[frag_measure] == 1
# frag_per_isin.loc[nonfragmentation, "fragmentation"] = "not fragmented"
# frag_per_isin.loc[~nonfragmentation, "fragmentation"] = "fragmented"
# Option 2: by quartiles
lower_quartile = frag_per_isin[frag_measure].quantile(0.25)
median = frag_per_isin[frag_measure].median()
upper_quartile = frag_per_isin[frag_measure].quantile(0.75)
conditions = {
"Q1": frag_per_isin[frag_measure] < lower_quartile,
"Q2": (lower_quartile <= frag_per_isin[frag_measure]) & (frag_per_isin[frag_measure] < median),
"Q3": (median <= frag_per_isin[frag_measure]) & (frag_per_isin[frag_measure] < upper_quartile),
"Q4": upper_quartile <= frag_per_isin[frag_measure],
}
for fragmentation, condition in conditions.items():
frag_per_isin.loc[condition, "fragmentation"] = fragmentation
frag_per_isin["fragmentation"].value_counts()
# left join to stats
stats = stats.join(frag_per_isin["fragmentation"], on="isin")
# showing those isin's that did not have 375 observations
num_dates = stats.reset_index().groupby(["fragmentation", "isin"])["date"].nunique()
num_dates[num_dates != 375]
condition = stats.index.get_level_values("date") < pd.Timestamp("2019-07-01")
num_stocks = stats.reset_index().groupby(["fragmentation"])[["isin"]].nunique() # .describe()
print(f"Total number of stocks {num_stocks['isin'].sum()}")
num_stocks
# remember: groups can change over time, that's why there are more stocks than total above
stats.reset_index().groupby(["group", "fragmentation"])[["isin"]].nunique()
stats.reset_index().groupby(["fragmentation", "turnover_category", "group"])[["isin"]].nunique()
stats[condition].reset_index().groupby(["fragmentation"])[[frag_measure]].describe()
stats.reset_index().groupby(["after_nonequivalence"])[["isin"]].describe()
stats["market_cap"] = stats["shares_outstanding"] * stats["price_close"]
market_cap_average_log = np.log(stats.groupby("isin")["market_cap"].mean())
market_cap_average_log.name = "market_cap_average_log"
stats = stats.join(market_cap_average_log)
(stats.reset_index().groupby(["fragmentation"])[["market_cap_average_log"]].describe()).round(2)
table = list()
for measure in ("market_share", "lit_frag", "market_cap", "turnover"):
descriptive = stats.reset_index().groupby(["fragmentation"])[[measure]].describe()
if measure == "market_cap":
descriptive /= 1e6
descriptive = descriptive.applymap("{:.0f}".format)
elif measure == "turnover":
descriptive /= 1e6
descriptive = descriptive.applymap("{:.1f}".format)
else:
descriptive = descriptive.applymap("{:.2f}".format)
descriptive = descriptive.loc[:, pd.IndexSlice[: , ["mean", "50%", "std"]]]
table.append(descriptive)
table = pd.concat(table, axis=1)
table.rename(
columns={
"market_share": "SIX market share",
"lit_frag": "LitFrag",
"market_cap": "Market Cap",
"turnover": "Turnover",
"mean": "Mean",
"std": "StDev",
"50%": "Median"
},
inplace=True,
)
table = table.T.reindex(["Mean", "Median", "StDev"], level=1).T
num_stocks = stats.reset_index().groupby("fragmentation")["isin"].nunique()
num_stocks = num_stocks.rename("Num stocks").to_frame()
num_stocks.columns = pd.MultiIndex.from_product([num_stocks.columns, ['']])
table = table.join(num_stocks)
for idx in range(4):
idx += 1
table.loc[f"Q{idx}", "Fragmentation"] = f"Quartile {idx}"
table.set_index("Fragmentation", inplace=True)
table = table[["Num stocks", "SIX market share", "LitFrag", "Turnover", "Market Cap"]]
table
print(table.to_latex())
# stats.loc[stats["fragmentation"].isin(["3_little_fragmented", "4_not_fragmented"]), "frag_dummy"] = 0
# stats["frag_dummy"].fillna(value=1, inplace=True)
# stats["frag_dummy"] = stats["frag_dummy"].astype(int)
# stats.reset_index().groupby(["frag_dummy"])[["isin"]].describe()
# stats[stats["frag_dummy"] == 1].index.get_level_values("isin").unique().to_frame().reset_index(drop=True).to_csv("frag_isins.csv", index=False)
dates = stats.index.get_level_values("date")
stats.loc[7 <= dates.month, "half_year"] = "H2"
stats["half_year"].fillna(value="H1", inplace=True)
stats["semester"] = dates.year.astype("str") + "_" + stats["half_year"]
stats["dummy_2019"] = dates.year == 2019
stats.sort_index(inplace=True)
stats["abs_simple_returns"] = np.abs(stats["price_close"] / stats["price_close"].groupby("isin").shift(1) - 1)
stats["amihud"] = stats["abs_simple_returns"] / stats["turnover"] * 1e9 # _simple_simple
stats[["amihud", "semester", "fragmentation"]].groupby(["fragmentation", "semester"]).mean()
# plot single measure for a quartile
measure = "eff_rel_spread_bps_weighted"
plot_data = stats.loc[stats["fragmentation"] == "Q4", measure].reset_index().dropna()
# px.scatter(plot_data, x="date", y=measure, color="isin")
isin = "CH0012549785"
# measures = ["price_mean", "price_close", "price_log", "price_reciprocal"]
measures = ["quoted_rel_spread_bps_time_weighted", "eff_rel_spread_bps_weighted", "min_tick_size"]
# measures = ["market_cap", "market_cap_average_log", "price_close", "shares_outstanding"]
plot_data = stats.loc[isin, measures]
plot_data = plot_data.stack().reset_index().rename(columns={"level_1": "measure", 0: "value"})
# px.scatter(plot_data, x="date", y="value", color="measure")
def run_panel_regression(
data: pd.DataFrame,
measures: list,
control_variables: list,
entity_effects: bool,
time_effects: bool
):
detailed_results = OrderedDict()
for idx, measure in enumerate(measures):
if measure.startswith(("time", "depth", "num", "message_counts", "value")) and not measure.endswith("percent"):
dependent = np.log(data[measure])
# measure = measure + "_log"
else:
dependent = data[measure]
if measure == "amihud":
control_variables = [var for var in exog_vars if var not in ["log_turnover", "RV_slow"]]
elif measure == "RV_slow" or measure == "VSMI":
control_variables = [var for var in exog_vars if var not in ["VSMI", "RV_slow"]]
elif measure in exog_vars:
control_variables = [var for var in exog_vars if var != measure]
else:
control_variables = exog_vars
exogenous = sm.add_constant(data[control_variables])
model = linearmodels.PanelOLS(dependent=dependent,
exog=exogenous,
entity_effects=entity_effects,
time_effects=time_effects,
)
try:
result = model.fit(cov_type='clustered',
cluster_entity=True,
cluster_time=True,
)
except Exception as exception:
print(measure)
print(exception)
continue
# store the result
detailed_results[measure] = result
return detailed_results
def deep_dive_coef(detailed_results, variable: str):
coef_results = pd.DataFrame(columns=["param", "lower", "upper", "tstat", "pvalue"]) # , "lower", "upper"
for measure, result in detailed_results.items():
param = result.params[variable]
lower, upper = result.conf_int().loc[variable]
tstat = result.tstats[variable]
pvalue = result.pvalues[variable]
coef_results.loc[measure] = (param, lower, upper, tstat, pvalue) # , lower, upper
return coef_results
def run_ols(data, measures, exog_vars):
detailed_results = OrderedDict()
for idx, measure in enumerate(measures):
if measure == "amihud":
control_variables = [var for var in exog_vars if var not in ["log_turnover", "RV_slow"]]
elif measure == "RV_slow" or measure == "VSMI":
control_variables = [var for var in exog_vars if var not in ["VSMI", "RV_slow"]]
elif measure in exog_vars:
control_variables = [var for var in exog_vars if var != measure]
else:
control_variables = exog_vars
exog = sm.add_constant(data[control_variables])
if measure.startswith(("time", "depth", "num", "message_counts", "value")) and not measure.endswith("percent"):
endog = np.log(data[measure])
else:
endog = data[measure]
model = linearmodels.PooledOLS(endog, exog)
result = model.fit(
cov_type='clustered',
cluster_entity=True,
cluster_time=True,
)
# store the result
detailed_results[measure] = result
return detailed_results
def highlight_lower_than(pvalue):
if pvalue < 0.01:
color = "navajowhite" # "darkgrey"
# output = "{:.3f} *".format(value)
elif pvalue < 0.05:
color = "blanchedalmond" # "silver"
elif pvalue < 0.1:
color = "cornsilk" # "gainsboro"
else:
color = None
return f"background-color: {color}"
def highlight_significance(data, pvalues):
background_colors = pvalues.applymap(highlight_lower_than)
return background_colors
def font_color(value):
color = 'red' if value < 0 else 'black'
return f"color: {color}"
def display_results(combined_results):
params = combined_results["param"]
pvalues = combined_results["pvalue"]
styled = params.round(3).style.applymap(font_color).apply(highlight_significance, pvalues=pvalues, axis=None)
return styled
def convert_to_significance(pvalue):
if pvalue < 0.01:
return "***"
elif pvalue < 0.05:
return "**"
    elif pvalue < 0.1:
return "*"
else:
return ""
def format_pvalues(series):
return series.apply(lambda val: val.apply(convert_to_significance))
def format_stars(table, precision=3):
lower = table[["lower"]].round(precision).astype(str)
lower.columns = lower.columns.droplevel()
upper = table[["upper"]].round(precision).astype(str)
upper.columns = upper.columns.droplevel()
confidence = "[" + lower + ", " + upper + "]"
confidence.columns = pd.MultiIndex.from_product([['conf'], confidence.columns])
format_num = "{:." + f"{precision}" + "f}"
params = table["param"].applymap(lambda num: format_num.format(num))
pvalues = table["pvalue"]
tstats = table[["tstat"]].applymap(lambda num: "(" + format_num.format(num) + ")")
params = pvalues.applymap(convert_to_significance) + params
params.columns = pd.MultiIndex.from_product([['coef'], params.columns])
formatted = pd.concat([params, tstats, confidence])
formatted.columns.rename("coef_type", level=0, inplace=True)
formatted = formatted.stack("coef_type")
formatted.columns.rename("frag_quartile", inplace=True)
formatted = formatted.reindex(sorted(formatted.columns), axis=1)
formatted.sort_values(by=["measure", "coef_type"], ascending=True, inplace=True)
return formatted
liquidity_measures = [
'quoted_rel_spread_bps_time_weighted',
'eff_rel_spread_bps_weighted',
'depth_time_weighted_average',
]
amihud_turnover_measures = ["log_turnover", "RV_slow", "amihud"]
counts_measures = measures = [
'AT_proxy',
'num_orders_aggr',
'num_orders_passive',
'num_orders_deleted',
'num_orders_filled',
'value_entered_mean',
'value_entered_median',
'value_entered_total',
'value_filled_total',
]
all_measures = liquidity_measures + amihud_turnover_measures + counts_measures
measures = all_measures
control_vars = [
# "RV_slow",
"VSMI", # Riordan & Storkenmaier 2012 JFM, p.427, quotes Hendershott & Moulton 2011 JFM, p.583
"min_tick_size",
"price_log",
]
explaining_variable = "after_nonequivalence" # "dummy_2019"
exog_vars = [explaining_variable] + control_vars
exog_vars
detailed_results = dict()
coef_results = dict()
conditions = {
"": pd.Series(True, index=stats.index), # all_
# "2019_only_": stats.index.get_level_values("date").year == 2019,
# "H2_only_": stats["half_year"] == "H2",
# "before_": stats.index.get_level_values("date") < pd.Timestamp("2019-07-01")
}
for condition_name, condition in conditions.items():
subset = stats[condition]
# # Full sample
# regression_name = f"{condition_name}Full sample"
# detailed_result = run_panel_regression(subset, measures, exog_vars, entity_effects=True, time_effects=False)
# detailed_results[regression_name] = detailed_result
# coef_result = deep_dive_coef(detailed_result, explaining_variable)
# coef_results[regression_name] = coef_result
# Per fragmentation quartile
for frag_dummy, data in tqdm(subset.groupby("fragmentation")):
regression_name = f"{condition_name}{frag_dummy}"
detailed_result = run_panel_regression(data, measures, exog_vars, entity_effects=True, time_effects=False)
detailed_results[regression_name] = detailed_result
coef_result = deep_dive_coef(detailed_result, explaining_variable)
coef_results[regression_name] = coef_result
combined = pd.concat(coef_results)
combined.index.set_names(["fragmentation", "measure"], inplace=True)
combined = combined.unstack("fragmentation")
combined.columns.set_names(["coef_type", "fragmentation"], inplace=True)
combined = combined.reindex(combined.columns.sortlevel(level="fragmentation")[0], axis=1)
# Define here which variables we'd like to see
subset = liquidity_measures + amihud_turnover_measures # counts_measures / liquidity_measures / amihud_turnover_measures
subset = combined.loc[subset].copy()
export_this = format_stars(subset, precision=2)
export_this.reset_index("coef_type", inplace=True)
export_this["coef_type"] = export_this["coef_type"].astype("category")
export_this["coef_type"] = export_this["coef_type"].cat.reorder_categories(["coef", "tstat", "conf"], ordered=True)
export_this = export_this.sort_values(["measure", "coef_type"]).drop(columns="coef_type")
export_this.rename(
index={
"quoted_rel_spread_bps_time_weighted": "QSpread",
"eff_rel_spread_bps_weighted": "ESpread",
"depth_time_weighted_average": "lnDepth",
"AT_proxy": "AT_proxy",
"num_orders_aggr":"Num aggressive Orders",
"num_orders_deleted": "Num deleted Orders",
"num_orders_filled": "Num filled Orders",
"num_orders_passive": "Num passive Orders",
"value_entered_total": "Log Volume Entered",
"value_filled_total": "Log Volume Filled",
},
columns={col: "Quartile " + col[-1] for col in export_this.columns},
inplace=True,
)
export_this
print(export_this.to_latex())
display_results(combined)
measure = measures[0]
pprint(measures)
print(f"\nSelected: {measure}")
samples = combined.columns.get_level_values("fragmentation").unique().tolist()
regr_table = linearmodels.panel.compare([detailed_results.get(sample).get(measure) for sample in samples], precision="pvalues")
regr_table
if "market_cap_average_log" not in control_vars:
control_vars += ["market_cap_average_log"]
exog_vars = [explaining_variable] + control_vars
exog_vars
detailed_results = dict()
coef_results = dict()
conditions = {
"": pd.Series(True, index=stats.index), # all_
# "2019_only_": stats.index.get_level_values("date").year == 2019,
# "H2_only_": stats["half_year"] == "H2",
# "before": stats.index.get_level_values("date") < pd.Timestamp("2019-07-01")
}
for condition_name, condition in tqdm(conditions.items()):
subset = stats[condition]
# # Full sample
# regression_name = f"{condition_name}Full sample"
# detailed_result = run_panel_regression(subset, measures, exog_vars, entity_effects=True, time_effects=False)
# detailed_results[regression_name] = detailed_result
# coef_result = deep_dive_coef(detailed_result, explaining_variable[0])
# coef_results[regression_name] = coef_result
# Per fragmentation quartile
for frag_dummy, data in subset.groupby("fragmentation"):
regression_name = f"{condition_name}{frag_dummy}"
detailed_result = run_ols(data, measures, exog_vars)
detailed_results[regression_name] = detailed_result
coef_result = deep_dive_coef(detailed_result, explaining_variable)
coef_results[regression_name] = coef_result
combined = pd.concat(coef_results)
combined.index.set_names(["fragmentation", "measure"], inplace=True)
combined = combined.unstack("fragmentation")
combined.columns.set_names(["coef_type", "fragmentation"], inplace=True)
combined = combined.reindex(combined.columns.sortlevel(level="fragmentation")[0], axis=1)
export_this = format_stars(combined, precision=3)
# print(export_this.to_latex(sparsify=True))
export_this
display_results(combined)
pprint(measures)
measure = measures[0]
print(f"\nSelected: {measure}")
samples = combined.columns.get_level_values("fragmentation").unique().tolist()
linearmodels.panel.compare([detailed_results.get(sample).get(measure) for sample in samples], precision="pvalues")
| 0.485356 | 0.852445 |
```
%load_ext watermark
%watermark -v -p numpy,sklearn,scipy,matplotlib,tensorflow
```
**Chapter 12 – Distributed TensorFlow**
_This notebook contains all the sample code and solutions to the exercises in Chapter 12._
# Setup
This notebook supports both Python 2 and 3. It imports the common modules, configures Matplotlib so figures are rendered inside the notebook, and prepares a function to save the generated figures:
```
# Python 2 and Python 3 support
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# Initialize the pseudo-random seeds for reproducible output
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
# Matplotlib setup
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Folder where figures will be saved
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "distributed"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
```
# Local Server
```
import tensorflow as tf
c = tf.constant("Hello distributed TensorFlow!")
server = tf.train.Server.create_local_server()
with tf.Session(server.target) as sess:
print(sess.run(c))
```
# Cluster
```
cluster_spec = tf.train.ClusterSpec({
"ps": [
"127.0.0.1:2221", # /job:ps/task:0
"127.0.0.1:2222", # /job:ps/task:1
],
"worker": [
"127.0.0.1:2223", # /job:worker/task:0
"127.0.0.1:2224", # /job:worker/task:1
"127.0.0.1:2225", # /job:worker/task:2
]})
task_ps0 = tf.train.Server(cluster_spec, job_name="ps", task_index=0)
task_ps1 = tf.train.Server(cluster_spec, job_name="ps", task_index=1)
task_worker0 = tf.train.Server(cluster_spec, job_name="worker", task_index=0)
task_worker1 = tf.train.Server(cluster_spec, job_name="worker", task_index=1)
task_worker2 = tf.train.Server(cluster_spec, job_name="worker", task_index=2)
```
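Here all five tasks run inside this single process, which is convenient for experimenting. In a real cluster, each task would typically run in its own process (often on its own machine), started with the same cluster spec but its own job name and task index. A minimal sketch of such a standalone script (hypothetical, not part of this notebook):
```
# start_task.py -- hypothetical standalone script (sketch only)
import sys
import tensorflow as tf

cluster_spec = tf.train.ClusterSpec({
    "ps": ["127.0.0.1:2221", "127.0.0.1:2222"],
    "worker": ["127.0.0.1:2223", "127.0.0.1:2224", "127.0.0.1:2225"],
})

job_name = sys.argv[1]         # e.g. "worker"
task_index = int(sys.argv[2])  # e.g. 0

server = tf.train.Server(cluster_spec, job_name=job_name, task_index=task_index)
server.join()  # serve until the process is terminated
```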
# Assigning Operations to Multiple Devices and Servers
```
reset_graph()
with tf.device("/job:ps"):
a = tf.Variable(1.0, name="a")
with tf.device("/job:worker"):
b = a + 2
with tf.device("/job:worker/task:1"):
c = a + b
with tf.Session("grpc://127.0.0.1:2221") as sess:
sess.run(a.initializer)
print(c.eval())
reset_graph()
with tf.device(tf.train.replica_device_setter(
ps_tasks=2,
ps_device="/job:ps",
worker_device="/job:worker")):
    v1 = tf.Variable(1.0, name="v1")  # pinned to /job:ps/task:0 (defaults to /cpu:0)
    v2 = tf.Variable(2.0, name="v2")  # pinned to /job:ps/task:1 (defaults to /cpu:0)
    v3 = tf.Variable(3.0, name="v3")  # pinned to /job:ps/task:0 (defaults to /cpu:0)
    s = v1 + v2                       # pinned to /job:worker (defaults to task:0/cpu:0)
    with tf.device("/task:1"):
        p1 = 2 * s                    # pinned to /job:worker/task:1 (defaults to /cpu:0)
        with tf.device("/cpu:0"):
            p2 = 3 * s                # pinned to /job:worker/task:1/cpu:0
config = tf.ConfigProto()
config.log_device_placement = True
with tf.Session("grpc://127.0.0.1:2221", config=config) as sess:
v1.initializer.run()
```
# Readers - the old way
```
reset_graph()
default1 = tf.constant([5.])
default2 = tf.constant([6])
default3 = tf.constant([7])
dec = tf.decode_csv(tf.constant("1.,,44"),
record_defaults=[default1, default2, default3])
with tf.Session() as sess:
print(sess.run(dec))
reset_graph()
test_csv = open("my_test.csv", "w")
test_csv.write("x1, x2 , target\n")
test_csv.write("1.,, 0\n")
test_csv.write("4., 5. , 1\n")
test_csv.write("7., 8. , 0\n")
test_csv.close()
filename_queue = tf.FIFOQueue(capacity=10, dtypes=[tf.string], shapes=[()])
filename = tf.placeholder(tf.string)
enqueue_filename = filename_queue.enqueue([filename])
close_filename_queue = filename_queue.close()
reader = tf.TextLineReader(skip_header_lines=1)
key, value = reader.read(filename_queue)
x1, x2, target = tf.decode_csv(value, record_defaults=[[-1.], [-1.], [-1]])
features = tf.stack([x1, x2])
instance_queue = tf.RandomShuffleQueue(
capacity=10, min_after_dequeue=2,
dtypes=[tf.float32, tf.int32], shapes=[[2],[]],
name="instance_q", shared_name="shared_instance_q")
enqueue_instance = instance_queue.enqueue([features, target])
close_instance_queue = instance_queue.close()
minibatch_instances, minibatch_targets = instance_queue.dequeue_up_to(2)
with tf.Session() as sess:
sess.run(enqueue_filename, feed_dict={filename: "my_test.csv"})
sess.run(close_filename_queue)
try:
while True:
sess.run(enqueue_instance)
except tf.errors.OutOfRangeError as ex:
print("더 이상 읽을 파일이 없습니다")
sess.run(close_instance_queue)
try:
while True:
print(sess.run([minibatch_instances, minibatch_targets]))
except tf.errors.OutOfRangeError as ex:
print("더 이상 훈련 샘플이 없습니다")
#coord = tf.train.Coordinator()
#threads = tf.train.start_queue_runners(coord=coord)
#filename_queue = tf.train.string_input_producer(["test.csv"])
#coord.request_stop()
#coord.join(threads)
```
# QueueRunner and Coordinator
```
reset_graph()
filename_queue = tf.FIFOQueue(capacity=10, dtypes=[tf.string], shapes=[()])
filename = tf.placeholder(tf.string)
enqueue_filename = filename_queue.enqueue([filename])
close_filename_queue = filename_queue.close()
reader = tf.TextLineReader(skip_header_lines=1)
key, value = reader.read(filename_queue)
x1, x2, target = tf.decode_csv(value, record_defaults=[[-1.], [-1.], [-1]])
features = tf.stack([x1, x2])
instance_queue = tf.RandomShuffleQueue(
capacity=10, min_after_dequeue=2,
dtypes=[tf.float32, tf.int32], shapes=[[2],[]],
name="instance_q", shared_name="shared_instance_q")
enqueue_instance = instance_queue.enqueue([features, target])
close_instance_queue = instance_queue.close()
minibatch_instances, minibatch_targets = instance_queue.dequeue_up_to(2)
n_threads = 5
queue_runner = tf.train.QueueRunner(instance_queue, [enqueue_instance] * n_threads)
coord = tf.train.Coordinator()
with tf.Session() as sess:
sess.run(enqueue_filename, feed_dict={filename: "my_test.csv"})
sess.run(close_filename_queue)
enqueue_threads = queue_runner.create_threads(sess, coord=coord, start=True)
try:
while True:
print(sess.run([minibatch_instances, minibatch_targets]))
except tf.errors.OutOfRangeError as ex:
print("더 이상 훈련 샘플이 없습니다")
reset_graph()
def read_and_push_instance(filename_queue, instance_queue):
reader = tf.TextLineReader(skip_header_lines=1)
key, value = reader.read(filename_queue)
x1, x2, target = tf.decode_csv(value, record_defaults=[[-1.], [-1.], [-1]])
features = tf.stack([x1, x2])
enqueue_instance = instance_queue.enqueue([features, target])
return enqueue_instance
filename_queue = tf.FIFOQueue(capacity=10, dtypes=[tf.string], shapes=[()])
filename = tf.placeholder(tf.string)
enqueue_filename = filename_queue.enqueue([filename])
close_filename_queue = filename_queue.close()
instance_queue = tf.RandomShuffleQueue(
capacity=10, min_after_dequeue=2,
dtypes=[tf.float32, tf.int32], shapes=[[2],[]],
name="instance_q", shared_name="shared_instance_q")
minibatch_instances, minibatch_targets = instance_queue.dequeue_up_to(2)
read_and_enqueue_ops = [read_and_push_instance(filename_queue, instance_queue) for i in range(5)]
queue_runner = tf.train.QueueRunner(instance_queue, read_and_enqueue_ops)
with tf.Session() as sess:
sess.run(enqueue_filename, feed_dict={filename: "my_test.csv"})
sess.run(close_filename_queue)
coord = tf.train.Coordinator()
enqueue_threads = queue_runner.create_threads(sess, coord=coord, start=True)
try:
while True:
print(sess.run([minibatch_instances, minibatch_targets]))
except tf.errors.OutOfRangeError as ex:
print("더 이상 훈련 샘플이 없습니다")
```
# Setting a timeout
```
reset_graph()
q = tf.FIFOQueue(capacity=10, dtypes=[tf.float32], shapes=[()])
v = tf.placeholder(tf.float32)
enqueue = q.enqueue([v])
dequeue = q.dequeue()
output = dequeue + 1
config = tf.ConfigProto()
config.operation_timeout_in_ms = 1000
with tf.Session(config=config) as sess:
sess.run(enqueue, feed_dict={v: 1.0})
sess.run(enqueue, feed_dict={v: 2.0})
sess.run(enqueue, feed_dict={v: 3.0})
print(sess.run(output))
print(sess.run(output, feed_dict={dequeue: 5}))
print(sess.run(output))
print(sess.run(output))
try:
print(sess.run(output))
except tf.errors.DeadlineExceededError as ex:
print("dequeue 타임 아웃")
```
# Data API
The Data API, introduced in TensorFlow 1.4, lets you read data efficiently with very little effort.
```
tf.reset_default_graph()
```
Let's start with a simple dataset that repeats the integers 0 to 9 three times, in batches of seven:
```
dataset = tf.data.Dataset.from_tensor_slices(np.arange(10))
dataset = dataset.repeat(3).batch(7)
```
The first line creates a dataset containing the integers 0 through 9. The second line creates a new dataset based on it, repeating its elements three times and grouping them into batches of seven. As you can see, we start with a source dataset and chain several transformation methods onto it.
Next, we create a one-shot iterator that traverses the dataset once, and we call its `get_next()` method to get a tensor that refers to the next element.
```
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()
```
Let's evaluate `next_element` repeatedly to iterate through the dataset. Since there are only a few elements, an `OutOfRangeError` is soon raised:
```
with tf.Session() as sess:
try:
while True:
print(next_element.eval())
except tf.errors.OutOfRangeError:
print("완료")
```
Great, it works fine!
As always, a tensor is only evaluated once each time the graph is run (`sess.run()`): even if you evaluate several tensors that all depend on `next_element`, it is evaluated only once. The same is true if `next_element` is run twice in the same call:
```
with tf.Session() as sess:
try:
while True:
print(sess.run([next_element, next_element]))
except tf.errors.OutOfRangeError:
print("완료")
```
The `interleave()` method is powerful but a bit tricky to grasp at first. It is easiest to understand through an example:
```
tf.reset_default_graph()
dataset = tf.data.Dataset.from_tensor_slices(np.arange(10))
dataset = dataset.repeat(3).batch(7)
dataset = dataset.interleave(
lambda v: tf.data.Dataset.from_tensor_slices(v),
cycle_length=3,
block_length=2)
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()
with tf.Session() as sess:
try:
while True:
print(next_element.eval(), end=",")
except tf.errors.OutOfRangeError:
print("완료")
```
Since `cycle_length=3`, the new dataset starts by pulling three elements from the previous dataset: `[0,1,2,3,4,5,6]`, `[7,8,9,0,1,2,3]` and `[4,5,6,7,8,9,0]`. Then it calls the lambda function on each of them to create one dataset per element. Since we used `Dataset.from_tensor_slices()`, each of these datasets returns its items one by one. Next, two items (since `block_length=2`) are pulled from each of these three datasets in turn, and this repeats until all three datasets are out of items: 0,1 (from the first), 7,8 (from the second), 4,5 (from the third), 2,3 (from the first), 9,0 (from the second), and so on until 8,9 (from the third), 6 (from the first), 3 (from the second), 0 (from the third). Next, it tries to pull the following three elements from the original dataset, but only two remain: `[1,2,3,4,5,6,7]` and `[8,9]`. Again, one dataset is created per element, and two items are pulled from each in turn until both are exhausted: 1,2 (from the first), 8,9 (from the second), 3,4 (from the first), 5,6 (from the first), 7 (from the first). Because the arrays have different lengths, the last items are not interleaved evenly.
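To double-check that ordering, here is a small pure-Python sketch (no TensorFlow involved) that mimics the cycling described above for this particular example. The helper `interleave_lists` is just an illustration, not a general reimplementation of `interleave()`:
```
# A pure-Python illustration of interleave(cycle_length=3, block_length=2).
# The batches below are the ones produced by repeat(3).batch(7) on range(10).
batches = [[0, 1, 2, 3, 4, 5, 6], [7, 8, 9, 0, 1, 2, 3],
           [4, 5, 6, 7, 8, 9, 0], [1, 2, 3, 4, 5, 6, 7], [8, 9]]

def interleave_lists(sources, cycle_length=3, block_length=2):
    result = []
    iterators = []
    next_source = 0
    while iterators or next_source < len(sources):
        # Keep up to `cycle_length` active iterators.
        while len(iterators) < cycle_length and next_source < len(sources):
            iterators.append(iter(sources[next_source]))
            next_source += 1
        # Pull up to `block_length` items from each active iterator in turn.
        for it in list(iterators):
            for _ in range(block_length):
                try:
                    result.append(next(it))
                except StopIteration:
                    iterators.remove(it)
                    break
    return result

print(interleave_lists(batches))
```
The printed list follows exactly the order described above.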
# Readers - the new way
Instead of using a source dataset based on `from_tensor_slices()` or `from_tensor()`, we can use a reader dataset. It handles most of the complexity for us (e.g., threads):
```
tf.reset_default_graph()
filenames = ["my_test.csv"]
dataset = tf.data.TextLineDataset(filenames)
```
We still need to tell it how to decode each line:
```
def decode_csv_line(line):
x1, x2, y = tf.decode_csv(
line, record_defaults=[[-1.], [-1.], [-1.]])
X = tf.stack([x1, x2])
return X, y
```
Next, we can apply this decoding function to each element of the dataset using `map()`:
```
dataset = dataset.skip(1).map(decode_csv_line)
```
Finally, let's create a one-shot iterator:
```
it = dataset.make_one_shot_iterator()
X, y = it.get_next()
with tf.Session() as sess:
try:
while True:
X_val, y_val = sess.run([X, y])
print(X_val, y_val)
except tf.errors.OutOfRangeError as ex:
print("완료")
```
# Exercise solutions
**Coming soon**
|
github_jupyter
|
| 0.456894 | 0.838779 |
```
# Basic setup for displaying bokeh plots in jupyter.
from bokeh.plotting import figure
from bokeh.io import output_notebook, show
# Additional requirements for Stacked Bar plot.
from bokeh.core.properties import value
from bokeh.models import ColumnDataSource
output_notebook()
# NumPy imports.
import numpy as np
import cPickle as pickle
import os
cwd = os.getcwd()
print(cwd)
# Raw data.
raws = pickle.load(open('2018-04-03-14-01-41-627705_singlerun_100.p', 'rb'))
print(type(raws))
print(len(raws))
print(raws[0])
# Tabulate runtimes per module.
parsed = {
'qc':[],
'databaseid':[],
'serotype':[],
'vf':[],
'amr':[],
'stx1':[],
'stx2':[],
'eae':[],
'total': []
}
for outerd in raws:
qc=0
databaseid=0
serotype=0
vf=0
amr=0
stx1=0
stx2=0
eae=0
total=0
for d in outerd.values()[0]:
if 'stx1' in d.keys()[0]:
stx1 += d.values()[0][2]
elif 'stx2' in d.keys()[0]:
stx2 += d.values()[0][2]
elif 'eae' in d.keys()[0]:
eae += d.values()[0][2]
elif 'vf' in d.keys()[0]:
vf += d.values()[0][2]
elif 'amr' in d.keys()[0]:
amr += d.values()[0][2]
elif 'serotype'in d.keys()[0]:
serotype += d.values()[0][2]
elif 'job_id' in d.keys()[0]:
databaseid += d.values()[0][2]
elif 'turtle' in d.keys()[0]:
databaseid += d.values()[0][2]
elif 'job_qc' in d.keys()[0]:
qc += d.values()[0][2]
elif 'total' in d.keys()[0]:
total += d.values()[0][2]
parsed['qc'].append(qc)
parsed['databaseid'].append(databaseid)
parsed['serotype'].append(serotype)
parsed['vf'].append(vf)
parsed['amr'].append(amr)
parsed['stx1'].append(stx1)
parsed['stx2'].append(stx2)
parsed['eae'].append(eae)
parsed['total'].append(total)
import numpy as np
from bokeh.models import ColumnDataSource
from bokeh.models.glyphs import Line
colormap = {'qc':'red',
'databaseid':'green',
'serotype':'blueviolet',
'vf':'crimson',
'amr':'firebrick',
'stx1':'darksalmon',
'stx2':'darkorange',
'eae':'darkgoldenrod',
'total': 'blue'}
colors = [colormap[x] for x in parsed.keys() for i in range(len(parsed.values()[0]))]
x_runs = [i for l in parsed.values() for i,x in enumerate(l)]
y_times = [x for l in parsed.values() for x in l]
p = figure(title = "Timings for Individual Runs",width=500,
height=500,)
p.xaxis.axis_label = 'Run #'
p.yaxis.axis_label = 'Runtime (seconds)'
print(np.mean(parsed['total']))
lines_source = ColumnDataSource(data=dict(y=[np.mean(parsed['total']) for i in range(len(parsed['total']))], x=[i for i,x in enumerate(parsed['total'])]))
line = Line(x='x',y='y', line_color="#666699", line_width=2)
p.add_glyph(lines_source, line)
p.circle(x_runs, y_times,
color=colors, fill_alpha=0.2, size=10)
show(p)
avgs = {k:np.mean(parsed[k]) for k in parsed}
print(avgs)
# Plot a histogram of the targets.
x = np.array(parsed['amr'])
print(x.mean())
hist, edges = np.histogram(x, density=True, bins=50)
p1 = figure(title="Histogram of Total Runtimes per Genome",tools="save",)
p1.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:],
fill_color="blue", line_color="#033649", alpha=0.5)
show(p1)
la = [i for i in parsed['total'] if i<140]
lb = [i for i in parsed['total'] if i>=140]
print(np.mean(la), np.mean(lb))
# Raw data.
raws_batch = pickle.load(open('2018-04-11-10-26-48-900902_spfy_class_11.p', 'rb'))
data_batch = raws_batch.data
type(data_batch)
data_batch[1]
import pandas as pd
def group_analyses(raws):
'''Tabulate runtimes per module.
'''
assert isinstance(raws, list)
# "grouped" is on a per job basis.
grouped = {}
for outerd in raws:
assert len(outerd.keys()) == 1
key = outerd.keys()[0]
lastkey = key
if key == 'total':
analysis = key
else:
filename, analysis = key.split('|')
if analysis not in grouped:
grouped.update({analysis:[]})
grouped[analysis].append(
outerd.values()[0][2]
)
total = grouped.pop('total')
df = pd.DataFrame(data=grouped)
return total,df
# Calculate averages.
batches = raws_batch.list_sizes # list_sizes
plot_data = {
'Batches': [str(b) for b in batches],
'QC': [],
'ID': [],
'VF': [],
'Serotype': [],
'AMR': []
}
avt = []
i = 0
for l in data_batch:
r = group_analyses(l)
# Remove the total since it's only 1.
total = r[0]
df = r[1]
#print(df)
amr = df[['job_amr','job_amr_beautify','job_amr_datastruct','job_amr_dict']].sum(axis=1)
serotype = df[['job_ectyper_beautify_serotype','job_ectyper_datastruct_serotype','job_ectyper_serotype']].sum(axis=1)
vf = df[['job_ectyper_beautify_vf','job_ectyper_datastruct_vf','job_ectyper_vf']].sum(axis=1)
dbid = df[['job_id','job_turtle']].sum(axis=1)
qc = df[['job_qc']].sum(axis=1)
plot_data['QC'].append(qc.sum())
plot_data['ID'].append(dbid.sum())
plot_data['VF'].append(vf.sum())
plot_data['Serotype'].append(serotype.sum())
plot_data['AMR'].append(amr.sum())
avt.append(total[0])
i += 1
print(plot_data)
print(avt)
print('cat')
for k, l in plot_data.items():
print(k,l)
if k == 'Batches':
continue
for i,n in enumerate(l):
l[i] = float(n)/60.0
print(plot_data)
avt = [i/60.0 for i in avt]
print(avt)
# Data.
tasks = ["QC", "ID", "VF", "Serotype", "AMR"] # subtasks
colors = [colormap['qc'], colormap['databaseid'], colormap['vf'],colormap['serotype'], colormap['amr']]
source = ColumnDataSource(data=plot_data)
# Plot.
p = figure(x_range=[str(b) for b in batches],
plot_height=350,
title="Runtimes for Analysis Modules",
x_axis_label="#Total Batch Size",
y_axis_label="Total Runtime per Batch (minutes)",hidpi=True)
p.vbar_stack(tasks, x='Batches', width=0.9, color=colors, source=source,
legend=[value(x) for x in tasks], name=tasks, alpha=0.5)
p.y_range.start = 0
p.x_range.range_padding = 0.1
p.xgrid.grid_line_color = None
p.axis.minor_tick_line_color = None
p.outline_line_color = None
p.legend.location = "center_left"
p.legend.orientation = "vertical"
p.line(x=[str(b) for b in batches], y=avt, color="cyan", line_width=2)
show(p)
# Calculate averages.
batches = raws_batch.list_sizes # list_sizes
pdata = {
'Batches': [str(b) for b in batches],
'QC': [],
'ID': [],
'VF': [],
'Serotype': [],
'AMR': []
}
i = 0
for l in data_batch:
r = group_analyses(l)
# Remove the total since it's only 1.
total = r[0]
df = r[1]
# Total amount of time spent per task (column) per file (row).
amr = df[['job_amr','job_amr_beautify','job_amr_datastruct','job_amr_dict']].sum(axis=1)
serotype = df[['job_ectyper_beautify_serotype','job_ectyper_datastruct_serotype','job_ectyper_serotype']].sum(axis=1)
vf = df[['job_ectyper_beautify_vf','job_ectyper_datastruct_vf','job_ectyper_vf']].sum(axis=1)
dbid = df[['job_id','job_turtle']].sum(axis=1)
qc = df[['job_qc']].sum(axis=1)
# Sum the columns and give me an average over the total number of files in a batch.
samr = amr.sum(axis=0)
print(samr/batches[i])
pdata['QC'].append(qc.mean())
pdata['ID'].append(dbid.mean())
pdata['VF'].append(vf.mean())
pdata['Serotype'].append(serotype.mean())
pdata['AMR'].append(amr.mean())
print(total)
i += 1
# print(plot_data)
l = [i for i in avt[1:]]
print(np.mean(l))
```
|
github_jupyter
|
| 0.353763 | 0.54153 |
```
import re
import os
import sys
sys.path.insert(0, '../src/')
import numpy as np
rng = np.random.RandomState(0)
import matplotlib.pyplot as plt
import scipy.stats as stats
import astropy.units as u
import pandas as pd
import dynesty
from dynesty import plotting as dyplot
from dynesty import utils as dyfunc
nlive_init=100
nlive_batch=25
maxbatch=2
pfrac=0.8
dlogz = 1e-3 * (nlive_init - 1) + 0.01
def get_params_fit(results, return_sample=False):
samples = results.samples # samples
weights = np.exp(results.logwt - results.logz[-1]) # normalized weights
pmean, pcov = dyfunc.mean_and_cov(samples, weights) # weighted mean and covariance
samples_eq = dyfunc.resample_equal(samples, weights) # resample weighted samples
pmed = np.median(samples_eq,axis=0)
if return_sample:
return pmed, pmean, pcov, samples_eq
else:
return pmed, pmean, pcov
%load_ext autoreload
%autoreload 2
```
### Toy Model 1
```
k1 = -0.002
k2 = -0.01
b = 0.3
x = stats.truncnorm(loc=0.1, scale=2.5, a=0, b=2).rvs(10000)
X = x[:, np.newaxis]
y = 3 + rng.normal(0, k1*x**2 + k2*x + b, X.shape[0])
plt.scatter(x, k1*x**2 + k2*x + b)
plt.figure(figsize=(7,5))
X_ = np.linspace(0, 5, 100)
plt.scatter(X[:, 0], y, c='r', s=10, edgecolors=(0, 0, 0), alpha=0.7)
plt.plot(X_, 3*np.ones_like(X_), 'r', lw=3)
plt.tight_layout()
def prior(u):
v = u.copy()
v[0] = u[0] * 2 + 2
v[1] = u[1] * 0.02 - 0.01
v[2] = u[2] * 0.2 -0.1
v[3] = u[3] * 0.5
return v
def loglike(v):
mu, k1, k2, b = v
ypred = mu
sigma = (k1 * x**2 + k2 * x + b)
    if min(sigma) <= 0:
        return -1e100
residsq = (ypred - y)**2 / sigma**2
loglike = -0.5 * np.sum(residsq + np.log(2 * np.pi * sigma**2))
if not np.isfinite(loglike):
loglike = -1e100
return loglike
pdsampler = dynesty.DynamicNestedSampler(loglike, prior, 4)
pdsampler.run_nested(nlive_init=nlive_init,
nlive_batch=nlive_batch,
maxbatch=maxbatch,
dlogz_init=dlogz,
wt_kwargs={'pfrac': pfrac})
fig, ax = plt.subplots(4,4,figsize=(12,12))
dyplot.cornerplot(pdsampler.results, truths=[3, k1, k2, b], labels=["mu", "k1", "k2", "b"],
color="royalblue", truth_color="indianred",
title_kwargs={'fontsize':15, 'y': 1.04}, title_fmt='.3f',
label_kwargs={'fontsize':15}, show_titles=True, fig=(fig,ax))
plt.show()
pmed, pmean, pcov = get_params_fit(pdsampler.results)
plt.figure(figsize=(8,6))
Xp = np.linspace(0, 5, 100)
yp_mean, yp_std = pmed[0], pmed[1] * Xp**2 + pmed[2] * Xp + pmed[3]
plt.plot(Xp, yp_mean*np.ones_like(Xp), 'k', lw=3, zorder=9)
plt.fill_between(Xp, yp_mean - yp_std, yp_mean + yp_std, alpha=0.5, color='k')
plt.fill_between(Xp, yp_mean - 2*yp_std, yp_mean + 2*yp_std, alpha=0.3, color='k')
plt.fill_between(Xp, yp_mean - 3*yp_std, yp_mean + 3*yp_std, alpha=0.1, color='k')
plt.scatter(X[:, 0], y, c='r', s=5, zorder=10, edgecolors=(0, 0, 0), alpha=0.7)
plt.plot(Xp, 3*np.ones_like(Xp), 'r', lw=3)
plt.tight_layout()
```
### Toy Model 2
```
k1 = 0.005
k2 = 0.02
b = 0.1
x = rng.uniform(0, 5, 2000)
X = x[:, np.newaxis]
y = 3 + rng.normal(0, k1*x**2 + k2*x + b, X.shape[0])
plt.scatter(x, k1*x**2 + k2*x + b)
plt.figure(figsize=(7,5))
X_ = np.linspace(0, 5, 100)
plt.scatter(X[:, 0], y, c='r', s=10, edgecolors=(0, 0, 0), alpha=0.7)
plt.plot(X_, 3*np.ones_like(X_), 'r', lw=3)
plt.tight_layout()
pdsampler = dynesty.DynamicNestedSampler(loglike, prior, 4)
pdsampler.run_nested(nlive_init=nlive_init,
nlive_batch=nlive_batch,
maxbatch=maxbatch,
dlogz_init=dlogz,
wt_kwargs={'pfrac': pfrac})
fig, ax = plt.subplots(4,4,figsize=(12,12))
dyplot.cornerplot(pdsampler.results, truths=[3, k1, k2, b], labels=["mu", "k1", "k2", "b"],
color="royalblue", truth_color="indianred",
title_kwargs={'fontsize':15, 'y': 1.04}, title_fmt='.3f',
label_kwargs={'fontsize':15}, show_titles=True, fig=(fig,ax))
plt.show()
pmed, pmean, pcov = get_params_fit(pdsampler.results)
plt.figure(figsize=(8,6))
Xp = np.linspace(0, 5, 100)
yp_mean, yp_std = pmed[0], pmed[1] * Xp**2 + pmed[2] * Xp + pmed[3]
plt.plot(Xp, yp_mean*np.ones_like(Xp), 'k', lw=3, zorder=9)
plt.fill_between(Xp, yp_mean - yp_std, yp_mean + yp_std, alpha=0.5, color='k')
plt.fill_between(Xp, yp_mean - 2*yp_std, yp_mean + 2*yp_std, alpha=0.3, color='k')
plt.fill_between(Xp, yp_mean - 3*yp_std, yp_mean + 3*yp_std, alpha=0.1, color='k')
plt.scatter(X[:, 0], y, c='r', s=20, zorder=10, edgecolors=(0, 0, 0), alpha=0.7)
plt.plot(Xp, 3*np.ones_like(Xp), 'r', lw=3)
plt.tight_layout()
```
|
github_jupyter
|
| 0.526099 | 0.730386 |
# Solving Knapsack Problem with Amazon SageMaker RL
Knapsack is a canonical operations research problem. We start with a bag and a set of items. We choose which items to put in the bag. Our objective is to maximize the value of the items in the bag; but we cannot put all the items in as the bag capacity is limited. The problem is hard because the items have different values and weights, and there are many combinations to consider.
In the classic version of the problem, we pick the items in one shot. But in this baseline, we instead consider the items one at a time over a fixed time horizon.
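For reference, the classic one-shot version can be solved exactly with a standard dynamic program. The sketch below is only an aside to contrast with the sequential setting studied here; the item values, weights, and capacity are made-up illustrative numbers:
```
# Classic 0/1 knapsack: choose items in one shot to maximize value under a weight cap.
def knapsack_max_value(values, weights, capacity):
    best = [0] * (capacity + 1)        # best[w] = best value with total weight <= w
    for value, weight in zip(values, weights):
        # Iterate weights downwards so each item is used at most once.
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

print(knapsack_max_value(values=[10, 7, 12, 3], weights=[4, 3, 5, 2], capacity=8))  # -> 19
```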
## Problem Statement
We start with an empty bag and an item. We need to either put the item in the bag or throw it away. If we put it in the bag, we get a reward equal to the value of the item. If we throw the item away, we get a fixed penalty. In case the bag is too full to accommodate the item, we are forced to throw it away.
In the next step, another item appears and we need to decide again if we want to put it in the bag or throw it away. This process repeats for a fixed number of steps.
Since we do not know the value and weight of items that will come in the future, and the bag can only hold so many items, it is not obvious what is the right thing to do.
At each time step, our agent is aware of the following information:
- Weight capacity of the bag
- Volume capacity of the bag
- Sum of item weight in the bag
- Sum of item volume in the bag
- Sum of item value in the bag
- Current item weight
- Current item volume
- Current item value
- Time remaining
At each time step, our agent can take one of the following actions:
- Put the item in the bag
- Throw the item away
At each time step, our agent gets the following reward depending on their action:
- Item value if you put it in the bag and bag does not overflow
- A penalty if you throw the item away or if the item does not fit in the bag
The time horizon is 20 steps. You can see the specifics in the `KnapSackMediumEnv` class in `knapsack_env.py`. There are a couple of other classes that provide an easier (`KnapSackEnv`) and a more difficult version (`KnapSackHardEnv`) of this problem.
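The exact dynamics live in `knapsack_env.py`; the snippet below is only a hedged illustration of the reward rule just described, with a made-up `PENALTY` constant and explicit arguments standing in for the environment's internal state:
```
# Illustrative only: reward for a single decision in the sequential knapsack.
PENALTY = -1.0  # assumed fixed penalty; the real value is defined in knapsack_env.py

def step_reward(action, item_value, item_weight, item_volume,
                bag_weight, bag_volume, weight_capacity, volume_capacity):
    """action: 1 = put the item in the bag, 0 = throw it away."""
    fits = (bag_weight + item_weight <= weight_capacity and
            bag_volume + item_volume <= volume_capacity)
    if action == 1 and fits:
        return item_value   # reward equals the item's value
    return PENALTY          # thrown away, or forced out because it does not fit

print(step_reward(1, item_value=5, item_weight=2, item_volume=1,
                  bag_weight=9, bag_volume=3, weight_capacity=10, volume_capacity=5))
```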
## Using Amazon SageMaker RL
Amazon SageMaker RL allows you to train your RL agents in cloud machines using docker containers. You do not have to worry about setting up your machines with the RL toolkits and deep learning frameworks. You can easily switch between many different machines setup for you, including powerful GPU machines that give a big speedup. You can also choose to use multiple machines in a cluster to further speedup training, often necessary for production level loads.
### Pre-requisites
#### Imports
To get started, we'll import the Python libraries we need and set up the environment with a few prerequisites for permissions and configurations.
```
import sagemaker
import boto3
import sys
import os
import glob
import re
import subprocess
from IPython.display import HTML
import time
from time import gmtime, strftime
sys.path.append("common")
from misc import get_execution_role, wait_for_s3_object
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
```
#### Settings
You can run this notebook from your local host or from a SageMaker notebook instance. In both of these scenarios, you can run the following in either `local` or `SageMaker` modes. The `local` mode uses the SageMaker Python SDK to run your code in a local container before deploying to SageMaker. This can speed up iterative testing and debugging while using the same familiar Python SDK interface. You just need to set `local_mode = True`.
```
# run in local mode?
local_mode = False
# create unique job name
job_name_prefix = 'rl-knapsack'
# S3 bucket
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
print("Using s3 bucket %s" % s3_bucket) # create this bucket if it doesn't exist
s3_output_path = 's3://{}/'.format(s3_bucket) # SDK appends the job name and output folder
```
#### Install docker for `local` mode
In order to work in `local` mode, you need to have docker installed. When running from your local instance, please make sure that you have docker or docker-compose (for local CPU machines) and nvidia-docker (for local GPU machines) installed. Alternatively, when running from a SageMaker notebook instance, you can simply run the following script.
Note that you can only run a single local notebook at a time.
```
if local_mode:
!/bin/bash ./common/setup.sh
```
#### Create an IAM role
Either get the execution role when running from a SageMaker notebook (`role = sagemaker.get_execution_role()`) or, when running locally, set it to an IAM role with `AmazonSageMakerFullAccess` and `CloudWatchFullAccess` permissions.
```
try:
role = sagemaker.get_execution_role()
except:
role = get_execution_role()
print("Using IAM role arn: {}".format(role))
```
#### Setup the environment
The environment is defined in a Python file called `knapsack_env.py` in the `./src` directory. It implements the init(), step(), reset() and render() functions that describe how the environment behaves. This is consistent with the OpenAI Gym interface for defining an environment; a rough sketch of this interface is shown after the list below.
- init() - initialize the environment in a pre-defined state
- step() - take an action on the environment
- reset() - restart the environment on a new episode
- render() - get a rendered image of the environment in its current state
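A minimal sketch of that interface is given below. This is not the actual `KnapSackMediumEnv` implementation: the observation size, spaces, and placeholder values are assumptions made purely for illustration.
```
import gym
import numpy as np

class SketchKnapsackEnv(gym.Env):
    """Illustrative skeleton only; the real logic lives in knapsack_env.py."""
    def __init__(self):
        self.action_space = gym.spaces.Discrete(2)   # put in bag / throw away
        self.observation_space = gym.spaces.Box(low=0.0, high=np.inf,
                                                shape=(9,), dtype=np.float32)

    def reset(self):
        self.t = 0
        return np.zeros(9, dtype=np.float32)         # initial observation

    def step(self, action):
        self.t += 1
        obs = np.zeros(9, dtype=np.float32)          # next observation
        reward = 0.0                                 # see the reward rule above
        done = self.t >= 20                          # fixed 20-step horizon
        return obs, reward, done, {}

    def render(self, mode='human'):
        pass                                         # image of the current state
```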
#### Configure the presets for RL algorithm
The presets that configure the RL training jobs are defined in `preset-knapsack-clippedppo.py` in the `./src` directory. Using the preset file, you can define agent parameters to select the specific agent algorithm. You can also set the environment parameters, define the schedule and visualization parameters, and define the graph manager. The schedule presets define the number of heat-up steps, periodic evaluation steps, and training steps between evaluations.
These can be overridden at runtime by specifying the RLCOACH_PRESET hyperparameter. Additionally, it can be used to define custom hyperparameters.
```
!pygmentize src/preset-knapsack-clippedppo.py
```
#### Write the Training Code
The training code is in the file `train-coach.py`, which is also in the `./src` directory.
```
!pygmentize src/train-coach.py
```
### Train the model using Python SDK/ script mode
If you are using local mode, the training will run on the notebook instance. When using SageMaker for training, you can select a GPU or CPU instance. The RLEstimator is used for training RL jobs.
- Specify the source directory where the environment, presets and training code is uploaded.
- Specify the entry point as the training code
- Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL Container.
- Define the training parameters such as the instance count, base job name, and S3 path for output.
- Specify the hyperparameters for the RL agent algorithm. The RLCOACH_PRESET can be used to specify the RL agent algorithm you want to use.
- Define the metrics definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker Notebooks.
```
if local_mode:
instance_type = 'local'
else:
instance_type = "ml.m4.4xlarge"
estimator = RLEstimator(entry_point="train-coach.py",
source_dir='src',
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version='1.0.0',
framework=RLFramework.TENSORFLOW,
role=role,
instance_type=instance_type,
instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
hyperparameters = {
"RLCOACH_PRESET":"preset-knapsack-clippedppo",
"rl.agent_params.algorithm.discount": 0.9,
"rl.evaluation_steps:EnvironmentEpisodes": 8,
}
)
estimator.fit(wait=local_mode)
```
### Store intermediate training output and model checkpoints
The output from the training job above is stored on S3. The intermediate folder contains GIFs and metadata of the training run.
```
job_name=estimator._current_job_name
print("Job name: {}".format(job_name))
s3_url = "s3://{}/{}".format(s3_bucket,job_name)
if local_mode:
output_tar_key = "{}/output.tar.gz".format(job_name)
else:
output_tar_key = "{}/output/output.tar.gz".format(job_name)
intermediate_folder_key = "{}/output/intermediate".format(job_name)
output_url = "s3://{}/{}".format(s3_bucket, output_tar_key)
intermediate_url = "s3://{}/{}".format(s3_bucket, intermediate_folder_key)
print("S3 job path: {}".format(s3_url))
print("Output.tar.gz location: {}".format(output_url))
print("Intermediate folder path: {}".format(intermediate_url))
tmp_dir = "/tmp/{}".format(job_name)
os.system("mkdir {}".format(tmp_dir))
print("Create local folder {}".format(tmp_dir))
```
### Visualization
#### Plot metrics for training job
We can pull the reward metric of the training and plot it to see the performance of the model over time.
```
%matplotlib inline
import pandas as pd
csv_file_name = "worker_0.simple_rl_graph.main_level.main_level.agent_0.csv"
key = intermediate_folder_key + "/" + csv_file_name
wait_for_s3_object(s3_bucket, key, tmp_dir)
csv_file = "{}/{}".format(tmp_dir, csv_file_name)
df = pd.read_csv(csv_file)
df = df.dropna(subset=['Training Reward'])
x_axis = 'Episode #'
y_axis = 'Training Reward'
plt = df.plot(x=x_axis,y=y_axis, figsize=(12,5), legend=True, style='b-')
plt.set_ylabel(y_axis);
plt.set_xlabel(x_axis);
```
#### Visualize the rendered gifs
The latest GIF file found in the gifs directory is displayed. You can change `gif_index` in the cell below to visualize other generated files.
```
key = intermediate_folder_key + '/gifs'
wait_for_s3_object(s3_bucket, key, tmp_dir)
print("Copied gifs files to {}".format(tmp_dir))
glob_pattern = os.path.join("{}/*.gif".format(tmp_dir))
gifs = [file for file in glob.iglob(glob_pattern, recursive=True)]
extract_episode = lambda string: int(re.search('.*episode-(\d*)_.*', string, re.IGNORECASE).group(1))
gifs.sort(key=extract_episode)
print("GIFs found:\n{}".format("\n".join([os.path.basename(gif) for gif in gifs])))
# visualize a specific episode
gif_index = -1 # since we want last gif
gif_filepath = gifs[gif_index]
gif_filename = os.path.basename(gif_filepath)
print("Selected GIF: {}".format(gif_filename))
os.system("mkdir -p ./src/tmp_render/ && cp {} ./src/tmp_render/{}.gif".format(gif_filepath, gif_filename))
HTML('<img src="./src/tmp_render/{}.gif">'.format(gif_filename))
```
### Evaluation of RL models
We use the last checkpointed model to run evaluation for the RL Agent.
#### Load checkpointed model
Checkpointed data from the previously trained models will be passed on for evaluation / inference in the checkpoint channel. In local mode, we can simply use the local directory, whereas in the SageMaker mode, it needs to be moved to S3 first.
```
wait_for_s3_object(s3_bucket, output_tar_key, tmp_dir)
if not os.path.isfile("{}/output.tar.gz".format(tmp_dir)):
raise FileNotFoundError("File output.tar.gz not found")
os.system("tar -xvzf {}/output.tar.gz -C {}".format(tmp_dir, tmp_dir))
if local_mode:
checkpoint_dir = "{}/data/checkpoint".format(tmp_dir)
else:
checkpoint_dir = "{}/checkpoint".format(tmp_dir)
print("Checkpoint directory {}".format(checkpoint_dir))
if local_mode:
checkpoint_path = 'file://{}'.format(checkpoint_dir)
print("Local checkpoint file path: {}".format(checkpoint_path))
else:
checkpoint_path = "s3://{}/{}/checkpoint/".format(s3_bucket, job_name)
if not os.listdir(checkpoint_dir):
raise FileNotFoundError("Checkpoint files not found under the path")
os.system("aws s3 cp --recursive {} {}".format(checkpoint_dir, checkpoint_path))
print("S3 checkpoint file path: {}".format(checkpoint_path))
```
#### Run the evaluation step
Use the checkpointed model to run the evaluation step.
```
estimator_eval = RLEstimator(role=role,
source_dir='src/',
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version='1.0.0',
framework=RLFramework.TENSORFLOW,
entry_point="evaluate-coach.py",
instance_count=1,
instance_type=instance_type,
hyperparameters = {
"RLCOACH_PRESET":"preset-knapsack-clippedppo",
"evaluate_steps": 250, #5 episodes
}
)
estimator_eval.fit({'checkpoint': checkpoint_path})
```
### Visualize the output
Optionally, you can run the steps defined earlier to visualize the output
|
github_jupyter
|
---
# Planning observations with `astroplan`
```
import numpy as np
import astropy.units as u
from astropy.time import Time
from astropy.coordinates import SkyCoord
import pytz
from astroplan import Observer, FixedTarget
```
## Time and Dates
- ### All dates and times are in UTC: *Coordinated Universal Time*
- All `Time` calculations assume that the time is UTC
- UTC is related to Greenwich Mean Time (GMT) but does not change with a change of seasons.
- Times default to 00:00:00 UTC if no time of day is given
```
date1 = Time("2016-10-26 12:26:15", format='iso')
print(date1)
date2 = Time("2016-10-26", format='iso')
print(date2)
```
### Current UTC Time
```
now = Time.now() # Current UTC Time
print(now)
```
### Different Date Formats
```
print(now.jd) # Julian Date
print(now.mjd) # Modified Julian Date
print(now.unix) # Seconds since the unix epoch (Jan 01, 1970 00:00:00 UTC)
print(now.decimalyear) # Fraction of the year (very useful for plotting)
```
### Math with Time and Dates
```
print("In 1 hour and 25 minutes it will be {0} UTC".format(now + 1*u.h + 25*u.min))
Christmas = Time("2016-12-25 00:00:00", format='iso')
dt = Christmas - now
print(dt.to(u.d)) # difference in days
print(dt.to(u.fortnight)) # difference in fortnights
print(dt.to(u.s)) # difference in seconds
```
### Working with timezones (local time)
- The python package `pytz` is used to try to deal with local timezones
- [Timezone List](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones)
- Working with timezones is a quick path to madness!
- Only use timezone conversions for printouts, NEVER calculations!
```
mytimezone = pytz.timezone('US/Pacific')
local_now = now.to_datetime(mytimezone)
print("The current local time is {0}".format(local_now))
# Nepal is in a strange timezone!
everest_timezone = pytz.timezone('Asia/Kathmandu')
everest_local_now = now.to_datetime(everest_timezone)
print("The current local time on Mt. Everest is {0}".format(everest_local_now))
```
---
### [Accurate Time](http://bmmorris.blogspot.com/2015/06/ut1-utc-and-astropy.html) - `UT1`
`AstroPy` calculates the times of events to a very high accuracy. To do this, it has to account for the fact that Earth's rotation period is constantly changing due to tidal forces and changes in the Earth's moment of inertia.
To do this, `AstroPy` uses a time convention called `UT1`. This system is tied to the rotation of the Earth with respect to the positions of distant quasars. Since the Earth's rotation is constantly changing, the time system `UT1` is constantly changing with respect to `UTC`.
The orientation of the Earth must be measured continuously to keep `UT1` accurate. This measurement is logged by the International Earth Rotation and Reference Systems Service (IERS). They publish a "bulletin" with the most recent measurements of the Earth's orientation. This bulletin is constantly being updated.
You will run into occasions when you will get a warning that your dates are out of range of the IERS bulletin. To update the bulletin, run the following block of code:
---
```
from astroplan import download_IERS_A
download_IERS_A()
```
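Once the bulletin is up to date, it is easy to see how far apart the two time scales are. This cell is a small added check (not from the original notebook); `now.utc` and `now.ut1` are standard `astropy` `Time` scale attributes, and the printed offset should stay well under a second.
```
# Compare the same instant expressed in UTC and UT1 (needs current IERS data)
print("UTC: {0}".format(now.utc.iso))
print("UT1: {0}".format(now.ut1.iso))
# The difference (UT1 - UTC) is kept within +/- 0.9 seconds by leap seconds
print("UT1 - UTC = {0:.4f} s".format((now.ut1.jd - now.utc.jd) * 86400))
```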
## Setting your location - `Observer`
```
astrolab = Observer(longitude = -122.311473 * u.deg,
latitude = 47 * u.deg + 39 * u.arcmin + 15 * u.arcsec,
elevation = 63.4 * u.m,
timezone = 'US/Pacific',
name = "Astrolab"
)
astrolab
```
### Information at your location
```
sunset_here = astrolab.sun_set_time(now, which='nearest')
sunrise_here = astrolab.sun_rise_time(now, which='next')
midnight_here = astrolab.midnight(now, which='next')
print("Sunset will be at {0.iso} UTC".format(sunset_here))
print("Local Midnight will be at {0.iso} UTC".format(midnight_here))
print("Sunrise will be at {0.iso} UTC".format(sunrise_here))
print("Sunset will be at {0} local time".format(sunset_here.to_datetime(mytimezone)))
print("Local Midnight will be at {0} local time".format(midnight_here.to_datetime(mytimezone)))
print("Sunrise will be at {0} local time".format(sunrise_here.to_datetime(mytimezone)))
```
#### The Manastash Ridge Observatory (MRO) is operated by the Astronomy Department of the University of Washington for the training of graduate and undergraduate students as well as for astronomical research.
```
mro = Observer.at_site('mro')
mro
sunset_mro = mro.sun_set_time(now, which='nearest')
print("Sunset at MRO will be at {0} local time".format(sunset_mro.to_datetime(mytimezone)))
(sunset_here - sunset_mro).to(u.min)
```
#### Local Sidereal Time (LST) will tell you the Right Ascension on the meridian right now.
- You can use a [star chart](./Astro_Coordinates.pdf) to find what constellations are visible now.
```
midnight_mro = mro.midnight(now, which='next')
astrolab.local_sidereal_time(midnight_mro)
```
#### Astronomical twilight is when the Sun is 18 degrees below the horizon
```
astro_set = mro.twilight_evening_astronomical(now, which='nearest')
astro_rise = mro.twilight_morning_astronomical(now, which='next')
print("Astronomical Evening Twilight starts at {0.iso} UTC".format(astro_set))
print("Astronomical Midnight is at {0.iso} UTC".format(midnight_mro))
print("Astronomical Morning Twilight starts at {0.iso} UTC".format(astro_rise))
observing_length = (astro_rise - astro_set).to(u.h)
print("You can observe for {0:.1f} at MRO tonight".format(observing_length))
# Local Times
print("Astronomical Evening Twilight starts at {0} local time".format(astro_set.to_datetime(mytimezone)))
print("Astronomical Midnight is at {0} local time".format(midnight_mro.to_datetime(mytimezone)))
print("Astronomical Morning Twilight starts at {0} local time".format(astro_rise.to_datetime(mytimezone)))
```
## Objects in the sky - `FixedTarget`
### You can define targets by [coordinates](./Astro_Coordinates.pdf)
```
coords = SkyCoord('02h19m00.0s', '+57d07m042s', frame='icrs')
ngc869 = FixedTarget(name='NGC869', coord=coords)
ngc869.ra
ngc869.ra.hms
astrolab.target_is_up(midnight_here, ngc869)
# Altitude and Azimuth of a target at a specific time
aa = astrolab.altaz(midnight_here, ngc869)
aa.alt.degree, aa.az.degree
# You can get the galactic coords of the target
aa.galactic
# You can get the coords at a different epoch (1950)
aa.fk4
```
### Most targets can be defined by name
```
my_target = FixedTarget.from_name("m31")
my_target.coord
my_target.ra.hms
```
## Objects in the sky - Moving Targets (solar system targets)
- `Astropy` uses the `jplephem` package to calculate the positions
- The built-in solar system objects are: 'sun', 'mercury', 'venus', 'earth-moon-barycenter', 'earth', 'moon', 'mars', 'jupiter', 'saturn', 'uranus', 'neptune', 'pluto'
```
from astropy.coordinates import get_sun, get_body, get_moon
from astroplan import moon_illumination
get_body('jupiter',now)
moon_midnight = get_moon(midnight_here)
moon_illumination(midnight_here)
my_target.coord.separation(moon_midnight)
```
### You can turn solar system objects into pseudo `FixedTarget` objects for observational planning
```
mars_midnight = FixedTarget(name='Mars', coord=get_body('mars',midnight_mro))
mars_midnight
```
### Planning - Observing at MRO
#### [Air Mass](https://en.wikipedia.org/wiki/Air_mass_%28astronomy%29) is the optical path length through Earth’s atmosphere. At sea-level, the air mass at the zenith is 1. Air mass increases as you move toward the horizon, reaching a value of approximately 38 at the horizon.
- #### The best time to observe a target is at minimum airmass.
- #### When the airmass of your target is getting close to 2, you should be observing another target.
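For a rough feel of these numbers, away from the horizon the airmass is well approximated by sec(z) = 1 / cos(90 deg - altitude). The quick check below is an added illustration (not part of the original notebook) using only `numpy` and `astropy.units`; note that at an altitude of 30 degrees the estimate is already 2.0, which is where the rule of thumb above kicks in.
```
# Rough airmass from altitude using the plane-parallel approximation: X = sec(z)
altitudes = [90, 60, 30, 20] * u.deg
zenith_angles = 90 * u.deg - altitudes
airmass_estimates = 1 / np.cos(zenith_angles)
for alt, X in zip(altitudes, airmass_estimates):
    print("Altitude {0:.0f} deg -> airmass {1:.2f}".format(alt.value, X.value))
```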
```
mro.target_is_up(midnight_mro, my_target)
```
Object is up at midnight at MRO - good
```
altaz_my_target = astrolab.altaz(midnight_mro, my_target)
altaz_my_target.alt, altaz_my_target.az
```
Nice high altitude - looking good
```
# You can find the airmass by using the .secz method
altaz_my_target.secz
```
Airmass < 2, you are good to go.
## Planning observation is easier with plots
```
%matplotlib inline
import matplotlib.pyplot as plt
from astroplan.plots import plot_sky, plot_airmass
plot_sky(my_target, mro, midnight_mro);
start_time = astro_set
end_time = astro_rise
delta_t = end_time - start_time
observe_time = start_time + delta_t * np.linspace(0.0, 1.0, 30)
# np.linspace(0, 1, 30) makes 30 evenly spaced points from 0.0 to 1.0
plot_sky(my_target, mro, observe_time);
```
### Plot the airmass of the target over the night
```
plot_airmass(my_target, mro, observe_time);
```
This is good target for observation at MRO for most of the night
### Not all targets can (or should) be observed at all locations
```
mro.target_is_up(astro_set, mars_midnight)
plot_sky(mars_midnight, mro, observe_time);
plot_airmass(mars_midnight, mro, observe_time);
```
Not looking good
```
# astroplan sets the default limits of the airmass plot to [3,0].
# If you want to see a target at a higher airmass you have to set the limits yourself.
fig,ax = plt.subplots(1,1)
plot_airmass(mars_midnight, mro, observe_time)
ax.set_ylim([20,0]);
```
As you can see, this is a bad target for observation at MRO.
### Finder Charts - (Warning: This may not always work depending on the Skyview website)
```
from astroplan.plots import plot_finder_image
from astroquery.skyview import SkyView
plot_finder_image(ngc869)
# plot_finder_image defaults to a field of view of 10 arcmin
# You can specify a different fov
plot_finder_image(ngc869, fov_radius= 1.3 * u.degree)
plot_finder_image(my_target, fov_radius= 90 * u.arcmin)
```
---
# Local Kubernetes on KIND
> How to get a local kubernetes up and some basic commands to interact with it.
- toc: true
- badges: true
- comments: true
- categories: [kubernetes, docker]
- image: images/chart-preview.png
### Intro
Welcome to the first post of the [Seldon Super Series]()! This post is for those who don't yet have access to a kubernetes cluster.
We'll walk through how to use [Kind]() to launch a cluster on your local machine!
If you already have access to a kubernetes cluster, and also have `kubectl` installed, then move onto [part 2]()! Otherwise, follow along here before you move on!
### Reqs
* None! This is the first post in the series!
### Goals
* Launch a local kubernetes cluster using kind, and install seldon on the cluster to allow you to follow along with the rest of the posts in this series
### Install kubectl
If this is the first time you've used kubernetes, you will need to install kubectl, the command line tool for interacting with kubernetes. This can be downloaded [here](https://kubernetes.io/docs/tasks/tools/install-kubectl/). On mac you can use `brew install kubectl`.
Check your install by running:
```
!kubectl version --client --short
```
You should see the client version printed. It shouldn't be a problem if your version is a bit different.
### Install Kind
If you're on mac, it's as simple as `brew install kind`. If not, check out [this page](https://kind.sigs.k8s.io/docs/user/quick-start/)
### Create your First Cluster
Todo: ADD LOCAL REGISTRY. They need this for the following examples (or access to DockerHub)
```
!kind create cluster
```
It's as simple as that. If it is your first time running kind, it will automatically download the appropriate docker image (something like kindest/node:1.17.0), which may take a few minutes.
After that command is finished, check if your cluster is running:
```
!kubectl cluster-info
```
If you see output like above, displaying info about your Kubernetes master and KubeDNS, then you have successfully launched a local kubernetes cluster!
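If you want a couple of extra sanity checks (not in the original walkthrough, but `kind get clusters` and `kubectl get nodes` are both standard commands), you can list your kind clusters and the single node that backs this one:
```
!kind get clusters
!kubectl get nodes
```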
### Install Seldon-core
Because we will need seldon-core for all of the following posts, we will install it here. Anytime you need to re-launch a kind cluster to follow along with the other posts, you can run this notebook to get it back up and running.
To install seldon-core on the cluster, use helm. To install helm itself, find directions [here](https://helm.sh/), or use `brew install helm` on mac.
Once helm is installed, use it to install seldon-core and seldon-core-operator with the following command:
```
!helm install seldon-core seldon-core-operator \
--repo https://storage.googleapis.com/seldon-charts \
--set usageMetrics.enabled=true \
--set ambassador.enabled=true \
--namespace seldon-intro
!kubectl get pods
print("-----")
!kubectl get deployments
```
You should see a pod and deployment with `seldon-controller-manager` in the name. This pod and deployment house the seldon-core operator, which extends the kubernetes API. For now, just confirming that the pod is running is all we need.
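If the pod doesn't show up in the default namespace, note that the helm command above targeted the `seldon-intro` namespace, so it is worth checking there explicitly (this check is an addition to the original post):
```
!kubectl get pods --namespace seldon-intro
```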
### Bonus: Install kubectx and kubens
As you follow through the next posts, I will be using the `kubectx` and `kubens` command line tools. If you are on mac, you can install them with brew: `brew install kubectx`. This will download and install both `kubectx` and `kubens`. If you're not on mac, find install instructions [here](https://github.com/ahmetb/kubectx#installation).
These allow you to easily switch between kubernetes contexts and namespaces. You can perform all the same actions with `kubectl`, but kubectx and kubens make some common commands much quicker.
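As a rough illustration of how they are used (added here for convenience; the context name below assumes kind's default of `kind-kind`, so adjust it if yours differs):
```
# List the available contexts, then switch to the kind cluster
!kubectx
!kubectx kind-kind

# List namespaces, then make seldon-intro the default for later kubectl commands
!kubens
!kubens seldon-intro
```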
---
```
# This allows multiple outputs from a single jupyter notebook cell:
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
%matplotlib inline
import pandas as pd
idf = pd.read_csv('../data/SPY_20110701_20120630_Bollinger.csv',index_col=0,parse_dates=True)
idf.shape
idf.head(3)
idf.tail(3)
```
---
#### Let's grab six months of data from the input data frame:
```
df = idf.loc['2011-07-01':'2011-12-30',:]
```
---
#### We can, of course, plot a basic ohlc or candlestick plot:
```
import mplfinance as mpf
mpf.__version__
mpf.plot(df,volume=True)
```
---
#### We can switch which panel is main and which contains volume:
```
mpf.plot(df,type='candle',volume=True,main_panel=1,volume_panel=0)
```
---
or we can make both panels the same size:
```
mpf.plot(df,type='candle',volume=True,panel_ratios=(1,1))
```
---
Or make the main panel 4 times that of the volume panel:
```
mpf.plot(df,type='candle',volume=True,panel_ratios=(4,1))
```
---
Let's add a third panel containing bollinger data.
By default addplot uses Panel 0.
```
ap0 = [ mpf.make_addplot(df['UpperB'],color='g'),#,width=0.75), # uses panel 0 by default
mpf.make_addplot(df['LowerB'],color='b')#,width=1.75), # uses panel 0 by default
]
mpf.plot(df,type='candle',volume=True,addplot=ap0,mav=(10,20,30))
ap2 = [ mpf.make_addplot(df['UpperB'],color='g',panel=2), # panel 2 specified
mpf.make_addplot(df['LowerB'],color='b',panel=2), # panel 2 specified
]
mpf.plot(df,type='candle',figscale=1.5,
volume=True,addplot=ap2, mav=(10,20,30), scale_width_adjustment=dict(lines=1.1))
```
---
Now that we have 3 panels, we can demonstrate how to use ` panel_ratios `
#### There are two ways to specify ` panel_ratios `:
1. As a sequence of numbers, **one for each panel**, to be applied in order to panel IDs 0, 1, 2, etc.
2. As a sequence of only **TWO** numbers: The first number will be applied *to the **main** panel*, and the second number will be applied *to all other panels*.
- In the ambiguous case where there are only two panels, the sequence of panel_ratio numbers will be treated as in item #1:<br> The first number will apply to Panel 0, and the second to Panel 1 (regardless of which panel the user chooses for the main panel).
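As a quick illustration of that ambiguous two-panel case (an added cell, reusing the same `df` as above): with only the main and volume panels present, the two numbers are applied to panel 0 and panel 1 in order, so here the volume panel ends up twice the height of the candles.
```
mpf.plot(df,type='candle',volume=True,panel_ratios=(1,2))
```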
---
Let's rearrange the above plot to have the main panel on the bottom, and the volume panel on top.<br>
Then we will demonstrate the above two uses of panel_ratios.
```
ap2 = [ mpf.make_addplot(df['UpperB'],color='g',panel=1),
mpf.make_addplot(df['LowerB'],color='b',panel=1,y_on_right=True),
#mpf.make_addplot(df['LowerB'],alpha=0.01,panel=2,y_on_right=True,secondary_y=False,ylim=(112,130)),
]
mpf.plot(df,type='candle',volume=True,main_panel=2,volume_panel=0,addplot=ap2)
```
---
As was mentioned previously,<br>the default for panel_ratios is that all panels are the same height *except* the main panel which is 2.5 times the height of the others.<br>**For the above example, this is equivalent to:**
1. specifying **` panel_ratios=(2,2,5) `<br>**(panel ID's 0 and 1 are each 2/5 the size of panel 2, and panel 2 is 5/2 times as high as each of the other two panels)<br><br>
2. or specifying **` panel_ratios=(5,2) `**<br>(main panel 5/2 times as high as the other panels, and all other panels 2/5 as high as the main panel),<br>
Both of the above `panel_ratios` specifications have the same effect, and both are equivalent to the default. Note carefully that the first specification requires that we have in mind that the main panel will be on the bottom. But the second specification does not. The second specification however requires that all panels, other than the main panel, be the same size (a typical practice). However if we want any panels *other than* the main panel to differ in size from each other, then we must use the first specification providing a ratio number for every panel in use.
For example:
```
mpf.plot(df,type='candle',volume=True,main_panel=2,volume_panel=0,
addplot=ap2,panel_ratios=(4,3,3))
mpf.plot(df,type='candle',volume=True,main_panel=2,volume_panel=0,
addplot=ap2,panel_ratios=(4,1.5,5))
```
---
---
Notice that in all of the above examples, mplfinance automatically determined ***how many*** panels we needed based on our specification of panel ID's. This automatic determination however requires that we do *not* skip any panel ID's. If we do, mplfinance will raise an exception:
```
mpf.plot(df,type='candle',volume=True,main_panel=3,volume_panel=0,addplot=ap2)
```
---
We can override this behavior by ***explicitly*** setting the number of panels, ***however*** the panel for the ID that was skipped will be empty of any plot:
```
mpf.plot(df,type='candle',volume=True,main_panel=3,volume_panel=0,addplot=ap2,num_panels=4)
```
---
Finally, we demonstrate using these features to create a **MACD** plot (**M**oving **A**verage **C**onvergence **D**ivergence)
---
- First use Pandas to calculate the 12 period and 26 period exponential moving averages:
```
exp12 = df['Close'].ewm(span=12, adjust=False).mean()
exp26 = df['Close'].ewm(span=26, adjust=False).mean()
```
---
* The MACD Line is defined as the difference between these two moving averages:
```
macd = exp12 - exp26
```
---
* The MACD Signal is defined as the 9 period exponential moving average of the MACD Line:<br><br>
* We also calculate the difference between the MACD Line and the MACD Signal which we will plot as a histogram:
```
signal = macd.ewm(span=9, adjust=False).mean()
histogram = macd - signal
```
---
### Now create our MACD plot:
```
apds = [mpf.make_addplot(exp12,color='lime'),
mpf.make_addplot(exp26,color='c'),
mpf.make_addplot(histogram,type='bar',width=0.7,panel=1,
color='dimgray',alpha=1,secondary_y=False),#,ylim=(-.75,+.75)),
mpf.make_addplot(macd,panel=1,color='fuchsia',secondary_y=True,ylim=(-3,3),width=6,alpha=0.5),
mpf.make_addplot(signal,panel=1,color='b',secondary_y=True),
mpf.make_addplot(df['Volume'],panel=2,ylim=(+10000000,1000000000),alpha=0.01,y_on_right=True)#,secondary_y=False)
]
mpf.plot(df,type='candle',addplot=apds,figscale=1.1,figratio=(8,5),title='\nMACD',
style='blueskies',volume=True,volume_panel=2,panel_ratios=(6,3,4.5),ylim=(110,130))
```
---
- Just for fun, the same plot in a different style:
```
apds = [mpf.make_addplot(exp12,color='lime'),
mpf.make_addplot(exp26,color='c'),
mpf.make_addplot(histogram,type='bar',width=0.7,panel=1,
color='dimgray',alpha=1,secondary_y=False),#,ylim=(-.75,+.75)),
mpf.make_addplot(macd,panel=1,color='fuchsia',secondary_y=True,ylim=(-3,3),width=6,alpha=0.5),
mpf.make_addplot(signal,panel=1,color='b',secondary_y=True),
mpf.make_addplot(df['Volume'],panel=2,ylim=(+10000000,1000000000),alpha=0.01,y_on_right=False),#,secondary_y=False)
]
s = mpf.make_mpf_style(base_mpf_style='classic',rc={'figure.facecolor':'lightgray'})
mpf.plot(df,type='candle',addplot=apds,figscale=1.1,figratio=(8,5),title='\nMACD',
style=s,volume=True,volume_panel=2,panel_ratios=(6,3,2))
apds = [mpf.make_addplot(exp12,color='lime'),
mpf.make_addplot(exp26,color='c'),
mpf.make_addplot(histogram,type='bar',width=0.7,panel=1,
color='dimgray',alpha=1,secondary_y=False),#,ylim=(-.75,+.75)),
mpf.make_addplot(macd,panel=1,color='fuchsia',secondary_y=True,ylim=(-3,3),width=6,alpha=0.5),
mpf.make_addplot(signal,panel=1,color='b',secondary_y=True),
mpf.make_addplot(df['Volume'],panel=2,ylim=(+10000000,1000000000),alpha=0.01,y_on_right=True),#,secondary_y=False)
mpf.make_addplot(df,panel=3,type='ohlc',mav=(10,20),ylabel='OHLC',ylim=(80,150))
]
m = mpf.make_marketcolors(base_mpf_style='blueskies',ohlc='k')
s = mpf.make_mpf_style(base_mpf_style='blueskies',marketcolors=m)
mpf.plot(df,type='candle',addplot=apds,figscale=1.1,figratio=(8,5),title='\nMACD',
style=s,volume=True,volume_panel=2,panel_ratios=(6,3,4.5,6),ylim=(110,130),
scale_width_adjustment=dict(lines=2.1),update_width_config=dict(ohlc_linewidth=1.1)
)
```
---
# Generating snp_dist matrix for tSNE analysis - Typhimurium
```
# Importing libraries
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import altair as alt
from matplotlib.pyplot import figure
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import plot_roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve, auc
from sklearn import metrics
import io
from sklearn.decomposition import PCA
from Bio.Phylo.TreeConstruction import _Matrix
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.manifold import TSNE
pd.set_option('display.max_columns', 300)
%pylab inline
# Function to create matrix using snp_distance produced by snp_dists for group 1
def create_matrix(fname):
d = pd.read_csv(fname, header = 0)
d = d.rename(columns = {'Unnamed: 0':''})
d = d.replace(np.NaN, '')
d1 = d
d1_labels = d.columns
d2_labels = pd.DataFrame(d1_labels)
d2_labels = d1_labels.transpose()
names = d1_labels.to_list()
# first record is empty, remove it
names.pop(0)
d.to_csv('/Users/joaocarlosgomesneto/Documents/frontiers_paper_salmonella_newport_typhimurium/data/typhimurium/group_1/snp_matrix/d.csv', header = False, index = False)
f1 = '/Users/joaocarlosgomesneto/Documents/frontiers_paper_salmonella_newport_typhimurium/data/typhimurium/group_1/snp_matrix/d.csv'
df1 = pd.read_csv(f1, header = None)
df2 = df1.drop([0], axis = 1)
df3 = df2.replace(np.NaN, '')
df4 = np.array(df3)
df5 = np.tril(df4)
df6 = np.array(df5).tolist()
    # extract the lower triangular matrix
lower = []
for i in range(0, len(df6)):
tmp = []
tmp = df6[i][:i]
lower.append(tmp)
    # append the diagonal (zeros) to each row of the lower triangular matrix
for i in range(0, len(lower)):
lower[i].insert(len(lower[i]), 0)
matrix = lower
m = _Matrix(names, matrix)
return m
# Run function to get the matrix
f1 = '/Users/joaocarlosgomesneto/Documents/frontiers_paper_salmonella_newport_typhimurium/data/typhimurium/group_1/input_data/distace_snp_sites_1.tsv'
matrix = create_matrix(f1)
snpdistgroup1typh = matrix
# Run the tSNE program
X_tsne = TSNE(learning_rate=200, n_components = 2, n_iter = 1000, random_state = 1).fit_transform(snpdistgroup1typh)
X_tsne #get the output
# Create a dataframe with tSNE output
np.random.seed(1)
a = pd.DataFrame(X_tsne)
a.columns = ['tSNE1', 'tSNE2']
a = a.reset_index()
a
# Get the genome ids
b = pd.DataFrame(snpdistgroup1typh.names)
b.columns = ['id']
b = b.reset_index()
b
# Merge all datasets
snp_group_1_typhimurium = pd.merge(b, a, on = 'index')
snp_group_1_typhimurium = snp_group_1_typhimurium[['id', 'tSNE1', 'tSNE2']]
snp_group_1_typhimurium
# Export the data
snp_group_1_typhimurium.to_csv('/Users/joaocarlosgomesneto/Documents/frontiers_paper_salmonella_newport_typhimurium/data/typhimurium/group_1/snp_matrix/snp_group_1_typhimurium.csv', header = True, index = False)
```
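As a quick visual sanity check of the embedding (an added step, not part of the original pipeline), the two tSNE columns can be scattered directly:
```
# Visualize the 2-D tSNE embedding of the SNP distance matrix
fig, ax = plt.subplots(figsize = (6, 5))
ax.scatter(snp_group_1_typhimurium['tSNE1'], snp_group_1_typhimurium['tSNE2'], s = 10, alpha = 0.6)
ax.set_xlabel('tSNE1')
ax.set_ylabel('tSNE2')
ax.set_title('Typhimurium group 1 - tSNE of SNP distances')
```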
---
# Starbucks Capstone Challenge
## Contents
1. [Introduction](#Introduction)
2. [Data Sets](#Data-Sets)
3. [Assess data](#Assess-the-data)
4. [Data cleaning](#Clean-the-data)
5. [Modelling](#Modelling)
## Introduction
This data set contains simulated data that mimics customer behavior on the Starbucks rewards mobile app. Once every few days, Starbucks sends out an offer to users of the mobile app. An offer can be merely an advertisement for a drink or an actual offer such as a discount or BOGO (buy one get one free). Some users might not receive any offer during certain weeks.
Not all users receive the same offer, and that is the challenge to solve with this data set.
Your task is to combine transaction, demographic and offer data to determine which demographic groups respond best to which offer type. This data set is a simplified version of the real Starbucks app because the underlying simulator only has one product whereas Starbucks actually sells dozens of products.
Every offer has a validity period before the offer expires. As an example, a BOGO offer might be valid for only 5 days. You'll see in the data set that informational offers have a validity period even though these ads are merely providing information about a product; for example, if an informational offer has 7 days of validity, you can assume the customer is feeling the influence of the offer for 7 days after receiving the advertisement.
You'll be given transactional data showing user purchases made on the app including the timestamp of purchase and the amount of money spent on a purchase. This transactional data also has a record for each offer that a user receives as well as a record for when a user actually views the offer. There are also records for when a user completes an offer.
Keep in mind as well that someone using the app might make a purchase through the app without having received an offer or seen an offer.
### Example
To give an example, a user could receive a discount offer buy 10 dollars get 2 off on Monday. The offer is valid for 10 days from receipt. If the customer accumulates at least 10 dollars in purchases during the validity period, the customer completes the offer.
However, there are a few things to watch out for in this data set. Customers do not opt into the offers that they receive; in other words, a user can receive an offer, never actually view the offer, and still complete the offer. For example, a user might receive the "buy 10 dollars get 2 dollars off offer", but the user never opens the offer during the 10 day validity period. The customer spends 15 dollars during those ten days. There will be an offer completion record in the data set; however, the customer was not influenced by the offer because the customer never viewed the offer.
## Data Sets
The data is contained in three files:
* portfolio.json - containing offer ids and meta data about each offer (duration, type, etc.)
* profile.json - demographic data for each customer
* transcript.json - records for transactions, offers received, offers viewed, and offers completed
```
!pip install imblearn
!pip install shap
import pandas as pd
import numpy as np
import json
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
from sklearn.metrics import plot_confusion_matrix
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
import warnings
import shap
# read in the json files
portfolio = pd.read_json('data/portfolio.json', orient='records', lines=True)
profile = pd.read_json('data/profile.json', orient='records', lines=True)
transcript = pd.read_json('data/transcript.json', orient='records', lines=True)
```
## Assess the data
### Portfolio dataset
Details of each offer sent.
* id (string) - offer id
* offer_type (string) - type of offer ie BOGO, discount, informational
* difficulty (int) - minimum required spend to complete an offer
* reward (int) - reward given for completing an offer
* duration (int) - time for offer to be open, in days
* channels (list of strings)
```
def assess_data(df, group_by=None):
'''
Prints out a number of useful characteristics of a DataFrame, including
the shape, number of null values, and an optional groupby count.
INPUTS:
df: pandas DataFrame. The DataFrame to explore.
group_by: str. A column label to group df by.
RETURNS:
None
'''
print('DataFrame shape: ', df.shape)
print()
print('Data types:')
print(df.dtypes)
print()
print('Column stats:')
print(df.describe())
print()
print('Null values:')
print(df.isnull().sum())
if group_by:
print()
print('Group by {}:'.format(group_by))
print(df.groupby(group_by).count())
portfolio.head()
assess_data(portfolio, 'offer_type')
```
### Profile
Demographic data for each customer.
* age (int) - age of the customer
* became_member_on (int) - date when customer created an app account
* gender (str) - gender of the customer (note some entries contain 'O' for other rather than M or F)
* id (str) - customer id
* income (float) - customer's income
```
profile.head()
assess_data(profile, 'gender')
# A max age of 118 is quite high - let's plot a histogram
plt.hist(profile['age']);
# It seems like the age of 118 could represent a null value
# This is supported by the counts matching with the null value counts above
profile[profile['age']==118].count()
# Let's plot the histogram of income to see if there are issues
# There don't seem to be any outliers or surprises
plt.hist(profile['income']);
```
### Transcript
Records for transactions, offers received, offers viewed, and offers completed.
* event (str) - record description (ie transaction, offer received, offer viewed, etc.)
* person (str) - customer id
* time (int) - time in hours since start of test. The data begins at time t=0
* value - (dict of strings) - either an offer id or transaction amount depending on the record
```
transcript.head()
assess_data(transcript, 'event')
# To understand the data more, we can look at all of the events for one user
transcript[transcript['person']=='78afa995795e4d85b5d9ceeca43f5fef']
```
## Clean the data
```
def column_bool(df, column, bool_item):
'''
Iterates through the given column in a DataFrame and produces a new column
populated with 1s if the column contains the desired item.
INPUTS:
df: pandas DataFrame. The DataFrame that contains the column.
column: str. The name of the column to iterate through.
bool_item: str. The name of the item to check for.
'''
bool_list = []
for item in df[column]:
if bool_item in item:
bool_list.append(1)
else:
bool_list.append(0)
df[bool_item] = bool_list
# To use in modelling, we need to convert the channel column to dummy variables
for channel in ['web', 'email', 'mobile', 'social']:
column_bool(portfolio, 'channels', channel)
portfolio.drop('channels', axis=1, inplace=True)
portfolio
# We also need to do this for the offer_type column
for item in ['bogo', 'informational', 'discount']:
column_bool(portfolio, 'offer_type', item)
portfolio.drop('offer_type', axis=1, inplace=True)
portfolio
```
### Profile
```
# Let's drop the cases where age = 118 since these could be rows with lots of nulls
# This is roughly 13% of users, but we still have 14,825 users
profile.drop(profile[profile['age']==118].index, inplace=True)
# Let's check that all nulls came from the age=118 people
profile.isnull().sum()
# The became_member_on column is currently an integer
# To use it in analysis, we should convert it to datetime
profile['became_member_on'] = pd.to_datetime(profile['became_member_on'].map(str), format='%Y%m%d')
profile.head()
# We should convert the gender column to booleans
for gender in ['F', 'M', 'O']:
column_bool(profile, 'gender', gender)
profile.drop('gender', axis=1, inplace=True)
profile.head()
```
### Transcript
```
# We are just going to be looking at whether an offer is used so we don't need the transaction events
transcript.drop(transcript[transcript['event']=='transaction'].index, inplace=True)
# After dropping transactions, we can pull out the offer ids from the value column
offer = []
for value in transcript['value']:
offer.append(list(value.values())[0])
transcript['offer'] = offer
transcript.drop('value', axis=1, inplace=True)
transcript.head()
```
In order to model whether an offer will be used or not, we want to create a DataFrame where each existing offer and person combination has just one row. We also want boolean columns indicating whether the offer has been viewed and used.
```
# In order to combine the rows below, time needs to be a string
transcript['time'] = transcript['time'].apply(lambda x: str(x))
transcript['time']
# This creates one row for each person-offer combination and the sequence of events for each combination
event_df = pd.DataFrame(transcript.groupby(['person','offer']).agg({'event': ', '.join, 'time': ', '.join})).reset_index()
event_df.head()
# Check that the merging happened in time order
time_unsorted = []
time_sorted = []
i = 0
while i < len(event_df['time']):
time_list = event_df['time'][i].split(', ')
time_list = list(map(int, time_list))
if time_list == sorted(time_list):
time_sorted.append(i)
else:
time_unsorted.append(i)
i+=1
# If they are in time order, time_unsorted should be empty
time_unsorted
# We now need to create the boolean columns for the offer being viewed and used
events = event_df['event'].str.split(', ')
received = []
viewed = []
used = []
# This iterates through each person-offer combination present
for event in events:
# Tries to get the time order of events
# Gets the location of offer received
try:
received_loc = event.index('offer received')
except:
received_loc = -1
# Gets the location of offer viewed
try:
viewed_loc = event.index('offer viewed')
except:
viewed_loc = -1
# Gets the location of offer completed
try:
completed_loc = event.index('offer completed')
except:
completed_loc = -1
# If there is an offer received in the timeline, return 1
if received_loc >= 0:
received.append(1)
else:
received.append(0)
# If there is an offer viewed in the timeline, return 1
if viewed_loc >= 0:
viewed.append(1)
# If viewed_loc is not -1 and the offer viewed happens before the
# offer completed, then it returns 1
# This is to prevent including instances where the offer was completed
# without viewing the offer (and so unintentionally)
if viewed_loc < completed_loc:
used.append(1)
else:
used.append(0)
else:
viewed.append(0)
used.append(0)
# Sets the results from the cell above as new columns
event_df['offer_received'] = received
event_df['offer_viewed'] = viewed
event_df['offer_used'] = used
event_df.drop(['event', 'time'], axis=1, inplace=True)
event_df.head()
# Check that every row has had an offer received
event_df.groupby('offer_received').count()
# As all of offer_received are 1s, we can drop this column
event_df.drop('offer_received', axis=1, inplace=True)
event_df.head()
```
### Merge the datasets
```
# Merging the event_df and portfolio DataFrames
df = event_df.merge(portfolio, left_on='offer', right_on='id')
df.drop('id', axis=1, inplace=True)
df.head()
# Merging the above with the profile DataFrames
df = df.merge(profile, left_on='person', right_on='id')
df.drop(['id'], axis=1, inplace=True)
df.head()
# Double check there are no nulls
df.isnull().sum()
```
### Feature building
```
# In the current format, became_member_on is difficult to use in a model
# We can re-format it to be the length of membership as at a certain date
# This date can be the date of the newest member joining, so it will be a measure of relative membership length
latest_join = df['became_member_on'].max()
membership_length = df['became_member_on'].apply(lambda x: latest_join - x).dt.days
df['membership_length'] = membership_length
df.head()
# We can create a number of new features to hopefully improve accuracy
# Create the squares of features
def col_sq(column, df=df):
'''
Takes a column and creates a column of it squared.
INPUTS:
column: str. The name of the column to be squared.
df: pandas DataFrame. Contains the column in question and is the destination of the new column.
RETURNS:
None
'''
col_name = column + '_2'
df[col_name] = df[column] ** 2
col_sq('age')
col_sq('reward')
col_sq('difficulty')
col_sq('duration')
# Create the difference between the difficulty and reward
df['reward_difference'] = df['difficulty'] - df['reward']
col_sq('reward_difference')
# Check that it has worked
df.head()
# We should normalise the income column
df['income_norm'] = (df['income'] - df['income'].mean())/df['income'].std()
df.head()
```
### Over-sampling
```
# Check the difference between the number of offers used and unused
df['offer_viewed'].value_counts()
# Although the ratio of c.4:1 is not extreme, it would still improve model accuracy to make it more even
sns.countplot(x = 'offer_viewed', data = df)
plt.show()
# Get list of columns contained in the DataFrame
df.columns
# Choose the columns to include in y and X
y_column = ['offer_used']
X_columns = ['reward', 'difficulty',
'duration', 'web', 'mobile', 'social', 'bogo',
'discount', 'age', 'M', 'O',
'membership_length', 'age_2', 'reward_2', 'difficulty_2', 'duration_2',
'reward_difference', 'reward_difference_2', 'income_norm']
# Select the y and X DataFrames
y = df.loc[:, y_column]
X = df.loc[:, X_columns]
# We will now over-sample the unused offer rows to make the proportion equal
# Initiate SMOTE
os = SMOTE()
# Split the data into training and testing datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Resample data and convert into DataFrames
os_df_X, os_df_y = os.fit_resample(X_train, y_train)
os_df_X = pd.DataFrame(data=os_df_X, columns=X_columns)
os_df_y= pd.DataFrame(data=os_df_y, columns=y_column)
# Check that the proportion is now equal
print("Proportion of unused offers in oversampled data is ",len(os_df_y[os_df_y['offer_used']==0])/len(os_df_X))
print("Proportion of used offers data in oversampled data is ",len(os_df_y[os_df_y['offer_used']==1])/len(os_df_X))
```
## Modelling
```
def logit(y_train, X_train):
'''
Fits a Logit model and prints out the summary table.
INPUTS:
y_train: list. The training set of the dependent variable.
X_train: pandas DataFrame. The training set of the independent variable.
RETURNS:
None
'''
logit_model = sm.Logit(y_train, X_train)
fitted_model = logit_model.fit()
print(fitted_model.summary2())
def logreg(X_train, y_train, X_test):
'''
Fits a Logistic Regression and predicts y values. It then prints out the
confusion matrix and classification report.
INPUTS:
y_train: list. The training set of the dependent variable.
X_train: pandas DataFrame. The training set of the independent variables.
X_test: pandas DataFrame. The test set of the independent variables.
RETURNS:
logreg: classifier. Fitted Logistic Regression model.
y_pred: list. Predicted y values from X_test.
'''
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
y_pred = logreg.predict(X_test)
print(classification_report(y_test, y_pred))
return logreg, y_pred
```
### Base model with new features
```
# Fit a logit model
logit(y_train, X_train)
X_columns_base = ['age', 'M', 'O', 'membership_length', 'age_2', 'income_norm']
X = X_train[X_columns_base]
X_test = X_test[X_columns_base]
logreg_base, y_pred_base = logreg(X, y_train, X_test)  # renamed so the logreg() helper stays callable below
```
### After over-sampling
```
# Fit a logit model
logit(os_df_y, os_df_X)
# Remove the columns where the p-value is > 0.05
X_columns_updated = ['duration', 'web', 'mobile', 'social',
'bogo', 'discount', 'age', 'M',
'membership_length', 'age_2', 'reward_2', 'difficulty_2',
'duration_2', 'reward_difference_2', 'income_norm']
X = os_df_X[X_columns_updated]
y = os_df_y[y_column]
# Re-fit the logit model
logit(y, X)
# Remove the social column as p > 0.05
X_columns_updated = ['duration', 'web', 'mobile',
'bogo', 'discount', 'age', 'M',
'membership_length', 'age_2', 'reward_2', 'difficulty_2',
'duration_2', 'reward_difference_2', 'income_norm']
X = os_df_X[X_columns_updated]
y = os_df_y[y_column]
# Re-fit the logit model
logit(y, X)
# Fit logistic regression
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
logreg, y_pred = logreg(X_train, y_train, X_test)
```
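The notebook imports `plot_confusion_matrix` but never calls it; as a quick added check (not in the original analysis), the confusion matrix of the over-sampled model can be plotted directly. Here `logreg` is the fitted model returned by the helper in the cell above.
```
# Plot the confusion matrix for the over-sampled logistic regression
plot_confusion_matrix(logreg, X_test, y_test)
plt.show()
```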
### Tuning model
```
# Set up parameters for GridSearchCV
parameters = {
'penalty': ['l1', 'l2'],
'C': [1.0, 6.0],
'solver': ['lbfgs', 'liblinear'],
'max_iter': [1000]
}
# Turns warnings off as the GridSearchCV produces a lot of warnings
# They are around the lbfgs solver not accepting l1 penalty and the liblinear model not converging
warnings.filterwarnings("ignore")
# Run GridSearchCV
clf = GridSearchCV(logreg, param_grid = parameters, cv = 5)
best_clf = clf.fit(X_train, y_train)
# Get predictions from tuned model
y_pred_tuned = clf.predict(X_test)
# Get the optimal parameters from tuned model
clf.best_params_
# Print classification report
print(classification_report(y_test, y_pred_tuned))
```
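`roc_auc_score` was imported earlier but never used; as one extra check that was not in the original notebook, the ROC AUC of the tuned model on the test set can be computed from `clf.predict_proba`:
```
# ROC AUC of the tuned model on the held-out test set
y_proba_tuned = clf.predict_proba(X_test)[:, 1]
print("Tuned model ROC AUC: {0:.3f}".format(roc_auc_score(np.ravel(y_test), y_proba_tuned)))
```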
### Model explainability
```
# SHAP doesn't allow GridSearchCV objects to be passed through
# I have created a logreg with the optimal parameters from GridSearchCV
# Note: logreg_tuned is defined with the tuned parameters but is not fitted here;
# the explainer below uses the already-fitted logreg model from earlier.
logreg_tuned = LogisticRegression(C=6.0, max_iter=1000, penalty='l1', solver='liblinear')
# Initialise SHAP
shap.initjs()
# compute SHAP values
shap_fig = plt.figure()
explainer = shap.Explainer(logreg, X_train)
shap_values = explainer(X_train)
#summary_plot
shap.plots.beeswarm(shap_values)
# Get coefficients from optimised model
coef_dict = {}
for coef, feat in zip(logreg.coef_[0,:],X_columns_updated):
coef_dict[feat] = coef
coef_dict
```
```
! mkdir ~/.kaggle
! cp kaggle.json ~/.kaggle/
! chmod 600 ~/.kaggle/kaggle.json
! kaggle datasets download -d mlg-ulb/creditcardfraud
!unzip creditcardfraud.zip
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
df=pd.read_csv('creditcard.csv')
df.head()
df.info()
df.isna().sum()
df.describe()
plt.figure(figsize=(20,60))
for i,feature in enumerate(df.columns):
plt.subplot(11,3,i+1)
sns.histplot(x=feature,data=df)
plt.show()
df.Class.value_counts()
plt.figure(figsize=(25,10))
sns.heatmap(df.corr(),annot=True)
df[['Amount','Class']].groupby('Class').describe()
plt.boxplot(x='Amount',data=df)
df.drop(df[df.Amount>10000].index,inplace=True) #removing the outliers
df[df.Class==0].plot.scatter('Amount','Time')
df[df.Class==1].plot.scatter('Amount','Time')
x=df.drop(['Class'],axis=1)
y=df['Class']
from sklearn.linear_model import LogisticRegression
import numpy as np
reg_model = LogisticRegression(max_iter=200,random_state=12, solver='liblinear')
reg_model.fit(x,y)
# coefficient matrix
coefficients = pd.concat([pd.DataFrame(x.columns),pd.DataFrame(np.transpose(reg_model.coef_))], axis = 1)
coefficients.columns = ['Feature','Importance Coefficient']
coefficients.sort_values(by='Importance Coefficient', inplace=True)
# Plotting coefficient values
plt.figure(figsize=(20,5))
sns.barplot(x='Feature', y='Importance Coefficient', data=coefficients)
plt.title("Logistic Regression with L2 Regularisation Feature Importance", fontsize=18)
plt.show()
x.drop(['Amount','Time'],axis=1,inplace=True) #Since there is no significant relationship between these two let's drop these
```
## Since the classes are highly imbalanced we have to perform some sampling methods to balance the data so that we could get the best result from our model.
```
leg_df = df[df.Class == 0]
fraud_df = df[df.Class == 1]
no_of_samples = round(leg_df.shape[0] * 0.05)
no_of_samples
from imblearn.over_sampling import RandomOverSampler
from sklearn.utils import resample
leg_df_2 = resample(leg_df, n_samples=no_of_samples, random_state=15)
# leg_df_2.describe()
df_sampled = pd.concat([leg_df_2,fraud_df],axis=0)
x_sampled = df_sampled.drop('Class', axis=1)
y_sampled = df_sampled.Class
ros = RandomOverSampler(random_state=42)
x,y = ros.fit_resample(x_sampled,y_sampled)
y.value_counts()
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x,y,shuffle=True,test_size=0.2)
from sklearn.linear_model import LogisticRegression
lr=LogisticRegression(max_iter=100,random_state=10)
lr.fit(x_train,y_train)
y_pred=lr.predict(x_test)
lr.score(x_test,y_test)
from sklearn.metrics import confusion_matrix,f1_score,recall_score,precision_score
sns.heatmap(confusion_matrix(y_test,y_pred),annot=True)
from sklearn.metrics import f1_score,recall_score,precision_score
def print_scores(y_test,y_pred):
print(f'The precision score is {precision_score(y_test,y_pred)}')
print(f'The recall score is {recall_score(y_test,y_pred)}')
print(f'The f1 score is {f1_score(y_test,y_pred)}')
print_scores(y_test,y_pred)
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV
xgb=XGBClassifier()
params={'eta':[0.1,0.01],
'max_depth':[1,3,4],
'max_leaf_nodes':[10,20,30],
'objective':['binary:logistic']}
clf=GridSearchCV(xgb,params)
clf.fit(x_train,y_train)
clf.best_params_
clf.best_score_
clf1=XGBClassifier(max_depth=4, eta=0.1, max_leaf_nodes=10, objective='binary:logistic')  # keyword arguments, matching the tuned settings
clf1.fit(x_train,y_train)
y_pred1=clf1.predict(x_test)
print(f'F1 score {f1_score(y_test,y_pred1)}')
print(f'Precision {precision_score(y_test,y_pred1)}')
print(f'Recall {recall_score(y_test,y_pred1)}')
sns.heatmap(confusion_matrix(y_test,y_pred1),annot=True)
```
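Because the classes are so imbalanced, the threshold-dependent scores above are worth complementing with a threshold-independent ROC-AUC; a small optional sketch reusing the fitted `clf1` and the existing test split:
```
from sklearn.metrics import roc_auc_score
y_proba1 = clf1.predict_proba(x_test)[:, 1]  # probability of the fraud class
print(f'ROC-AUC {roc_auc_score(y_test, y_proba1):.4f}')
```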
# 3. XGBoost_GPU
**Start from the most basic features, and try to improve step by step.**
Kaggle score:
## Run name
```
import time
project_name = 'TalkingdataAFD2018'
step_name = 'XGBoost_GPU'
time_str = time.strftime("%Y%m%d_%H%M%S", time.localtime())
run_name = '%s_%s_%s' % (project_name, step_name, time_str)
print('run_name: %s' % run_name)
t0 = time.time()
```
## Important params
```
date = 6
print('date: ', date)
test_n_rows = None
# test_n_rows = 18790469
# test_n_rows = 10*10000
day_rows = {
0: {
'n_skiprows': 1,
'n_rows': 10 * 10000
},
6: {
'n_skiprows': 1,
'n_rows': 9308568
},
7: {
'n_skiprows': 1 + 9308568,
'n_rows': 59633310
},
8: {
'n_skiprows': 1 + 9308568 + 59633310,
'n_rows': 62945075
},
9: {
'n_skiprows': 1 + 9308568 + 59633310 + 62945075,
'n_rows': 53016937
}
}
n_skiprows = day_rows[date]['n_skiprows']
n_rows = day_rows[date]['n_rows']
```
## Import PKGs
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from IPython.display import display
import os
import gc
import time
import random
import zipfile
import h5py
import pickle
import math
from PIL import Image
import shutil
from tqdm import tqdm
import multiprocessing
from multiprocessing import cpu_count
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import accuracy_score
random_num = np.random.randint(10000)
print('random_num: %s' % random_num)
```
## Project folders
```
cwd = os.getcwd()
input_folder = os.path.join(cwd, 'input')
output_folder = os.path.join(cwd, 'output')
model_folder = os.path.join(cwd, 'model')
log_folder = os.path.join(cwd, 'log')
print('input_folder: \t\t\t%s' % input_folder)
print('output_folder: \t\t\t%s' % output_folder)
print('model_folder: \t\t\t%s' % model_folder)
print('log_folder: \t\t\t%s' % log_folder)
train_csv_file = os.path.join(input_folder, 'train.csv')
train_sample_csv_file = os.path.join(input_folder, 'train_sample.csv')
test_csv_file = os.path.join(input_folder, 'test.csv')
sample_submission_csv_file = os.path.join(input_folder, 'sample_submission.csv')
print('\ntrain_csv_file: \t\t%s' % train_csv_file)
print('train_sample_csv_file: \t\t%s' % train_sample_csv_file)
print('test_csv_file: \t\t\t%s' % test_csv_file)
print('sample_submission_csv_file: \t%s' % sample_submission_csv_file)
```
## Load data
```
# %%time
train_csv = pd.read_csv(train_csv_file, skiprows=range(1, n_skiprows), nrows=n_rows, parse_dates=['click_time'])
test_csv = pd.read_csv(test_csv_file, nrows=test_n_rows, parse_dates=['click_time'])
sample_submission_csv = pd.read_csv(sample_submission_csv_file)
print('train_csv.shape: \t\t', train_csv.shape)
print('test_csv.shape: \t\t', test_csv.shape)
print('sample_submission_csv.shape: \t', sample_submission_csv.shape)
print('train_csv.dtypes: \n', train_csv.dtypes)
display(train_csv.head(2))
display(test_csv.head(2))
display(sample_submission_csv.head(2))
y_data = train_csv['is_attributed']
train_csv.drop(['is_attributed'], axis=1, inplace=True)
display(y_data.head())
```
## Features
```
train_csv['day'] = train_csv['click_time'].dt.day.astype('uint8')
train_csv['hour'] = train_csv['click_time'].dt.hour.astype('uint8')
train_csv['minute'] = train_csv['click_time'].dt.minute.astype('uint8')
train_csv['second'] = train_csv['click_time'].dt.second.astype('uint8')
print('train_csv.shape: \t', train_csv.shape)
display(train_csv.head(2))
test_csv['day'] = test_csv['click_time'].dt.day.astype('uint8')
test_csv['hour'] = test_csv['click_time'].dt.hour.astype('uint8')
test_csv['minute'] = test_csv['click_time'].dt.minute.astype('uint8')
test_csv['second'] = test_csv['click_time'].dt.second.astype('uint8')
print('test_csv.shape: \t', test_csv.shape)
display(test_csv.head(2))
# quick demo of np.ravel_multi_index (used by df_add_counts below)
arr = np.array([[3,6,6],[4,5,1]])
print(arr)
print(np.ravel_multi_index(arr, (7,6)))
print(np.ravel_multi_index(arr, (7,6), order='F'))
def df_add_counts(df, cols, tag="_count"):
arr_slice = df[cols].values
unq, unqtags, counts = np.unique(np.ravel_multi_index(arr_slice.T, arr_slice.max(0) + 1), return_inverse=True, return_counts=True)
df["_".join(cols) + tag] = counts[unqtags]
return df
def df_add_uniques(df, cols, tag="_unique"):
gp = df[cols] \
.groupby(by=cols[0:len(cols) - 1])[cols[len(cols) - 1]] \
.nunique() \
.reset_index() \
.rename(index=str, columns={cols[len(cols) - 1]: "_".join(cols)+tag})
df = df.merge(gp, on=cols[0:len(cols) - 1], how='left')
return df
train_csv = df_add_counts(train_csv, ['ip', 'day', 'hour'])
train_csv = df_add_counts(train_csv, ['ip', 'app'])
train_csv = df_add_counts(train_csv, ['ip', 'app', 'os'])
train_csv = df_add_counts(train_csv, ['ip', 'device'])
train_csv = df_add_counts(train_csv, ['app', 'channel'])
train_csv = df_add_uniques(train_csv, ['ip', 'channel'])
display(train_csv.head())
test_csv = df_add_counts(test_csv, ['ip', 'day', 'hour'])
test_csv = df_add_counts(test_csv, ['ip', 'app'])
test_csv = df_add_counts(test_csv, ['ip', 'app', 'os'])
test_csv = df_add_counts(test_csv, ['ip', 'device'])
test_csv = df_add_counts(test_csv, ['app', 'channel'])
test_csv = df_add_uniques(test_csv, ['ip', 'channel'])
display(test_csv.head())
```
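As a quick sanity check of these helpers, a tiny toy frame (not the competition data) shows the count column that `df_add_counts` appends:
```
# two rows share the same (ip, app) pair, so their ip_app_count is 2
toy = pd.DataFrame({'ip': [1, 1, 2], 'app': [5, 5, 7]})
print(df_add_counts(toy, ['ip', 'app']))
```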
## Prepare data
```
train_useless_features = ['click_time', 'attributed_time']
train_csv.drop(train_useless_features, axis=1, inplace=True)
test_useless_features = ['click_time', 'click_id']
test_csv.drop(test_useless_features, axis=1, inplace=True)
display(train_csv.head())
display(test_csv.head())
x_train, x_val, y_train, y_val = train_test_split(train_csv, y_data, test_size=0.01, random_state=2017)
x_test = test_csv
print(x_train.shape)
print(y_train.shape)
print(x_val.shape)
print(y_val.shape)
print(x_test.shape)
```
## Train
```
import xgboost as xgb
from sklearn.metrics import roc_auc_score
xg_train = xgb.DMatrix(x_train, label=y_train)
xg_val = xgb.DMatrix(x_val, label=y_val)
xg_test = xgb.DMatrix(x_test)
# setup parameters for xgboost
# NOTE: the DMatrix objects above are kept for reference, but training uses the
# sklearn wrapper so that the predict_proba calls below work unchanged.
params = {
    'n_estimators': 10,            # corresponds to num_boost_round in the native API
    'max_depth': 6,
    'learning_rate': 0.3,
    'objective': 'binary:logistic',
    'eval_metric': 'logloss',
    # 'tree_method': 'gpu_hist',   # uncomment on a GPU-enabled XGBoost build
}
clf = xgb.XGBClassifier(**params)
clf.fit(
    x_train,
    y_train,
    eval_set=[(x_train, y_train), (x_val, y_val)],
    verbose=True
)
evals_result = clf.evals_result()
# predict_proba returns one column per class; keep the positive-class probability
y_train_proba = clf.predict_proba(x_train)[:, 1]
y_train_pred = (y_train_proba >= 0.5).astype(int)
acc_train = accuracy_score(y_train, y_train_pred)
roc_train = roc_auc_score(y_train, y_train_proba)
print('acc_train: %.4f \t roc_train: %.4f' % (acc_train, roc_train))
y_train_pred = clf.predict(x_train)
acc_train = accuracy_score(y_train, y_train_pred)
roc_train = roc_auc_score(y_train, y_train_proba)
print('acc_train: %.4f \t roc_train: %.4f' % (acc_train, roc_train))
y_val_proba = clf.predict_proba(x_val)[:, 1]
y_val_pred = (y_val_proba>=0.5).astype(int)
acc_val = accuracy_score(y_val, y_val_pred)
roc_val = roc_auc_score(y_val, y_val_proba)
print('acc_val: %.4f \t roc_val: %.4f' % (acc_val, roc_val))
```
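As an optional check on the engineered features, the fitted model's importances can be inspected with xgboost's standard plotting helper:
```
# assumes the fitted clf from the training cell above
xgb.plot_importance(clf, max_num_features=15)
plt.show()
```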
## Predict
```
run_name_acc = run_name + '_' + str(int(roc_val*10000)).zfill(4)
print(run_name_acc)
y_test_proba = clf.predict_proba(x_test)[:, 1]
print(y_test_proba.shape)
print(y_test_proba[:20])
def save_proba(y_train_proba, y_train, y_val_proba, y_val, y_test_proba, click_ids, file_name):
print(click_ids[:5])
if os.path.exists(file_name):
os.remove(file_name)
print('File removed: \t%s' % file_name)
    with h5py.File(file_name, 'w') as h:
h.create_dataset('y_train_proba', data=y_train_proba)
h.create_dataset('y_train', data=y_train)
h.create_dataset('y_val_proba', data=y_val_proba)
h.create_dataset('y_val', data=y_val)
h.create_dataset('y_test_proba', data=y_test_proba)
h.create_dataset('click_ids', data=click_ids)
print('File saved: \t%s' % file_name)
def load_proba(file_name):
with h5py.File(file_name, 'r') as h:
y_train_proba = np.array(h['y_train_proba'])
y_train = np.array(h['y_train'])
y_val_proba = np.array(h['y_val_proba'])
y_val = np.array(h['y_val'])
y_test_proba = np.array(h['y_test_proba'])
click_ids = np.array(h['click_ids'])
print('File loaded: \t%s' % file_name)
print(click_ids[:5])
return y_train_proba, y_train, y_val_proba, y_val, y_test_proba, click_ids
y_proba_file = os.path.join(model_folder, 'proba_%s.p' % run_name_acc)
save_proba(y_train_proba, y_train, y_val_proba, y_val, y_test_proba, np.array(sample_submission_csv['click_id']), y_proba_file)
y_train_proba, y_train, y_val_proba, y_val, y_test_proba, click_ids = load_proba(y_proba_file)
print(y_train_proba.shape)
print(y_train.shape)
print(y_val_proba.shape)
print(y_val.shape)
print(y_test_proba.shape)
print(len(click_ids))
# %%time
submission_csv_file = os.path.join(output_folder, 'pred_%s.csv' % run_name_acc)
print(submission_csv_file)
submission_csv = pd.DataFrame({ 'click_id': click_ids , 'is_attributed': y_test_proba })
submission_csv.to_csv(submission_csv_file, index = False)
print('Time cost: %.2f s' % (time.time() - t0))
print('random_num: ', random_num)
print('date: ', date)
print(run_name_acc)
print('Done!')
```
# VacationPy
----
#### Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
# Import API key
from api_keys import g_key
```
### Store Part I results into DataFrame
* Load the csv exported in Part I to a DataFrame
```
weather_df = pd.read_csv("weather.csv")
weather_df
```
### Humidity Heatmap
* Configure gmaps.
* Use the Lat and Lng as locations and Humidity as the weight.
* Add Heatmap layer to map.
```
gmaps.configure(api_key=g_key)
locations = weather_df[["latitude", "longitude"]].astype(float)
humidity_val = weather_df["humidity"].astype(float)
```
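The cell above configures gmaps and prepares the locations and humidity weights, but it never builds the heat map itself. A minimal sketch of that missing step (standard jupyter-gmaps calls) would be:
```
fig = gmaps.figure()
heat_layer = gmaps.heatmap_layer(locations, weights=humidity_val,
                                 dissipating=False, max_intensity=100,
                                 point_radius=1)
fig.add_layer(heat_layer)
fig
```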
### Create new DataFrame fitting weather criteria
* Narrow down the cities to fit weather conditions.
* Drop any rows with null values.
```
ideal_weath = weather_df[weather_df["humidity"] < 50]
ideal_weath = ideal_weath[ideal_weath["max temp"] < 80]
ideal_weath = ideal_weath[ideal_weath["clouds"] < 50]
ideal_weath
```
### Hotel Map
* Store into variable named `hotel_df`.
* Add a "Hotel Name" column to the DataFrame.
* Set parameters to search for hotels within 5000 meters.
* Hit the Google Places API for each city's coordinates.
* Store the first Hotel result into the DataFrame.
* Plot markers on top of the heatmap.
```
hotel_df = ideal_weath[["latitude", "longitude"]].copy()
hotel_df
base_place = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
hotel_names = []
for x in range(len(hotel_df)):
    params = {"location" : f"{hotel_df.iloc[x,0]},{hotel_df.iloc[x,1]}",
              "radius" : 5000,
              "type" : "hotel",
              "key" : g_key
             }
    response = requests.get(base_place, params=params)
    place_data = response.json()
    # keep the first hotel returned for this location (NaN if the search comes back empty)
    hotel_names.append(place_data["results"][0]["name"] if place_data["results"] else np.nan)
hotel_df["Hotel Name"] = hotel_names
hotel_df
# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
locations = hotel_df[["latitude", "longitude"]]
# Add marker layer ontop of heat map
mark_layer = gmaps.symbol_layer(locations, fill_color = "rgba(0,150,0,0.4)",
                                stroke_color = "rgba(0,0,150,0.4)", scale=2)
# Display figure
fig = gmaps.figure()
fig.add_layer(mark_layer)
```
```
%matplotlib inline
from numpy import *
from matplotlib.pyplot import *
from scipy.sparse.linalg import *
```
Consider the following one-dimensional PDE:
$$
-u_{xx}(x) = f(x)\quad\mathrm{ in }\ \Omega = (0, \pi)
$$
$$
u(x) = 0, \quad\mathrm{ on }\ \partial\Omega = \{0, \pi\}
$$
Given the following $4^{th}$ order finite difference approximation of the second order derivative:
$$u_{xx}(x_i) = \frac{-u_{i-2}+16u_{i-1}-30u_i+16u_{i+1}-u_{i+2}}{12h^2}$$
Implement a function that, given the domain interval, the forcing function, the number of discretization points, and the boundary conditions, returns the matrix $A$ and the right hand side $b$.
```
def finDif(omega,f,n,bc):
omg0=omega[0]
omg1=omega[-1]
h = ( omg1-omg0 )/(n-1)
""" diagonal elements as per 4th order finite difference"""
# constructing A
c0 = 30*ones((n,))
c1 = -16*ones((n-1,))
c2 = ones((n-2,))
A = (diag(c0, 0) + diag(c1, -1) + diag(c1, +1) + diag(c2, -2) + diag(c2, +2))
A /= 12.*h*h
#print(A)
#print(linalg.cond(A))
# constructing b
x = linspace(omg0, omg1, n)
b = f(x)
# boundary conditions
A[0,:] = A[:,0] = 0
A[0,0] = A[-1,-1] = 1
b[0] = bc[0]
A[-1,:] = A[:,-1] = 0
b[-1] = bc[-1]
return A, b
```
Call the function using:
```
omega = [0,pi]
f = lambda x : sin(x)
n=100
bc = [0,0]
A, b = finDif(omega, f, n, bc)
#print(A)
```
Implement two functions that compute the LU and the Cholesky factorization of the system matrix $A$
```
"""LU factorization"""
def LU(A):
A = A.copy()
N = len(A)
for k in range(N-1):
if (abs(A[k,k]) < 1e-15):
raise RuntimeError("Null pivot")
A[k+1:N,k] /= A[k,k]
for j in range(k+1,N):
A[k+1:N,j] -= A[k+1:N,k]*A[k,j]
L=tril(A)
for i in range(N):
L[i,i]=1.0
U = triu(A)
return L, U
L, U = LU(A)
"""Cholesky decomposition"""
def cholesky(A):
A = A.copy()
N = len(A)
for k in range(N-1):
A[k,k] = sqrt(A[k,k])
A[k+1:N,k] = A[k+1:N,k]/A[k,k]
for j in range(k+1,N):
A[j:N,j] = A[j:N,j] - A[j:N,k]*A[j,k]
A[-1,-1] = sqrt(A[-1,-1])
L=tril(A)
return L, L.transpose()
HT, H = cholesky(A)
```
Implement forward and backward substitution functions to exploit the developed factorization methods to solve the derived linear system of equations.
```
def L_solve(L,rhs):
x = zeros_like(rhs)
N = len(L)
x[0] = rhs[0]/L[0,0]
for i in range(1,N):
x[i] = (rhs[i] - dot(L[i, 0:i], x[0:i]))/L[i,i]
return x
def U_solve(U,rhs):
x = zeros_like(rhs)
N = len(U)
    x[-1] = rhs[-1]/U[-1,-1]
for i in reversed(range(N-1)):
x[i] = (rhs[i] - dot(U[i, i+1:N], x[i+1:N]))/U[i,i]
return x
```
Solve the derived linear system using the implemented functions and plot the computed solution:
```
x = linspace(omega[0], omega[-1], n)
y_lu = L_solve(L,b)
u_lu = U_solve(U,y_lu)
_ =plot(x,u_lu,'black',linestyle='dotted', label='LU' )
_=legend()
```
Considering the new domain $\Omega = (0,1)$ and the forcing term $f(x) = x(1-x)$ with B.C. $u(x) = 0$, on $\partial \Omega = {0,1}$ produce a plot and a table where you show the decay of the error w.r.t. the number of grid points.
(The analytical solution for the above problems is $u_{an} = \frac{x^4}{12} - \frac{x^3}{6} + \frac{x}{12}$)
```
def errors(omega, f, bc, points):
errors = []
for i in range(len(points)):
n = points[i]
x = linspace(omega[0], omega[1], n)
A_n, bn = finDif(omega, f, n, bc)
L_n, Un = LU(A_n)
w_n = L_solve(L_n, bn)
u_n = U_solve(Un, w_n)
errors.append(linalg.norm((x**4/12. - x**3/6. + x/12) - u_n, 2))
return errors
f = lambda x: x*(1-x)
points = list(range(10, 200, 10))
errors = errors([0,1], f, [0,0], points)
_ = plot(points, errors , 'black',linestyle='dotted')
```
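The cell above plots the decay; the table that is also asked for can be produced with a minimal sketch like the following (plain prints, so no extra dependencies):
```
# grid size vs. L2 error of the computed solution
print('   n        error')
for n_i, e_i in zip(points, errors):
    print('%4d   %.6e' % (n_i, e_i))
```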
Exploit the derived LU factorizations to compute the condition number of the system's matrix $A$ using the original problem formulation.
```
# inverse power method
def IPM(A, x0, mu, eps=1.0e-12, nmax=1000):
M = A - mu*eye(len(A))
L,U = LU(M)
q = x0/linalg.norm(x0,2)
err = eps + 1.0
it = 0
while (err > eps and it < nmax ):
y = L_solve(L, q)
x = U_solve(U, y)
q = x/linalg.norm(x,2)
z = dot(A,q)
l = dot(q.T,z)
err = linalg.norm(z-l*q,2)
it += 1
print("error_IPM =", err, "iterations_IPM =", it)
print("lambda_IPM =", l)
return l,q
# power method to compute
def PM(A, z0, tol=1e-12, nmax=1000):
q = z0/linalg.norm(z0,2)
it = 0
err = tol + 1.
while (it < nmax and err > tol):
z = dot(A,q)
l = dot(q.T,z)
err = linalg.norm(z-l*q,2)
q = z/linalg.norm(z,2)
it += 1
print("error_PM =", err, "iterations_PM =", it)
print("lambda_PM =", l)
return l,q
#l,x = PM(A,z0)
#l_np, x_np = numpy.linalg.eig(A)
#print("numpy")
#print(l_np)
# computes max and min eigenvalues
def condNumb(A):
z0 = ones((len(A), ))
lmax = PM(A, z0)[0]
lmin = IPM(A, z0, 0.0)[0]
return lmax/lmin
condNumb(A)
```
Implement a preconditioned Conjugate Gradient method to solve the original linear system of equations using an iterative method:
```
# conjugate gradient
def cg(A, b, P, nmax=len(A), eps=1e-10):
x = zeros_like(b)
it = 0
r = b - dot(A,x)
tol = eps + 1
N=len(A)
rho_old = 1.
p_old = zeros_like(b)
while (it < nmax and tol > eps):
it += 1
z = linalg.solve(P,r)
rho = dot(r,z)
if (it > 1):
beta = rho/rho_old
p = z + beta*p_old
else:
p = z
q = dot(A,p)
alpha = rho/(dot(p,q))
x += p*alpha
r -= q*alpha
p_old = p
rho_old = rho
tol = linalg.norm(r,2)
return x
```
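The function is not exercised above; a small usage sketch with a simple Jacobi (diagonal) preconditioner, compared against the LU solution of the original system, could look like this:
```
# Jacobi preconditioner: the diagonal of A (any symmetric positive definite choice would do)
P_jacobi = diag(diag(A))
u_cg = cg(A, b, P_jacobi)
print('||u_cg - u_lu||_2 =', linalg.norm(u_cg - u_lu, 2))
```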
Consider the following time dependent variation of the PDE starting from the original problem formulation:
$$
u'(t)-u_{xx} = \alpha(t)f(x)
$$
for $t\in [0,T]$, with $\alpha(t) = \cos(t)$ and $T = 6\pi$
Use the same finite difference scheme to derive the semi-discrete formulation and solve it using a forward Euler's method.
Plot the time dependent solution at $x = \pi/2$, $x=1$, and $x=\pi$.
```
# forward Euler routine
def fe(u0,t0,tf,h,alpha,A,b):
t = arange(t0,tf+1e-10, h)
sol = zeros((len(t), len(u0)))
sol[0] = u0
for i in range(1,len(t)):
u2 = -dot(A,sol[i-1])
af = alpha(t[i-1])*b
sol[i] = sol[i-1] + h*u2 + h*af
return sol, t
# plots
omega = [0, pi]
x2=(omega[-1] - omega[0])
val1 = round(n / x2 * pi/2.) -1
val2= round(n/x2) - 1
val3 = round(n/x2*pi) - 1
t0 = 0
tf = 6*pi
alpha = lambda y: cos(y)
lmax, vect = PM(A, ones_like(x))
h = 1/lmax  # forward Euler step size bounded by the largest eigenvalue of A
# u0 = sin(x)
u0 = sin(x)
sol, t = fe(u0,t0,tf, h,alpha, A,b)
_ = plot(t, sol[:,val2], 'black',linestyle='dotted',label='x=1')
_ = plot(t, sol[:,val1], 'blue',linestyle='dotted',label='x=π/2')
_ = plot(t, sol[:,val3], 'red',linestyle='dotted',label='x=π')
_ = legend()
```
Given the original $Au = b$ system, implement an algorithm to compute the eigenvalues and eigenvectors of the matrix $A$. Exploit the computed LU factorization
```
def eigenvalue_LU(A,eps,nmax):
    B = A.copy()
    val_old = diag(B)
    err = eps+1.0
    it = 0
    while it < nmax and err > eps:
        L,U = LU(B)
        B = U@L
        val_new = diag(B)
        err = linalg.norm(val_new - val_old,2)
        it += 1
        val_old = val_new
    return val_new
```
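A short usage sketch (the iteration repeats an LU factorization, so the maximum number of iterations is kept modest here):
```
eig_approx = eigenvalue_LU(A, 1e-8, 200)
print('largest eigenvalue estimate:', eig_approx.max())
print('smallest eigenvalue estimate:', eig_approx.min())
```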
Compute the inverse of the matrix A exploiting the derived LU factorization
```
def inverse(A):
B=A.copy()
I=eye(n)
for i in range(n):
B[:,i]=U_solve(U,L_solve(L,I[:,i]))
return B
```
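A quick check that the LU-based inverse is consistent (it reuses the global `L`, `U` and `n` of the original system):
```
A_inv = inverse(A)
print('||A A_inv - I||_2 =', linalg.norm(dot(A, A_inv) - eye(n), 2))
```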
Consider the following Cauchy problem
$$
\begin{cases}
y'= -ty^2 \quad 0\le t \le 2\\
y(0) = 1
\end{cases}
$$
Implement a Backward Euler's method in a suitable function and solve the resulting non-linear equation using a Newton's method.
```
"""newton method"""
def newton(f,f_prime,x0,epsilon=1e-11,iter=1000):
x = x0
for n in range(0,iter):
if abs(f(x)) < epsilon:
return x
if f_prime(x) == 0:
return None
x = x - f(x)/f_prime(x)
return x
f=lambda t,y: -t*(y**2)
f_prime=lambda t,y: -t*2*y
"""backward euler"""
def b_euler(y0,g,g1,omega,n):
tspace=linspace(omega[0],omega[1],n)
h=(omega[1]-omega[0])/n
f=lambda t,z,x: z-h*g(t,z)-x
f1=lambda t,z,x: 1-h*g1(t,z)
y=zeros(n)
y[0]=y0
for i in range(1,n):
fn=lambda z: f(tspace[i],z,y[i-1])
fn1=lambda z: f1(tspace[i],z,y[i-1])
y[i]=newton(fn,fn1,y[i-1])
return y
n=25
y=b_euler(1,f,f_prime,array([0,2]),n)
plot(linspace(0,2,n),y,'black',linestyle='dotted',label="approx_sol")
plot(linspace(0,2,n),2/(linspace(0,2,n)**2+2),'go',label="exact_sol")
_=legend()
```
```
import os
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import tensorflow as tf
from sklearn.model_selection import train_test_split
seq_length=2
x_data_dim=4
batch_size=100
min_max_normalization_flag=True
data_dir = '../dataset'
fname = os.path.join(data_dir, 'data-02-stock_daily.csv')
df = pd.read_csv(fname)
dataset=df.copy()
ori_Y=dataset.pop("Close")
ori_X=dataset.copy()
X_train, X_test, Y_train, Y_test = train_test_split(ori_X,ori_Y, test_size=0.2, shuffle=False)
X_train, X_val, Y_train, Y_val= train_test_split(X_train,Y_train, test_size=0.2, shuffle=False)
## Compute the min, max, mean and std of the training data.
dataset_stats = X_train.describe()
dataset_stats = dataset_stats.transpose()
## data normalization
def min_max_norm(x):
return (x - dataset_stats['min']) / (dataset_stats['max'] - dataset_stats['min'])
def standard_norm(x):
return (x - dataset_stats['mean']) / dataset_stats['std']
if min_max_normalization_flag==True:
min_max_norm_train_data = min_max_norm(X_train)
min_max_norm_val_data = min_max_norm(X_val)
min_max_norm_test_data = min_max_norm(X_test)
data_gen_train=tf.keras.preprocessing.sequence.TimeseriesGenerator(min_max_norm_train_data.values.tolist(), Y_train.values.tolist(),
length=seq_length, sampling_rate=1,
batch_size=batch_size)
data_gen_val=tf.keras.preprocessing.sequence.TimeseriesGenerator(min_max_norm_val_data.values.tolist(), Y_val.values.tolist(),
length=seq_length, sampling_rate=1,
batch_size=batch_size)
data_gen_test=tf.keras.preprocessing.sequence.TimeseriesGenerator(min_max_norm_test_data.values.tolist(), Y_test.values.tolist(),
length=seq_length, sampling_rate=1,
batch_size=batch_size)
else:
data_gen_train = tf.keras.preprocessing.sequence.TimeseriesGenerator(X_train.values.tolist(),Y_train.values.tolist(),
length=seq_length, sampling_rate=1,
batch_size=batch_size)
data_gen_val = tf.keras.preprocessing.sequence.TimeseriesGenerator(X_val.values.tolist(),Y_val.values.tolist(),
length=seq_length, sampling_rate=1,
batch_size=batch_size)
data_gen_test = tf.keras.preprocessing.sequence.TimeseriesGenerator(X_test.values.tolist(),Y_test.values.tolist(),
length=seq_length, sampling_rate=1,
batch_size=batch_size)
input_Layer = tf.keras.layers.Input(shape=(seq_length, x_data_dim))
x=tf.keras.layers.GRU(20,activation='tanh')(input_Layer) ##GRU
x=tf.keras.layers.Dense(20,activation='relu')(x)
x=tf.keras.layers.Dense(10,activation='relu')(x)
Out_Layer=tf.keras.layers.Dense(1,activation=None)(x)
model = tf.keras.Model(inputs=[input_Layer], outputs=[Out_Layer])
model.summary()
loss_function=tf.keras.losses.mean_squared_error
optimize=tf.keras.optimizers.Adam(learning_rate=0.001)
metric=tf.keras.metrics.mean_absolute_error
model.compile(loss=loss_function,
optimizer=optimize,
metrics=[metric])
history = model.fit(
data_gen_train,
validation_data=data_gen_val,
    steps_per_epoch=len(X_train)//batch_size,
epochs=1000,
validation_freq=1
)
print(model.evaluate(data_gen_test))
test_data_X, test_data_Y=data_gen_test[0]
prediction_Y=model.predict(test_data_X).flatten()
Y_test=test_data_Y.flatten()
visual_y=[]
visual_pre_y=[]
for i in range(len(prediction_Y)):
label = Y_test[i]
prediction = prediction_Y[i]
print("실제가격: {:.3f}, 예상가격: {:.3f}".format(label, prediction))
visual_y.append(label)
visual_pre_y.append(prediction)
time = range(1, len(visual_y) + 1)
plt.plot(time, visual_y, 'r', label='true')
plt.plot(time, visual_pre_y, 'b', label='prediction')
plt.title('stock prediction')
plt.xlabel('time')
plt.ylabel('value')
plt.legend()
plt.show()
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
# REINFORCE in TensorFlow
Just like we did before for Q-learning, this time we'll design a TensorFlow network to learn `CartPole-v0` via policy gradient (REINFORCE).
Most of the code in this notebook is taken from approximate Q-learning, so you'll find it more or less familiar and even simpler.
```
import sys, os
if 'google.colab' in sys.modules:
%tensorflow_version 1.x
if not os.path.exists('.setup_complete'):
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/grading.py -O ../grading.py
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/week5_policy_based/submit.py
!touch .setup_complete
# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
import gym
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
A caveat: with some versions of `pyglet`, the following cell may crash with `NameError: name 'base' is not defined`. The corresponding bug report is [here](https://github.com/pyglet/pyglet/issues/134). If you see this error, try restarting the kernel.
```
env = gym.make("CartPole-v0")
# gym compatibility: unwrap TimeLimit
if hasattr(env, '_max_episode_steps'):
env = env.env
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape
plt.imshow(env.render("rgb_array"))
```
# Building the network for REINFORCE
For REINFORCE algorithm, we'll need a model that predicts action probabilities given states.
For numerical stability, please __do not include the softmax layer into your network architecture__.
We'll use softmax or log-softmax where appropriate.
```
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.InteractiveSession()
# create input variables. We only need <s, a, r> for REINFORCE
ph_states = tf.compat.v1.placeholder('float32', (None,) + state_dim, name="states")
ph_actions = tf.compat.v1.placeholder('int32', name="action_ids")
ph_cumulative_rewards = tf.compat.v1.placeholder('float32', name="cumulative_returns")
model = tf.keras.Sequential([
tf.keras.layers.Dense(units=128, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.6),
tf.keras.layers.Dense(units=n_actions, activation=None)
])
logits = model(ph_states)
policy = tf.nn.softmax(logits)
log_policy = tf.nn.log_softmax(logits)
# Initialize model parameters
sess.run(tf.compat.v1.global_variables_initializer())
def predict_probs(states):
"""
Predict action probabilities given states.
:param states: numpy array of shape [batch, state_shape]
:returns: numpy array of shape [batch, n_actions]
"""
return policy.eval({ph_states: [states]})[0]
```
### Play the game
We can now use our newly built agent to play the game.
```
from numpy.random import default_rng
def generate_session(env, t_max=1000):
"""
Play a full session with REINFORCE agent.
Returns sequences of states, actions, and rewards.
"""
rng = default_rng()
# arrays to record session
states, actions, rewards = [], [], []
s = env.reset()
for t in range(t_max):
# action probabilities array aka pi(a|s)
action_probs = predict_probs(s)
# Sample action with given probabilities.
a = rng.choice(n_actions, replace=False, p=action_probs)
new_s, r, done, info = env.step(a)
# record session history to train later
states.append(s)
actions.append(a)
rewards.append(r)
s = new_s
if done:
break
return states, actions, rewards
# test it
states, actions, rewards = generate_session(env)
actions
```
### Computing cumulative rewards
$$
\begin{align*}
G_t &= r_t + \gamma r_{t + 1} + \gamma^2 r_{t + 2} + \ldots \\
&= \sum_{i = t}^T \gamma^{i - t} r_i \\
&= r_t + \gamma * G_{t + 1}
\end{align*}
$$
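For intuition, here is a minimal pure-Python version of that backward recurrence (illustration only; the cell below computes the same thing with `scipy.signal.lfilter`):
```
def cumulative_rewards_loop(rewards, gamma=0.99):
    """Naive O(T) reference implementation of G_t = r_t + gamma * G_{t+1}."""
    G = 0.0
    out = []
    for r in reversed(list(rewards)):   # walk from the last step to the first
        G = r + gamma * G
        out.append(G)
    return out[::-1]

print(cumulative_rewards_loop([0, 0, 1, 0, 0, 1, 0], gamma=0.9))
# -> [1.40049, 1.5561, 1.729, 0.81, 0.9, 1.0, 0.0]
```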
```
from scipy.signal import lfilter
def get_cumulative_rewards(rewards, # rewards at each step
gamma=0.99 # discount for reward
):
"""
Take a list of immediate rewards r(s,a) for the whole session
and compute cumulative returns (a.k.a. G(s,a) in Sutton '16).
G_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ...
A simple way to compute cumulative rewards is to iterate from the last
to the first timestep and compute G_t = r_t + gamma*G_{t+1} recurrently
You must return an array/list of cumulative rewards with as many elements as in the initial rewards.
"""
r = np.array(rewards[::-1]).astype(np.float32)
a = [1, -gamma]
b = [1]
cum_rewards = lfilter(b, a, x=r)
return cum_rewards[::-1]
assert len(get_cumulative_rewards(range(100))) == 100
assert np.allclose(
get_cumulative_rewards([0, 0, 1, 0, 0, 1, 0], gamma=0.9),
[1.40049, 1.5561, 1.729, 0.81, 0.9, 1.0, 0.0])
assert np.allclose(
get_cumulative_rewards([0, 0, 1, -2, 3, -4, 0], gamma=0.5),
[0.0625, 0.125, 0.25, -1.5, 1.0, -4.0, 0.0])
assert np.allclose(
get_cumulative_rewards([0, 0, 1, 2, 3, 4, 0], gamma=0),
[0, 0, 1, 2, 3, 4, 0])
print("looks good!")
```
#### Loss function and updates
We now need to define objective and update over policy gradient.
Our objective function is
$$ J \approx { 1 \over N } \sum_{s_i,a_i} G(s_i,a_i) $$
REINFORCE defines a way to compute the gradient of the expected reward with respect to policy parameters. The formula is as follows:
$$ \nabla_\theta \hat J(\theta) \approx { 1 \over N } \sum_{s_i, a_i} \nabla_\theta \log \pi_\theta (a_i \mid s_i) \cdot G_t(s_i, a_i) $$
We can abuse Tensorflow's capabilities for automatic differentiation by defining our objective function as follows:
$$ \hat J(\theta) \approx { 1 \over N } \sum_{s_i, a_i} \log \pi_\theta (a_i \mid s_i) \cdot G_t(s_i, a_i) $$
When you compute the gradient of that function with respect to network weights $\theta$, it will become exactly the policy gradient.
```
# This code selects the log-probabilities (log pi(a_i|s_i)) for those actions that were actually played.
indices = tf.stack([tf.range(tf.shape(log_policy)[0]), ph_actions], axis=-1)
log_policy_for_actions = tf.gather_nd(log_policy, indices)
# Policy objective as in the last formula. Please use reduce_mean, not reduce_sum.
# You may use log_policy_for_actions to get log probabilities for actions taken.
# Also recall that we defined ph_cumulative_rewards earlier.
J = tf.reduce_mean(log_policy_for_actions * ph_cumulative_rewards)
```
As a reminder, for a discrete probability distribution (like the one our policy outputs), entropy is defined as:
$$ \operatorname{entropy}(p) = -\sum_{i = 1}^n p_i \cdot \log p_i $$
```
# Entropy regularization. If you don't add it, the policy will quickly deteriorate to
# being deterministic, harming exploration.
entropy = - (tf.reduce_sum(policy * log_policy))
# Maximizing X is the same as minimizing -X, hence the sign.
# Note: tf.GradientTape does not record ops in graph mode, so we build the
# training op with the TF1-style optimizer, which works with the placeholders above.
loss = -(J + 0.1 * entropy)
update = tf.compat.v1.train.AdamOptimizer().minimize(loss)
def train_on_session(states, actions, rewards, t_max=1000):
"""given full session, trains agent with policy gradient"""
cumulative_rewards = get_cumulative_rewards(rewards)
update.run({
ph_states: states,
ph_actions: actions,
ph_cumulative_rewards: cumulative_rewards,
})
return sum(rewards)
# Initialize optimizer parameters
env.reset()
sess.run(tf.compat.v1.global_variables_initializer())
```
### The actual training
```
for i in range(100):
rewards = [train_on_session(*generate_session(env)) for _ in range(100)] # generate new sessions
print("mean reward: %.3f" % (np.mean(rewards)))
if np.mean(rewards) > 300:
print("You Win!") # but you can train even further
break
```
### Results & video
```
# Record sessions
import gym.wrappers
with gym.wrappers.Monitor(gym.make("CartPole-v0"), directory="videos", force=True) as env_monitor:
sessions = [generate_session(env_monitor) for _ in range(100)]
# Show video. This may not work in some setups. If it doesn't
# work for you, you can download the videos and view them locally.
from pathlib import Path
from IPython.display import HTML
video_names = sorted([s for s in Path('videos').iterdir() if s.suffix == '.mp4'])
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format(video_names[-1])) # You can also try other indices
from submit import submit_cartpole
submit_cartpole(generate_session, 'your.email@example.com', 'YourAssignmentToken')
```
That's all, thank you for your attention!
Not having enough? There's an actor-critic waiting for you in the honor section. But make sure you've seen the videos first.
<a id="Geomedians_and_Geomedoids_top"></a>
# Geomedians and Geomedoids
<hr>
## Background
This notebook is inspired by an IEEE publication titled [High-Dimensional Pixel Composites From
Earth Observation Time Series](http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8004469) authored by Dale Roberts, Norman Mueller, and Alexis McIntyre.
This notebook explains geometric medians (geomedians) and geometric medoids (geomedoids) and applies these compositing methods to Landsat 7 imagery and displays a rendering of the computed composites.
<hr>
## Index
* [Import Dependencies and Connect to the Data Cube](#Geomedians_and_Geomedoids_import)
* [Choose Platform and Product](#Geomedians_and_Geomedoids_plat_prod)
* [Define the Extents of the Analysis](#Geomedians_and_Geomedoids_define_extents)
* [Load Data from the Data Cube](#Geomedians_and_Geomedoids_retrieve_data)
* [Geometric Medoid Compositing](#Geomedians_and_Geomedoids_medoid)
* [Geometric Median Compositing](#Geomedians_and_Geomedoids_median)
## <span id="Geomedians_and_Geomedoids_import">Import Dependencies and Connect to the Data Cube [▴](#Geomedians_and_Geomedoids_top)</span>
```
import sys
import os
sys.path.append(os.environ.get('NOTEBOOK_ROOT'))
import matplotlib.pyplot as plt
from utils.data_cube_utilities.dc_display_map import display_map
from utils.data_cube_utilities.clean_mask import landsat_clean_mask_full
from utils.data_cube_utilities.dc_mosaic import \
create_hdmedians_multiple_band_mosaic
from utils.data_cube_utilities.plotter_utils import figure_ratio
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
import datacube
dc = datacube.Datacube()
```
## <span id="Geomedians_and_Geomedoids_plat_prod">Choose Platform and Product [▴](#Geomedians_and_Geomedoids_top)</span>
```
# Get available products
products_info = dc.list_products()
print("LANDSAT 7 Products:")
products_info[["platform", "name"]][products_info.platform == "LANDSAT_7"]
print("LANDSAT 8 Products:")
products_info[["platform", "name"]][products_info.platform == "LANDSAT_8"]
```
**Choose the platform and product**
```
platform = 'LANDSAT_7'
product = 'ls7_usgs_sr_scene'
collection = 'c1'
level = 'l2'
```
## <span id="Geomedians_and_Geomedoids_define_extents">Define the Extents of the Analysis [▴](#Geomedians_and_Geomedoids_top)</span>
```
# Zanzibar, Tanzania
# lat = (-6.2238, -6.1267)
# lon = (39.2298, 39.2909)
# Masaki, Dar es Salaam, Tanzania
lat = (-6.7758, -6.7357)
lon = (39.2473, 39.2981)
time_range = ("2015-01-01", "2015-12-31")
display_map(latitude = lat, longitude = lon)
```
## <span id="Geomedians_and_Geomedoids_retrieve_data">Load Data from the Data Cube [▴](#Geomedians_and_Geomedoids_top)</span>
```
landsat_ds = \
dc.load(product = product, platform = platform,
lat = lat, lon = lon, time = time_range,
measurements = ['red', 'green', 'nir', 'swir1', \
'swir2', 'blue', 'pixel_qa'],
group_by='solar_day')
clean_mask = landsat_clean_mask_full(dc, landsat_ds, product=product, platform=platform,
collection=collection, level=level)
landsat_ds = landsat_ds.where(clean_mask)
```
## <span id="Geomedians_and_Geomedoids_medoid">Geometric Medoid Compositing [▴](#Geomedians_and_Geomedoids_top)</span>
> To compute a Geomedoid composite, the geometric medoid algorithm is applied to the time series of every pixel (indexed by `lat,lon`).
Every pixel (indexed by `time,lat,lon`) in the time series is treated as an independent observation used in the computation of the geometric medoid.
> In the case of Landsat 7 imagery, an observation `<red,green,blue,nir,swir1,swir2>` is a vector/point embedded in 6-dimensional feature-space.
> ### Formal Definition of a Geometric Medoid
>Given a finite set $\mathbb{X}$ of $\mathbb{_p}$-dimensional observation vectors $\mathbb{X} = \{x_1,...,x_n \}$ , the medoid of these observations is given by the following equation <sup>[[1]](#hd_medians)</sup>:
>$$ m := argmin_{ x \in \mathbb{X}} \sum_{i=1}^{n}{ \lVert x - x_i\rVert } $$
> We use the `create_hdmedians_multiple_band_mosaic()` function with the setting `operation='medoid'` to create a geomedoid composite. This function comes from `utils.data_cube_utilities.dc_mosaic`.
**Run geomedoid compositor**
```
geomedoid_mosaic = \
create_hdmedians_multiple_band_mosaic(landsat_ds,
clean_mask = clean_mask,
operation = 'medoid')
```
> ### Example of a composited `swir1` band
```
figsize = figure_ratio(landsat_ds, fixed_width=12)
geomedoid_mosaic.swir1.plot(figsize = figsize, cmap = 'magma')
plt.show()
```
## <span id="Geomedians_and_Geomedoids_median">Geometric Median Compositing [▴](#Geomedians_and_Geomedoids_top)</span>
> To compute a Geomedian composite, the geometric median algorithm is applied to the time series of every pixel (indexed by `lat,lon`).
Every pixel (indexed by `time,lat,lon`) in the time series is treated as an independent observation used in the computation of the geometric median.
> In the case of Landsat 7 imagery an observation `<red,green,blue,nir,swir1,swir2>` is a vector/point embedded in 6-dimensional feature-space.
> ### Formal Definition of a Geometric Median
>Given a finite set $\mathbb{X}$ of $\mathbb{_p}$-dimensional observation vectors $\mathbb{X} = \{ x_1,...,x_n \}$ , the Median of these observations is given by the following equation <sup>[[1]](#hd_medians)</sup>:
>$$ \hat{\mu} := argmin_{ x \in \mathbb{R^{_p}}} \sum_{i=1}^{n}{ \lVert x - x_i\rVert } $$
> **Note:**
> There is a subtle difference between the definition of the geometric median and the medoid: the search space for the solution differs and has the effect that the medoid returns one of the true observations whereas the geometric median can be described as a synthetic (not physically observed) observation.<sup>[[2]](#multi_dim_medians)</sup>
> We use the `create_hdmedians_multiple_band_mosaic()` function with the setting `operation='median'` to create a geomedian composite. Note that `operation='median'` is the default setting, so this can be omitted for geomedians. This function comes from `utils.data_cube_utilities.dc_mosaic`.
**Run geomedian compositor**
```
geomedian_mosaic = \
create_hdmedians_multiple_band_mosaic(landsat_ds,
clean_mask = clean_mask,
operation = 'median')
```
> ### Example of a composited `swir1` band
```
figsize = figure_ratio(landsat_ds, fixed_width=12)
geomedian_mosaic.swir1.plot(figsize = figsize, cmap = 'magma')
plt.show()
```
----
# References
<span id='hd_medians'></span>
1. Dale Roberts 2018. Hdmedians. Github: https://github.com/daleroberts/hdmedians,
<span id='multi_dim_medians'></span>
2. Small, C. G. (1990). A survey of multidimensional medians. International Statistical Review/Revue Internationale de Statistique, 263-277.
```
#IMPORT ALL LIBRARIES HERE
#IMPORT THE PANDAS LIBRARY
import pandas as pd
#IMPORT THE POSTGRESQL LIBRARY
import psycopg2
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
#IMPORT THE CHART LIBRARY
from matplotlib import pyplot as plt
from matplotlib import style
#IMPORT THE PDF LIBRARY
from fpdf import FPDF
#IMPORT THE IO (BASEPATH) LIBRARY
import io
#IMPORT THE BASE64 IMAGE LIBRARY
import base64
#IMPORT THE NUMPY LIBRARY
import numpy as np
#IMPORT THE EXCEL LIBRARY
import xlsxwriter
#IMPORT THE SIMILARITY LIBRARY
import n0similarities as n0
#FUNCTION TO UPLOAD THE DATA FROM CSV TO POSTGRESQL
def uploadToPSQL(host, username, password, database, port, table, judul, filePath, name, subjudul, dataheader, databody):
#TEST THE DATABASE CONNECTION
try:
for t in range(0, len(table)):
#TURN THE DATA INTO A LIST
rawstr = [tuple(x) for x in zip(dataheader, databody[t])]
#CONNECT TO THE DATABASE
connection = psycopg2.connect(user=username,password=password,host=host,port=port,database=database)
cursor = connection.cursor()
connection.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT);
#CHECK WHETHER THE TABLE EXISTS
cursor.execute("SELECT * FROM information_schema.tables where table_name=%s", (table[t],))
exist = bool(cursor.rowcount)
#IF IT ALREADY EXISTS, DROP IT FIRST, THEN RECREATE IT
if exist == True:
cursor.execute("DROP TABLE "+ table[t] + " CASCADE")
cursor.execute("CREATE TABLE "+table[t]+" (index SERIAL, tanggal date, total varchar);")
#IF IT DOES NOT EXIST, CREATE THE TABLE
else:
cursor.execute("CREATE TABLE "+table[t]+" (index SERIAL, tanggal date, total varchar);")
#INSERT THE DATA INTO THE TABLE THAT WAS CREATED
cursor.execute('INSERT INTO '+table[t]+'(tanggal, total) values ' +str(rawstr)[1:-1])
#IF EVERYTHING SUCCEEDS, RETURN TRUE
return True
#IF THE CONNECTION FAILS
except (Exception, psycopg2.Error) as error :
return error
#CLOSE THE CONNECTION
finally:
if(connection):
cursor.close()
connection.close()
#FUNCTION TO BUILD THE CHARTS; THE DATA IS FETCHED FROM THE DATABASE ORDERED BY DATE WITH A LIMIT
#IT ALSO CALLS THE MAKEEXCEL AND MAKEPDF FUNCTIONS
def makeChart(host, username, password, db, port, table, judul, filePath, name, subjudul, A2, B2, C2, D2, E2, F2, G2, H2,I2, J2, K2, limitdata, wilayah, tabledata, basePath):
try:
datarowsend = []
for t in range(0, len(table)):
#TEST THE DATABASE CONNECTION
connection = psycopg2.connect(user=username,password=password,host=host,port=port,database=db)
cursor = connection.cursor()
#FETCH DATA FROM THE DATABASE WITH THE LIMIT PASSED IN FROM THE VARIABLE BELOW
postgreSQL_select_Query = "SELECT * FROM "+table[t]+" ORDER BY tanggal DESC LIMIT " + str(limitdata)
cursor.execute(postgreSQL_select_Query)
mobile_records = cursor.fetchall()
uid = []
lengthx = []
lengthy = []
#STORE THE DATABASE ROWS IN LOCAL VARIABLES
for row in mobile_records:
uid.append(row[0])
lengthx.append(row[1])
lengthy.append(row[2])
datarowsend.append(mobile_records)
#CHART TITLE
judulgraf = A2 + " " + wilayah[t]
#bar
style.use('ggplot')
fig, ax = plt.subplots()
#THE CHART DATA GOES HERE
ax.bar(uid, lengthy, align='center')
#CHART TITLE
ax.set_title(judulgraf)
ax.set_ylabel('Total')
ax.set_xlabel('Tanggal')
ax.set_xticks(uid)
ax.set_xticklabels((lengthx))
b = io.BytesIO()
#SAVE THE CHART IN PNG FORMAT
plt.savefig(b, format='png', bbox_inches="tight")
#CONVERT THE CHART TO BASE64
barChart = base64.b64encode(b.getvalue()).decode("utf-8").replace("\n", "")
plt.show()
#line
#THE CHART DATA GOES HERE
plt.plot(lengthx, lengthy)
plt.xlabel('Tanggal')
plt.ylabel('Total')
#CHART TITLE
plt.title(judulgraf)
plt.grid(True)
l = io.BytesIO()
#SAVE THE CHART AS AN IMAGE
plt.savefig(l, format='png', bbox_inches="tight")
#CONVERT THE IMAGE TO BASE64
lineChart = base64.b64encode(l.getvalue()).decode("utf-8").replace("\n", "")
plt.show()
#pie
#CHART TITLE
plt.title(judulgraf)
#THE CHART DATA GOES HERE
plt.pie(lengthy, labels=lengthx, autopct='%1.1f%%',
shadow=True, startangle=180)
plt.plot(legend=None)
plt.axis('equal')
p = io.BytesIO()
#SAVE THE CHART AS AN IMAGE
plt.savefig(p, format='png', bbox_inches="tight")
#CONVERT THE CHART TO BASE64
pieChart = base64.b64encode(p.getvalue()).decode("utf-8").replace("\n", "")
plt.show()
#SAVE THE CHARTS TO THE DIRECTORY AS PNG FILES
#BARCHART
bardata = base64.b64decode(barChart)
barname = basePath+'jupyter/CEIC/17. Sektor Perbankan/img/'+name+''+table[t]+'-bar.png'
with open(barname, 'wb') as f:
f.write(bardata)
#LINECHART
linedata = base64.b64decode(lineChart)
linename = basePath+'jupyter/CEIC/17. Sektor Perbankan/img/'+name+''+table[t]+'-line.png'
with open(linename, 'wb') as f:
f.write(linedata)
#PIECHART
piedata = base64.b64decode(pieChart)
piename = basePath+'jupyter/CEIC/17. Sektor Perbankan/img/'+name+''+table[t]+'-pie.png'
with open(piename, 'wb') as f:
f.write(piedata)
#CALL THE EXCEL FUNCTION
makeExcel(datarowsend, A2, B2, C2, D2, E2, F2, G2, H2,I2, J2, K2, name, limitdata, table, wilayah, basePath)
#CALL THE PDF FUNCTION
makePDF(datarowsend, judul, barChart, lineChart, pieChart, name, subjudul, A2, B2, C2, D2, E2, F2, G2, H2,I2, J2, K2, limitdata, table, wilayah, basePath)
#IF THE CONNECTION FAILS
except (Exception, psycopg2.Error) as error :
print (error)
#CLOSE THE CONNECTION
finally:
if(connection):
cursor.close()
connection.close()
#FUNCTION TO BUILD THE PDF FROM THE DATABASE DATA IN THE F2 TABLE FORMAT
#THE PLUGIN USED IS FPDF
def makePDF(datarow, judul, bar, line, pie, name, subjudul, A2, B2, C2, D2, E2, F2, G2, H2,I2, J2, K2, lengthPDF, table, wilayah, basePath):
#THE PDF IS SET TO A4 SIZE IN LANDSCAPE ORIENTATION
pdf = FPDF('L', 'mm', [210,297])
#ADD A PDF PAGE
pdf.add_page()
#SET THE FONT AND THE PADDING
pdf.set_font('helvetica', 'B', 20.0)
pdf.set_xy(145.0, 15.0)
#SHOW THE PDF TITLE
pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=judul, border=0)
#SET THE FONT AND THE PADDING
pdf.set_font('arial', '', 14.0)
pdf.set_xy(145.0, 25.0)
#SHOW THE PDF SUBTITLE
pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=subjudul, border=0)
#DRAW A LINE BELOW THE SUBTITLE
pdf.line(10.0, 30.0, 287.0, 30.0)
pdf.set_font('times', '', 10.0)
pdf.set_xy(17.0, 37.0)
pdf.set_font('Times','B',11.0)
pdf.ln(0.5)
th1 = pdf.font_size
#BUILD THE METADATA TABLE IN THE PDF
pdf.cell(100, 2*th1, "Kategori", border=1, align='C')
pdf.cell(177, 2*th1, A2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Region", border=1, align='C')
pdf.cell(177, 2*th1, B2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Frekuensi", border=1, align='C')
pdf.cell(177, 2*th1, C2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Unit", border=1, align='C')
pdf.cell(177, 2*th1, D2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Sumber", border=1, align='C')
pdf.cell(177, 2*th1, E2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Status", border=1, align='C')
pdf.cell(177, 2*th1, F2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "ID Seri", border=1, align='C')
pdf.cell(177, 2*th1, G2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Kode SR", border=1, align='C')
pdf.cell(177, 2*th1, H2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Tanggal Obs. Pertama", border=1, align='C')
pdf.cell(177, 2*th1, str(I2.date()), border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Tanggal Obs. Terakhir ", border=1, align='C')
pdf.cell(177, 2*th1, str(J2.date()), border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Waktu pembaruan terakhir", border=1, align='C')
pdf.cell(177, 2*th1, str(K2.date()), border=1, align='C')
pdf.ln(2*th1)
pdf.set_xy(17.0, 125.0)
pdf.set_font('Times','B',11.0)
epw = pdf.w - 2*pdf.l_margin
col_width = epw/(lengthPDF+1)
pdf.ln(0.5)
th = pdf.font_size
#HEADER OF THE F2 DATA TABLE
pdf.cell(col_width, 2*th, str("Wilayah"), border=1, align='C')
#LOOP OVER THE DATE HEADERS
for row in datarow[0]:
pdf.cell(col_width, 2*th, str(row[1]), border=1, align='C')
pdf.ln(2*th)
#BODY OF THE F2 TABLE
for w in range(0, len(table)):
data=list(datarow[w])
pdf.set_font('Times','B',10.0)
pdf.set_font('Arial','',9)
pdf.cell(col_width, 2*th, wilayah[w], border=1, align='C')
#DATA BY DATE
for row in data:
pdf.cell(col_width, 2*th, str(row[2]), border=1, align='C')
pdf.ln(2*th)
#PLACE THE CHART IMAGES
for s in range(0, len(table)):
col = pdf.w - 2*pdf.l_margin
pdf.ln(2*th)
widthcol = col/3
#ADD A PAGE
pdf.add_page()
#IMAGE FILES TAKEN FROM THE DIRECTORY DEFINED ABOVE
pdf.image(basePath+'jupyter/CEIC/17. Sektor Perbankan/img/'+name+''+table[s]+'-bar.png', link='', type='',x=8, y=80, w=widthcol)
pdf.set_xy(17.0, 144.0)
col = pdf.w - 2*pdf.l_margin
pdf.image(basePath+'jupyter/CEIC/17. Sektor Perbankan/img/'+name+''+table[s]+'-line.png', link='', type='',x=103, y=80, w=widthcol)
pdf.set_xy(17.0, 144.0)
col = pdf.w - 2*pdf.l_margin
pdf.image(basePath+'jupyter/CEIC/17. Sektor Perbankan/img/'+name+''+table[s]+'-pie.png', link='', type='',x=195, y=80, w=widthcol)
pdf.ln(4*th)
#WRITE THE PDF FILE
pdf.output(basePath+'jupyter/CEIC/17. Sektor Perbankan/pdf/'+A2+'.pdf', 'F')
#THE MAKEEXCEL FUNCTION TURNS THE DATA FROM THE DATABASE INTO THE F2 EXCEL TABLE FORMAT
#THE PLUGIN USED IS XLSXWRITER
def makeExcel(datarow, A2, B2, C2, D2, E2, F2, G2, H2, I2, J2, K2, name, limit, table, wilayah, basePath):
#CREATE THE EXCEL FILE
workbook = xlsxwriter.Workbook(basePath+'jupyter/CEIC/17. Sektor Perbankan/excel/'+A2+'.xlsx')
#CREATE THE EXCEL WORKSHEET
worksheet = workbook.add_worksheet('sheet1')
#FORMAT SETTINGS FOR BORDERS AND BOLD FONT
row1 = workbook.add_format({'border': 2, 'bold': 1})
row2 = workbook.add_format({'border': 2})
#HEADER FOR THE F2 EXCEL TABLE
header = ["Wilayah", "Kategori","Region","Frekuensi","Unit","Sumber","Status","ID Seri","Kode SR","Tanggal Obs. Pertama","Tanggal Obs. Terakhir ","Waktu pembaruan terakhir"]
#THE DATE COLUMNS ARE COLLECTED INTO THE HEADER VARIABLE
for rowhead2 in datarow[0]:
header.append(str(rowhead2[1]))
#THE HEADER VALUES ARE WRITTEN HERE, ROW BY ROW AND COLUMN BY COLUMN
for col_num, data in enumerate(header):
worksheet.write(0, col_num, data, row1)
#THE F2 TABLE BODY IS WRITTEN HERE
for w in range(0, len(table)):
data=list(datarow[w])
body = [wilayah[w], A2, B2, C2, D2, E2, F2, G2, H2, str(I2.date()), str(J2.date()), str(K2.date())]
for rowbody2 in data:
body.append(str(rowbody2[2]))
for col_num, data in enumerate(body):
worksheet.write(w+1, col_num, data, row2)
#CLOSE THE EXCEL FILE
workbook.close()
#THIS IS WHERE THE VARIABLES ARE DEFINED BEFORE THEY ARE PASSED TO THE FUNCTIONS
#FIRST CALL THE UPLOADTOPSQL FUNCTION; ONLY IF IT SUCCEEDS, CALL MAKECHART
#AND MAKECHART IN TURN CALLS MAKEEXCEL AND MAKEPDF
#BASE PATH USED LATER TO CREATE OR READ FILES
basePath = 'C:/Users/ASUS/Documents/bappenas/'
#REGION SIMILARITY FILE
filePathwilayah = basePath+'data mentah/CEIC/allwilayah.xlsx';
#READ THE EXCEL FILE WITH PANDAS
readexcelwilayah = pd.read_excel(filePathwilayah)
dfwilayah = list(readexcelwilayah.values)
readexcelwilayah.fillna(0)
allwilayah = []
#SELECT THE DATA LEVEL: PROVINCE (PROV), REGENCY/CITY (KABKOT), DISTRICT (KEC), OR VILLAGE (KEL)
tipewilayah = 'prov'
if tipewilayah == 'prov':
for x in range(0, len(dfwilayah)):
allwilayah.append(dfwilayah[x][1])
elif tipewilayah=='kabkot':
for x in range(0, len(dfwilayah)):
allwilayah.append(dfwilayah[x][3])
elif tipewilayah == 'kec':
for x in range(0, len(dfwilayah)):
allwilayah.append(dfwilayah[x][5])
elif tipewilayah == 'kel':
for x in range(0, len(dfwilayah)):
allwilayah.append(dfwilayah[x][7])
semuawilayah = list(set(allwilayah))
#SET THE DATABASE VARIABLES AND THE DATA TO BE SENT TO THE FUNCTIONS HERE
name = "04. Bank Umum Bank Swasta Devisa (KBD001-KBD037) Part 1"
host = "localhost"
username = "postgres"
password = "1234567890"
port = "5432"
database = "ceic"
judul = "Produk Domestik Bruto (AA001-AA007)"
subjudul = "Badan Perencanaan Pembangunan Nasional"
filePath = basePath+'data mentah/CEIC/17. Sektor Perbankan/'+name+'.xlsx';
limitdata = int(8)
readexcel = pd.read_excel(filePath)
tabledata = []
wilayah = []
databody = []
#THE EXCEL DATA IS READ HERE USING PANDAS
df = list(readexcel.values)
head = list(readexcel)
body = list(df[0])
readexcel.fillna(0)
#SELECT THE DATA ROWS TO DISPLAY
rangeawal = 106
rangeakhir = 107
rowrange = range(rangeawal, rangeakhir)
#THIS FILTERS WHETHER THE SELECTED DATA SHOULD USE REGION-NAME SIMILARITY OR NOT
#SET IT TO 'Wilayah' TO USE SIMILARITY
#SET IT TO ANYTHING ELSE IF IT IS NOT REGION DATA
jenisdata = "Indonesia"
#THE DATA ROWS ARE LOOPED OVER TO FIND THE MOST SIMILAR REGION NAME
#IF THE JENISDATA VARIABLE IS 'Wilayah', THIS BRANCH IS TAKEN
if jenisdata == 'Wilayah':
for x in rowrange:
rethasil = 0
big_w = 0
for w in range(0, len(semuawilayah)):
namawilayah = semuawilayah[w].lower().strip()
nama_wilayah_len = len(namawilayah)
hasil = n0.get_levenshtein_similarity(df[x][0].lower().strip()[nama_wilayah_len*-1:], namawilayah)
if hasil > rethasil:
rethasil = hasil
big_w = w
wilayah.append(semuawilayah[big_w].capitalize())
tabledata.append('produkdomestikbruto_'+semuawilayah[big_w].lower().replace(" ", "") + "" + str(x))
testbody = []
for listbody in df[x][11:]:
if ~np.isnan(listbody) == False:
testbody.append(str('0'))
else:
testbody.append(str(listbody))
databody.append(testbody)
#IF IT IS NOT REGION DATA, THIS BRANCH IS TAKEN
else:
for x in rowrange:
wilayah.append(jenisdata.capitalize())
tabledata.append('produkdomestikbruto_'+jenisdata.lower().replace(" ", "") + "" + str(x))
testbody = []
for listbody in df[x][11:]:
if ~np.isnan(listbody) == False:
testbody.append(str('0'))
else:
testbody.append(str(listbody))
databody.append(testbody)
#HEADER VALUES FOR THE PDF AND THE EXCEL FILE
A2 = "Data Migas"
B2 = df[rangeawal][1]
C2 = df[rangeawal][2]
D2 = df[rangeawal][3]
E2 = df[rangeawal][4]
F2 = df[rangeawal][5]
G2 = df[rangeawal][6]
H2 = df[rangeawal][7]
I2 = df[rangeawal][8]
J2 = df[rangeawal][9]
K2 = df[rangeawal][10]
#HEADER CONTENTS FOR THE F2 DATA TABLE
dataheader = []
for listhead in head[11:]:
dataheader.append(str(listhead))
#UPLOAD THE DATA TO SQL; IF IT SUCCEEDS, CALL THE CHART FUNCTION
sql = uploadToPSQL(host, username, password, database, port, tabledata, judul, filePath, name, subjudul, dataheader, databody)
if sql == True:
makeChart(host, username, password, database, port, tabledata, judul, filePath, name, subjudul, A2, B2, C2, D2, E2, F2, G2, H2,I2, J2, K2, limitdata, wilayah, tabledata, basePath)
else:
print(sql)
```

# Qiskit Finance: Portfolio diversification
## Introduction
In asset management, there are broadly two approaches: active and passive investment management. Within passive investment management, there are index-tracking funds and there are approaches based on portfolio diversification, which aim at representing a portfolio with a large number of assets by a smaller number of representative stocks.
This notebook illustrates a portfolio diversification problem, which has recently become popular for two reasons:
1. it makes it possible to mimic the performance of an index (or a similarly large set of assets) with a limited budget, at limited transaction costs. That is: traditional index-tracking may purchase all assets in the index, ideally with the same weights as in the index. This may be impractical for a number of reasons: the total of even a single round lot per asset may amount to more than the assets under management, the large scale of the index-tracking problem with integrality constraints may render the optimization problem difficult, and the transaction costs of the frequent rebalancing to adjust the positions to the weights in the index may render the approach expensive. Thus, a popular approach is to select a portfolio of $q$ assets that represent the market with $n$ assets, where $q$ is significantly smaller than $n$, but where the portfolio replicates the behavior of the underlying market. To determine how to group assets into $q$ clusters and how to determine which $q$ assets should represent the $q$ clusters amounts to solving a large-scale optimization problem. In the following we describe the mathematical model for the portfolio diversification problem as introduced in [Cornuejols & Tutuncu, 2006]
2. it allows for similarity measures between time-series beyond the covariance matrix. Notice that traditionally, modern portfolio theory considers the covariance matrix as a measure of similarity between the assets. As such, however, the covariance matrix is imperfect. Consider, for instance, a company listed both in London and New York. Although both listings should be very similar, only parts of the time series of the prices of the two listings will overlap, because of the partial overlap of the times the markets open. Instead of covariance, one can consider, for example, dynamic time warping of [Berndt and Clifford, 1994] as a measure of similarity between two time series, which allows for the fact that for some time periods, the data are captured by only one of the time series, while for others, both time series exhibit the similarity due to the parallel evolution of the stock price.
The overall workflow we demonstrate comprises:
1. pick the ground set of assets. In our case, this is a small number of US stocks.
2. load the time series capturing the evolution of the prices of assets. In our case, this is a simplistic load of adjusted daily closing price data from Wikipedia or Nasdaq or LSE or EuroNext, whereas in real asset management, a much higher frequency may be considered.
3. compute the pair-wise similarity among the time series. In our case, we run a linear-time approximation of the dynamic time warping, still on the classical computer.
4. compute the actual portfolio of $q$ representative assets, based on the similarity measure. This step is run twice, actually. First, we obtain a reference value by a run of an IBM solver (IBM ILOG CPLEX or the Exact Eigensolver) on the classical computer. Second, we run an alternative, hybrid algorithm partly on the quantum computer.
5. visualize the results. In our case, this is again a simplistic plot.
In the following, we first explain the model used in (4) above, before we proceed with the installation of the pre-requisites and the data loading.
## The Model
As discussed in [Cornuejols & Tutuncu, 2006], we describe a mathematical model that clusters assets into groups of similar ones and selects one representative asset from each group to be included in the index fund portfolio. The model is based on the following data, which we will discuss in more detail later:
$$
\rho_{ij} = \textrm{similarity}\, \textrm{between}\, \textrm{stock}\, i \, \textrm{and}\, \textrm{stock}\, j.
$$
For example, $\rho_{ii} = 1$, $\rho_{ij} \leq 1$ for $i \neq j$ and $\rho_{ij}$ is larger for more similar stocks. An example of this is the correlation between the returns of stocks $i$ and $j$. But one could choose other similarity indices $\rho_{ij}$.
The problem that we are interested in solving is:
$$
(M) \quad f = \max_{x_{ij}, y_{j}} \,\, \sum_{i=1}^n \sum_{j=1}^n \rho_{ij} x_{ij}
$$
subject to the clustering constraint:
$$
\sum_{j=1}^n y_j = q,
$$
to consistency constraints:
$$
\sum_{j=1}^n x_{ij} = 1, \,\textrm{ for }\, i = 1,\ldots, n,
\quad x_{ij} \leq y_j,\,\textrm{ for }\, i = 1,\ldots, n; \, j = 1,\ldots, n,
\quad x_{jj} = y_j,\,\textrm{ for }\, j = 1,\ldots, n,
$$
and integral constraints:
$$
\quad x_{ij}, y_j \in\{0,1\}, \,\textrm{ for }\, i = 1,\ldots, n; \, j = 1,\ldots, n.
$$
The variables $y_j$ describe which stocks $j$ are in the index fund ($y_j = 1$ if $j$ is selected in the fund, $0$ otherwise). For each stock $i = 1,\dots,n$, the variable $x_{ij}$ indicates which stock $j$ in the index fund is most similar to $i$ ($x_{ij} = 1$ if $j$ is the most similar stock in the index fund, $0$ otherwise).
The first constraint selects $q$ stocks in the fund. The second constraint imposes that each stock $i$ has exactly one representative stock $j$ in the fund. The third and fourth constraints guarantee that stock $i$ can be represented by stock $j$ only if $j$ is in the fund. The objective of the model maximizes the similarity between the $n$ stocks and their representatives in the fund. Different cost functions can also be considered.
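To make the model concrete, here is a small illustrative sketch (not part of the original notebook) that brute-forces $(M)$ for a toy case with $n=3$ stocks and $q=1$ cluster; the similarity values are made up purely for illustration:
```
import itertools
import numpy as np
# Toy similarity matrix for n = 3 stocks (illustrative values only)
rho = np.array([[1.0, 0.8, 0.2],
                [0.8, 1.0, 0.3],
                [0.2, 0.3, 1.0]])
n, q = 3, 1
best_value, best_y = -np.inf, None
for y in itertools.product([0, 1], repeat=n):   # candidate index funds
    if sum(y) != q:                             # clustering constraint
        continue
    # each stock i is represented by its most similar selected stock j,
    # which automatically satisfies the consistency constraints
    value = sum(max(rho[i, j] for j in range(n) if y[j] == 1) for i in range(n))
    if value > best_value:
        best_value, best_y = value, y
print("optimal value f* =", best_value)
print("selected stocks y* =", best_y)
```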
Let us concatenate the decision variables in one vector
$$
{\bf z} = [x_{11},x_{12},\ldots,x_{1n}, x_{21},\ldots,x_{nn}, y_{1},\ldots,y_{n}],
$$
whose dimension is ${\bf z} \in \{0,1\}^N$, with $N = n (n+1)$ and denote the optimal solution with ${\bf z}^*$, and the optimal cost $f^*$.
## A Hybrid Approach
Here, we demonstrate an approach that combines classical and quantum computing steps, following the quantum approximate optimization approach of Farhi, Goldstone, and Gutmann (2014).
### Construct a binary polynomial optimization
From $(M)$ one can construct a binary polynomial optimization with equality constraints only, by substituting the $x_{ij} \leq y_j$ inequality constraints with the equivalent equality constraints $x_{ij} (1- y_j) = 0$. Then the problem becomes:
$$
(BPO) \quad f = \max_{x_{ij}, y_{j}} \,\, \sum_{i=1}^n \sum_{j=1}^n \rho_{ij} x_{ij}
$$
subject to the clustering constraint, the integral constraints, and the following modified consistency constraints:
$$\sum_{j=1}^n x_{ij} = 1, \,\textrm{ for }\, i = 1,\ldots, n,$$
$$\quad x_{ij} (1- y_j) = 0,\,\textrm{ for }\, i = 1,\ldots, n; \, j = 1,\ldots, n,$$
$$\quad x_{jj} = y_j,\,\textrm{ for }\, j = 1,\ldots, n.$$
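The substitution is exact for binary variables: enumerating the four cases of $(x_{ij}, y_j) \in \{0,1\}^2$ shows that
$$
x_{ij}(1-y_j) = 0 \quad\Longleftrightarrow\quad x_{ij} \leq y_j,
$$
since the only assignment violating either condition is $x_{ij}=1,\ y_j=0$.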
### Construct the Ising Hamiltonian
We can now construct the Ising Hamiltonian (QUBO) by penalty methods (introducing a penalty coefficient $A$ for each equality constraint) as
$$
(IH) \quad H = \sum_{i=1}^n \sum_{j=1}^n \rho_{ij} x_{ij} + A\Big( \sum_{j=1}^n y_j - q\Big)^2 + \sum_{i=1}^n A\Big( \sum_{j=1}^n x_{ij} - 1\Big)^2 + \sum_{j=1}^n A (x_{jj}-y_j)^2 +\sum_{i=1}^n \sum_{j=1}^n A \left(x_{ij} (1- y_j)\right).
$$
### From Hamiltonian to Quadratic Programming (QP) formulation
In the vector ${\bf z}$, the Ising Hamiltonian elements can be rewritten as follows,
First term:
$$
\sum_{i=1}^n \sum_{j=1}^n \rho_{ij} x_{ij} = [\rho_{11},\rho_{12},\ldots,\rho_{1n}, \rho_{21},\ldots,\rho_{nn}|{\bf 0}_n ]{\bf z} =: {\bf c}_0^T {\bf z}
$$
Second term:
$$
A\Big( \sum_{j=1}^n y_j - q\Big)^2 = A \Big(\sum_{j=1}^n y_j\Big)^2 - 2 A \sum_{j=1}^n y_j + A q^2 = A {\bf z}^T \left[\begin{array}{c}{\bf 0}_{n^2} \\ \hline {\bf 1}_n \end{array}\right]\left[\begin{array}{cc}{\bf 0}_{n^2} | {\bf 1}_n \end{array}\right]{\bf z} - 2 A q [{\bf 0}_{n^2}|{\bf 1}_n]{\bf z} + A q^2 =: {\bf z}^T {\bf Q}_0 {\bf z} + {\bf c}_1^T {\bf z} + r_0
$$
Third term:
$$
\sum_{i=1}^n A\Big( \sum_{j=1}^n x_{ij} - 1\Big)^2 = A\sum_{i=1}^n \Big(\sum_{j=1}^n x_{ij}\Big)^2 - 2 A \sum_{i=1}^n\sum_{j=1}^n x_{ij} + n A = \qquad\qquad\qquad\qquad\qquad\qquad\qquad $$
which is equivalent to:
$$
\qquad\qquad\qquad\qquad\qquad\qquad\qquad = A {\bf z}^T \left(\sum_{i=1}^n \left[\begin{array}{c}{\bf 0}_{n(i-1)} \\ {\bf 1}_n \\ {\bf 0}_{n(n-i)} \\ \hline {\bf 0}_{n} \end{array}\right]\left[\begin{array}{cccc}{\bf 0}_{n(i-1)} & {\bf 1}_n & {\bf 0}_{n(n-i)} & | {\bf 0}_{n} \end{array}\right]\right){\bf z} - 2 A [{\bf 1}_{n^2}|{\bf 0}_n]{\bf z} + n A =: {\bf z}^T {\bf Q}_1 {\bf z} + {\bf c}_2^T {\bf z} + r_1
$$
Fourth term:
$$
A \sum_{j=1}^n (x_{jj}-y_j)^2 = A {\bf z}^T \left(\sum_{j=0}^{n-1} \left[\begin{array}{c}{\bf 0}_{nj + j} \\ 1 \\ {\bf 0}_{n^2-(nj+j+1)} \\ \hline {\bf 0}_{j} \\ -1 \\ {\bf 0}_{n-j-1} \end{array}\right]\left[\begin{array}{cccccc}{\bf 0}_{nj + j} & 1 & {\bf 0}_{n^2-(nj+j+1)} & | {\bf 0}_{j} & -1 & {\bf 0}_{n-j-1} \end{array}\right]\right){\bf z} = A {\bf z}^T {\bf Q}_2 {\bf z}
$$
Fifth term:
$$
\sum_{i=1}^n \sum_{j=1}^n A \left(x_{ij} (1- y_j)\right) = A [{\bf 1}_{n^2}|{\bf 0}_n]{\bf z} + A {\bf z}^T \left( \sum_{i=1}^n \sum_{j=1}^n \left[\begin{array}{ccc|c} & & & \\ & {\bf 0}_{n^2\times n^2} & & -1/2_{(ij,j)} \\ & & & \\ \hline & -1/2_{(j, ij)} & & {\bf 0}_{n} \end{array}\right] \right) {\bf z} =: {\bf z}^T {\bf Q}_3 {\bf z} + {\bf c}_3^T {\bf z}
$$
Therefore, the formulation becomes,
$$
(IH-QP)\quad \max_{{\bf z}\in\{0,1\}^{n(n+1)}} \, {\bf z}^T ({\bf Q}_0+{\bf Q}_1+ {\bf Q}_2 + {\bf Q}_3 ){\bf z} + ({\bf c}_0+{\bf c}_1+{\bf c}_2+{\bf c}_3)^T {\bf z} +r_0+r_1+r_2$$
which can be passed to the variational quantum eigensolver.
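As a sanity aid (not in the original notebook), the penalized objective $(IH)$ can be evaluated directly for any binary vector ${\bf z}$, using the ordering defined above (the $x_{ij}$ stored row-major in the first $n^2$ entries, the $y_j$ in the last $n$); the penalty weight `A` below is an illustrative choice:
```
import numpy as np
def ising_objective(z, rho, n, q, A=10.0):
    """Evaluate the penalized objective (IH) for a binary vector z of length n*(n+1)."""
    z = np.asarray(z, dtype=float)
    x = z[:n * n].reshape(n, n)   # x_ij, row-major
    y = z[n * n:]                 # y_j
    value = np.sum(rho * x)                             # similarity term
    value += A * (y.sum() - q) ** 2                     # clustering constraint
    value += A * np.sum((x.sum(axis=1) - 1) ** 2)       # one representative per stock
    value += A * np.sum((np.diag(x) - y) ** 2)          # x_jj = y_j
    value += A * np.sum(x * (1.0 - y)[np.newaxis, :])   # x_ij (1 - y_j) = 0
    return value
```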
## References
[1] G. Cornuejols, M. L. Fisher, and G. L. Nemhauser, *Location of bank accounts to optimize float: an analytical study of exact and approximate algorithms*, Management Science, vol. 23(8), 1997
[2] E. Farhi, J. Goldstone, S. Gutmann e-print arXiv 1411.4028, 2014
[3] G. Cornuejols and R. Tutuncu, *Optimization methods in finance*, 2006
[4] DJ. Berndt and J. Clifford, *Using dynamic time warping to find patterns in time series*. In KDD workshop 1994 (Vol. 10, No. 16, pp. 359-370).
[5] https://github.com/Qiskit/qiskit-tutorial/blob/master/qiskit/aqua/optimization/maxcut_and_tsp.ipynb
## The Implementation
First, we import the requisite modules.
```
# Import requisite modules
import math
import operator
import logging
import traceback
import datetime
import sys
import warnings
warnings.filterwarnings("error")
warnings.filterwarnings("ignore", category=DeprecationWarning)
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Import Qiskit packages
warnings.filterwarnings('ignore')
import qiskit
from qiskit import Aer
from qiskit.aqua import QuantumInstance
from qiskit.aqua import Operator, run_algorithm
from qiskit.aqua.input import EnergyInput
from qiskit.aqua.algorithms import VQE, QAOA, ExactEigensolver
from qiskit.aqua.components.optimizers import COBYLA
from qiskit.aqua.components.variational_forms import RY
# setup aqua logging
from qiskit.aqua._logging import set_logging_config, build_logging_config
# set_logging_config(build_logging_config(logging.DEBUG)) # choose INFO, DEBUG to see the log
# The data providers of stock-market data
from qiskit.aqua.translators.data_providers import *
from qiskit.aqua.translators.ising import portfolio_diversification
```
Next, we download price data for two stocks and compute their pair-wise similarity matrix (<a target="_blank" href="https://en.wikipedia.org/wiki/Dynamic_time_warping">dynamic time warping</a> distance normalized to (0,1] by taking the reciprocal). If this fails, e.g., due to you being offline or exceeding the daily limit for accesses to the stock-market data, we consider a constant matrix instead.
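As a rough illustration of the similarity computation (the data provider below uses its own, faster linear-time approximation), a textbook quadratic-time dynamic time warping distance can be turned into a similarity in $(0,1]$; the $1/(1+d)$ transform here is an assumption for illustration only:
```
import numpy as np
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D series (quadratic-time textbook version)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]
def similarity(a, b):
    # Assumed normalization: distance 0 maps to 1, large distances approach 0
    return 1.0 / (1.0 + dtw_distance(a, b))
series_1 = np.array([1.0, 1.1, 1.3, 1.2])
series_2 = np.array([1.0, 1.2, 1.25, 1.2])
print(similarity(series_1, series_2))
```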
```
# Generate a pairwise time-series similarity matrix
stocks = ["TICKER1", "TICKER2"]
n = len(stocks)
rho = np.ones((n,n))
rho[0,1] = 0.8
rho[1,0] = 0.8
data = RandomDataProvider(tickers = stocks,
start = datetime.datetime(2016,1,1),
end = datetime.datetime(2016,1,30))
data.run()
rho = data.get_similarity_matrix()
# Actually, we consider the additive inverse to invert the direction of optimisation.
rho = -1 * rho
```
Now we decide on the number of clusters. This has to be smaller than the number of stocks we have loaded.
```
q = 1 # q must be less than or equal to n
```
## Classical solution using IBM ILOG CPLEX
For a classical solution, we use IBM CPLEX. CPLEX is able to find the exact solution of this problem. We first define a ClassicalOptimizer class that encodes the problem in a way that CPLEX can solve, and then instantiate the class and solve it.
```
class ClassicalOptimizer:
def __init__(self, rho, n, q):
self.rho = rho
self.n = n # number of inner variables
self.q = q # number of required selection
def compute_allowed_combinations(self):
f = math.factorial
return int(f(self.n) / f(self.q) / f(self.n - self.q))
def cplex_solution(self):
# refactoring
rho = self.rho
n = self.n
q = self.q
my_obj = list(rho.reshape(1, n ** 2)[0]) + [0. for x in range(0, n)]
my_ub = [1 for x in range(0, n ** 2 + n)]
my_lb = [0 for x in range(0, n ** 2 + n)]
my_ctype = "".join(['I' for x in range(0, n ** 2 + n)])
my_rhs = [q] + [1 for x in range (0, n)] +[0 for x in range (0, n)] + [0.1 for x in range(0, n ** 2)]
my_sense = "".join(['E' for x in range(0, 1+n)]) + "".join(['E' for x in range(0, n)]) + "".join(
['L' for x in range(0, n ** 2)])
try:
my_prob = cplex.Cplex()
self.populatebyrow(my_prob, my_obj, my_ub, my_lb, my_ctype, my_sense, my_rhs)
my_prob.solve()
except CplexError as exc:
print(exc)
return
x = my_prob.solution.get_values()
x = np.array(x)
cost = my_prob.solution.get_objective_value()
return x, cost
def populatebyrow(self, prob, my_obj, my_ub, my_lb, my_ctype, my_sense, my_rhs):
n = self.n
prob.objective.set_sense(prob.objective.sense.minimize)
prob.variables.add(obj=my_obj, lb=my_lb, ub=my_ub, types=my_ctype)
prob.set_log_stream(None)
prob.set_error_stream(None)
prob.set_warning_stream(None)
prob.set_results_stream(None)
rows = []
col = [x for x in range(n**2, n**2+n)]
coef = [1 for x in range(0, n)]
rows.append([col, coef])
for ii in range(0, n):
col = [x for x in range(0+n*ii, n+n*ii)]
coef = [1 for x in range(0, n)]
rows.append([col, coef])
for ii in range(0, n):
col = [ii * n + ii, n ** 2 + ii]
coef = [1, -1]
rows.append([col, coef])
for ii in range(0, n):
for jj in range(0, n):
col = [ii*n + jj, n ** 2 + jj]
coef = [1, -1]
rows.append([col, coef])
prob.linear_constraints.add(lin_expr=rows, senses=my_sense, rhs=my_rhs)
# Instantiate the classical optimizer class
classical_optimizer = ClassicalOptimizer(rho, n, q)
# Compute the number of feasible solutions:
print('Number of feasible combinations= ' + str(classical_optimizer.compute_allowed_combinations()))
# Compute the total number of possible combinations (feasible + unfeasible)
print('Total number of combinations= ' + str(2 ** (n*(n+1))))
# Visualize the solution
def visualize_solution(xc, yc, x, C, n, K, title_str):
plt.figure()
plt.scatter(xc, yc, s=200)
for i in range(len(xc)):
plt.annotate(i, (xc[i] + 0.015, yc[i]), size=16, color='r')
plt.grid()
for ii in range(n ** 2, n **2 + n):
if x[ii] > 0:
plt.plot(xc[ii-n**2], yc[ii-n**2], 'r*', ms=20)
for ii in range(0, n ** 2):
if x[ii] > 0:
iy = ii // n
ix = ii % n
plt.plot([xc[ix], xc[iy]], [yc[ix], yc[iy]], 'C2')
plt.title(title_str +' cost = ' + str(int(C * 100) / 100.))
plt.show()
```
The solution shows the selected stocks via the stars and, in green, the links (via similarities) to the other stocks that are represented in the fund by the linked stock.
## Quantum Computing with IBM Q
For the quantum solution, we use Qiskit. We first define a class QuantumOptimizer that encodes the quantum approach to solve the problem and then we instantiate it and solve it. We define the following methods inside the class:
- `exact_solution` : to make sure that the Ising Hamiltonian is correctly encoded in the $Z$ basis, we can compute its eigendecomposition classically, i.e., considering a symmetric matrix of dimension $2^N \times 2^N$. For the problem at hand, $n=3$ (that is, $N = 12$) seems to be the limit for many laptops;
- `vqe_solution` : solves the problem $(M)$ via the variational quantum eigensolver (VQE);
- `qaoa_solution` : solves the problem $(M)$ via a Quantum Approximate Optimization Algorithm (QAOA).
```
class QuantumOptimizer:
def __init__(self, rho, n, q):
self.rho = rho
self.n = n
self.q = q
# Obtains the least eigenvalue of the Hamiltonian classically
def exact_solution(self):
qubitOp = portfolio_diversification.get_portfoliodiversification_qubitops(self.rho, self.n, self.q)
algo_input = EnergyInput(qubitOp)
algorithm_cfg = {
'name': 'ExactEigensolver',
}
params = {
'problem': {'name': 'ising'},
'algorithm': algorithm_cfg
}
result = run_algorithm(params, algo_input)
return self.decode_result(result)
def vqe_solution(self):
qubitOp = portfolio_diversification.get_portfoliodiversification_qubitops(self.rho, self.n, self.q)
backend = Aer.get_backend('statevector_simulator')
seed = 50
cobyla = COBYLA()
cobyla.set_options(maxiter=250)
ry = RY(qubitOp.num_qubits, depth=5, entanglement='full')
vqe = VQE(qubitOp, ry, cobyla, 'matrix')
vqe.random_seed = seed
quantum_instance = QuantumInstance(backend=backend, seed_simulator=seed, seed_transpiler=seed)
result = vqe.run(quantum_instance)
return self.decode_result(result)
def qaoa_solution(self):
qubitOp = portfolio_diversification.get_portfoliodiversification_qubitops(self.rho, self.n, self.q)
backend = Aer.get_backend('statevector_simulator')
seed = 50
cobyla = COBYLA()
cobyla.set_options(maxiter=250)
qaoa = QAOA(qubitOp, cobyla, 3, 'matrix')
qaoa.random_seed = seed
quantum_instance = QuantumInstance(backend=backend, seed_simulator=seed, seed_transpiler=seed)
result = qaoa.run(quantum_instance)
return self.decode_result(result)
def decode_result(self, result, offset = 0):
quantum_solution = portfolio_diversification.get_portfoliodiversification_solution(self.rho, self.n, self.q, result)
ground_level = portfolio_diversification.get_portfoliodiversification_value(self.rho, self.n, self.q, quantum_solution)
return quantum_solution, ground_level
```
### Step 1
Instantiate the quantum optimizer class with parameters:
- the similarity matrix `rho`;
- the number of assets and clusters `n` and `q`;
```
# Instantiate the quantum optimizer class with parameters:
quantum_optimizer = QuantumOptimizer(rho, n, q)
```
### Step 2
Encode the problem as a binary formulation (IH-QP).
Sanity check: make sure that the binary formulation in the quantum optimizer is correct (i.e., yields the same cost given the same solution).
```
# Check if the binary representation is correct. This requires CPLEX
try:
import cplex
warnings.filterwarnings('ignore')
quantum_solution, quantum_cost = quantum_optimizer.exact_solution()
classical_solution, classical_cost = classical_optimizer.cplex_solution()
print(quantum_cost, classical_cost)
if np.abs(quantum_cost - classical_cost) < 0.01:
print('Binary formulation is correct')
else: print('Error in the formulation of the Hamiltonian')
except: None
```
### Step 3
Encode the problem as an Ising Hamiltonian in the Z basis.
Sanity check: make sure that the formulation is correct (i.e., yields the same cost given the same solution)
```
ground_state, ground_level = quantum_optimizer.exact_solution()
print(ground_state)
try:
if np.abs(ground_level - classical_cost)<0.01:
print('Ising Hamiltonian in Z basis is correct')
else: print('Error in the Ising Hamiltonian formulation')
except: None
```
### Step 4
Solve the problem via VQE. Notice that depending on the number of qubits, this can take a while: for 6 qubits it takes 15 minutes on a 2015 Macbook Pro, for 12 qubits it takes more than 12 hours. For longer runs, logging may be useful to observe the workings; otherwise, you just have to wait until the solution is printed.
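If you do want to follow the progress of a long run, one option is to turn on the Aqua logging that is already imported (and commented out) in the setup cell above, for example:
```
# Optional: surface progress messages during long VQE/QAOA runs
import logging
from qiskit.aqua._logging import set_logging_config, build_logging_config
set_logging_config(build_logging_config(logging.INFO))  # or logging.DEBUG for more detail
```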
```
warnings.filterwarnings('ignore')
vqe_state, vqe_level = quantum_optimizer.vqe_solution()
print(vqe_state)
try:
if np.linalg.norm(ground_state - vqe_state)<0.01:
print('VQE produces the same solution as the exact eigensolver.')
else: print('VQE does not produce the same solution as the exact eigensolver, but that is to be expected.')
except: None
```
### Step 5
Visualize the solution
```
xc, yc = data.get_coordinates()
visualize_solution(xc, yc, ground_state, ground_level, n, q, 'Classical')
visualize_solution(xc, yc, vqe_state, vqe_level, n, q, 'VQE')
```
The solution shows the selected stocks via the stars and, in green, the links (via similarities) to the other stocks that are represented in the fund by the linked stock. Keep in mind that VQE is a heuristic working on the QP formulation of the Ising Hamiltonian, though. For suitable choices of A, local optima of the QP formulation will be feasible solutions to the ILP. While for some small instances, as above, we can find optimal solutions of the QP formulation which coincide with optima of the ILP, finding optimal solutions of the ILP is harder than finding local optima of the QP formulation, in general. Even within the VQE, one may provide stronger guarantees, for specific variational forms (trial wave functions).
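If you want to check whether a decoded vector actually satisfies the ILP constraints, a small added sketch along these lines can help (it assumes the decoded solution follows the $[x_{11},\ldots,x_{nn},y_1,\ldots,y_n]$ ordering used earlier):
```
import numpy as np
def is_feasible(z, n, q):
    """Check the clustering and consistency constraints of (M) for a decoded binary vector z."""
    z = np.asarray(z).round().astype(int)
    x, y = z[:n * n].reshape(n, n), z[n * n:]
    return bool(y.sum() == q
                and np.all(x.sum(axis=1) == 1)
                and np.all(np.diag(x) == y)
                and np.all(x <= y[np.newaxis, :]))
print(is_feasible(vqe_state, n, q))
```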
```
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
# Network, Graph and Graph Database
## Prepare the dataset
```
# https://zenodo.org/record/4670228
TEST_DATA_FOLDER = "test_data/flights"
!mkdir -p $TEST_DATA_FOLDER
# Uncomment the dataset to download
#!curl https://zenodo.org/record/4670228/files/flightlist_20191201_20191231.csv.gz -o {TEST_DATA_FOLDER}/flightlist_20191201_20191231.csv.gz
#!curl https://zenodo.org/record/4670228/files/flightlist_20210301_20210331.csv.gz -o {TEST_DATA_FOLDER}/flightlist_20210301_20210331.csv.gz
!curl https://zenodo.org/record/4670228/files/flightlist_20201201_20201231.csv.gz -o {TEST_DATA_FOLDER}/flightlist_20201201_20201231.csv.gz
#!gunzip {TEST_DATA_FOLDER}/flightlist_20191201_20191231.csv.gz
#!gunzip {TEST_DATA_FOLDER}/flightlist_20210301_20210331.csv.gz
!gunzip {TEST_DATA_FOLDER}/flightlist_20201201_20201231.csv.gz
!ls -lh {TEST_DATA_FOLDER}
```
## Data Cleansing
```
# Uncomment to install the dependency
# Refer below for more details
# - https://scitools.org.uk/cartopy/docs/latest/installing.html#installing
# - https://networkx.org/documentation/latest/install.html
# References for maps
# - https://rabernat.github.io/research_computing_2018/maps-with-cartopy.html
# - https://semba-blog.netlify.app/07/04/2020/mapping-with-cartopy-in-python/
# - https://ipython-books.github.io/142-drawing-flight-routes-with-networkx/
# !conda install -c conda-forge cartopy networkx -y
import math
import json
import numpy as np
import pandas as pd
import networkx as nx
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
from IPython.display import Image
%matplotlib inline
FLIGHT_DEC_2019 = f"{TEST_DATA_FOLDER}/flightlist_20191201_20191231.csv"
FLIGHT_DEC_2020 = f"{TEST_DATA_FOLDER}/flightlist_20201201_20201231.csv"
FLIGHT_MAR_2021 = f"{TEST_DATA_FOLDER}/flightlist_20210301_20210331.csv"
#df = pd.read_csv(FLIGHT_DEC_2019, low_memory=False)
df = pd.read_csv(FLIGHT_MAR_2021, low_memory=False)
#df = pd.read_csv(FLIGHT_DEC_2020, low_memory=False)
len(df)
df.head(10)
df.columns
df.dropna(subset = ["origin", "destination"], inplace=True)
len(df)
df[['altitude_1', 'altitude_2']] = df[['altitude_1','altitude_2']].fillna(value=0)
df = df[["origin", "destination", "day", "latitude_1", "longitude_1", "altitude_1", "latitude_2", "longitude_2", "altitude_2"]]
df.head(10)
```
## Network Visualization
```
edges = df[['origin', 'destination']].values
len(edges), edges
g = nx.from_edgelist(edges)
len(g.nodes()), len(g.edges())
#fig, ax = plt.subplots(1, 1, figsize=(6, 6))
#nx.draw_networkx(g, ax=ax, node_size=5,
# font_size=6, alpha=.5,
# width=.5)
#ax.set_axis_off()
sg = next(g.subgraph(c) for c in nx.connected_components(g))
#fig, ax = plt.subplots(1, 1, figsize=(6, 6))
#nx.draw_networkx(sg, ax=ax, with_labels=False,
# node_size=5, width=.5)
#ax.set_axis_off()
# Airport with latitude and longitude
airports = {}
altitudes = {}
for index, row in df.iterrows():
if not row['origin'] in airports:
airports[row['origin']] = (row['longitude_1'], row['latitude_1'])
altitudes[row['origin']] = row['altitude_1']
if not row['destination'] in airports:
airports[row['destination']] = (row['longitude_2'], row['latitude_2'])
altitudes[row['destination']] = row['altitude_2']
len(airports), len(altitudes)
deg = nx.degree(sg)
sizes = [5 * deg[code] for code in sg.nodes]
altitudes = [altitudes[code] for code in sg.nodes]
len(altitudes)
labels = {code: code if deg[code] >= 20 else '' for code in sg.nodes}
# Map projection
crs = ccrs.PlateCarree(central_longitude=0)
fig, ax = plt.subplots(
1, 1, figsize=(20, 14),
subplot_kw=dict(projection=crs))
ax.coastlines()
ax.set_global()
nx.draw_networkx(sg, ax=ax,
font_size=16,
alpha=.5,
width=.075,
node_size=sizes,
labels=labels,
pos=airports,
node_color=altitudes,
cmap=plt.cm.autumn)
sorted(g.degree, key=lambda x: x[1], reverse=True)
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_blobs, make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
train = pd.read_csv('train.csv')
train.head()
sns.countplot(train['IN_TREINEIRO']);
```
The target distribution is poorly balanced.
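A quick way to quantify the imbalance seen in the count plot above (a small added sketch using the already-loaded train DataFrame):
```
# Class counts and proportions of the target before any resampling
print(train['IN_TREINEIRO'].value_counts())
print(train['IN_TREINEIRO'].value_counts(normalize=True))
```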
```
test = pd.read_csv('test.csv')
test.columns
# removing training features that are not present in the test set
train = pd.concat([train['IN_TREINEIRO'], train[test.columns]],axis=1)
train.shape
train.corr()['IN_TREINEIRO'].sort_values(ascending=False)
# dropping low-correlation features
train.drop(['Q027','TP_STATUS_REDACAO','TP_DEPENDENCIA_ADM_ESC','TP_ENSINO'],axis=1,inplace=True)
```
## Missing data
```
train.isna().sum()
# set the score to zero for students who were absent from the corresponding exams
train.loc[train.TP_PRESENCA_CN == 0, 'NU_NOTA_CN'] = 0
train.loc[train.TP_PRESENCA_CH == 0, 'NU_NOTA_CH'] = 0
train.loc[train.TP_PRESENCA_LC == 0, 'NU_NOTA_LC'] = 0
# train['NU_NOTA_REDACAO'].loc[train.TP_STATUS_REDACAO ==4] = 0
# train['NU_NOTA_COMP1'].loc[train.TP_STATUS_REDACAO==4] = 0
# train['NU_NOTA_COMP2'].loc[train.TP_STATUS_REDACAO==4] = 0
# train['NU_NOTA_COMP3'].loc[train.TP_STATUS_REDACAO==4] = 0
# train['NU_NOTA_COMP4'].loc[train.TP_STATUS_REDACAO==4] = 0
# train['NU_NOTA_COMP5'].loc[train.TP_STATUS_REDACAO==4] = 0
train.isna().sum()
train = train.fillna(0)
train.corr()['IN_TREINEIRO'].sort_values(ascending=False)
# dropping features with low correlation
train.drop(['SG_UF_RESIDENCIA','IN_CEGUEIRA','TP_LINGUA','CO_UF_RESIDENCIA','TP_NACIONALIDADE','IN_BAIXA_VISAO','IN_GESTANTE','IN_SURDEZ','IN_IDOSO','IN_DISCALCULIA','IN_DISLEXIA','IN_SABATISTA','TP_COR_RACA'],axis=1,inplace=True)
```
## Dummies
```
exploracao = pd.DataFrame({'nomes' : train.columns, 'tipos' : train.dtypes})
exploracao
lista_colunas = list(exploracao[exploracao['tipos'] == 'object']['nomes'])
lista_colunas = lista_colunas[1:]
# list of categorical variables for get_dummies
lista_colunas
# saving the target
IN_TREINEIRO = train['IN_TREINEIRO'].copy()
b = pd.get_dummies(train, columns=lista_colunas, drop_first=True, prefix=lista_colunas)
# drop the old columns and concatenate the new transformed features
train = pd.concat([train.drop(lista_colunas, axis=1), b], axis=1)
id_save = train['NU_INSCRICAO'].copy()
train.drop('NU_INSCRICAO',axis=1,inplace=True)
train.drop('IN_TREINEIRO',axis=1,inplace=True)
train.shape
```
## Rebalancing the data with imbalanced-learn
```
X_data = train
y_data = IN_TREINEIRO
X_data.shape
imbalanced = pd.DataFrame(np.c_[X_data, y_data], columns=["X" + str(i) for i in range(1, 77)] + ["target"])
imbalanced.target = imbalanced.target.astype(bool)
imbalanced.iloc[:5, :-1]
imbalanced.target.value_counts()
pca = PCA(n_components=2)
pca.fit(imbalanced.drop(["target"], axis=1))
imbalanced_pca = pca.transform(imbalanced.drop(["target"], axis=1))
sns.scatterplot(x=imbalanced_pca[:, 0], y=imbalanced_pca[:, 1], hue=imbalanced.target);
smote = SMOTE()
X_smote, y_smote = smote.fit_resample(imbalanced.iloc[:, :-1], imbalanced.target)
imbalanced_pca_smote = pca.transform(X_smote)
sns.scatterplot(x=imbalanced_pca_smote[:, 0], y=imbalanced_pca_smote[:, 1], hue=y_smote);
# balanced now
sum(y_smote == True)/sum(y_smote == False)
```
## Applying a logistic regression model
```
X = X_data
y = y_data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
logmodel = LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
penalty='l2', random_state=None, solver='liblinear', tol=0.0001,
verbose=0, warm_start=False)
logmodel.fit(X_train,y_train)
predictions = logmodel.predict(X_test)
from sklearn.metrics import classification_report
print(classification_report(y_test,predictions))
```
## Predict
```
NU_INSCRICAO = test['NU_INSCRICAO'].copy()
# listing the features used for training
exploracao['nomes'].tolist()
feat_trained = ['NU_INSCRICAO',
'NU_IDADE',
'TP_SEXO',
'TP_ST_CONCLUSAO',
'TP_ANO_CONCLUIU',
'TP_ESCOLA',
'TP_PRESENCA_CN',
'TP_PRESENCA_CH',
'TP_PRESENCA_LC',
'TP_PRESENCA_MT',
'TP_STATUS_REDACAO' ,
'NU_NOTA_CN',
'NU_NOTA_CH',
'NU_NOTA_LC',
'NU_NOTA_COMP1',
'NU_NOTA_COMP2',
'NU_NOTA_COMP3',
'NU_NOTA_COMP4',
'NU_NOTA_COMP5',
'NU_NOTA_REDACAO',
'Q001',
'Q002',
'Q006',
'Q024',
'Q025',
'Q026',
'Q047']
test = test[feat_trained]
test.isna().sum()
# set the scores to zero for students who missed the exams or whose essay was not graded
test.loc[test.TP_PRESENCA_CN == 0, 'NU_NOTA_CN'] = 0
test.loc[test.TP_PRESENCA_CH == 0, 'NU_NOTA_CH'] = 0
test.loc[test.TP_PRESENCA_LC == 0, 'NU_NOTA_LC'] = 0
test.loc[test.TP_STATUS_REDACAO == 4, 'NU_NOTA_REDACAO'] = 0
test.loc[test.TP_STATUS_REDACAO == 4, 'NU_NOTA_COMP1'] = 0
test.loc[test.TP_STATUS_REDACAO == 4, 'NU_NOTA_COMP2'] = 0
test.loc[test.TP_STATUS_REDACAO == 4, 'NU_NOTA_COMP3'] = 0
test.loc[test.TP_STATUS_REDACAO == 4, 'NU_NOTA_COMP4'] = 0
test.loc[test.TP_STATUS_REDACAO == 4, 'NU_NOTA_COMP5'] = 0
# filling the remaining missing values with zero
test = test.fillna(0)
# this feature was not in the training set; it was only loaded to impute the missing NU_NOTA_REDACAO values
test.drop("TP_STATUS_REDACAO",axis=1,inplace=True)
# get_dummies with the same column list used for training
a = pd.get_dummies(test, columns=lista_colunas, drop_first=True, prefix=lista_colunas)
# concatenate the new feature columns and drop the old ones
test = pd.concat([test.drop(lista_colunas, axis=1), a], axis=1)
test.drop('NU_INSCRICAO',axis=1,inplace=True)
# check that the test set has the same number of features as the training set
test.shape
```
# Day 1 - Data Types
*Author: Eda AYDIN*
## Objective
Today, we're discussing data types. Check out the Tutorial tab for learning materials and an instructional video!
## Task
Complete the code in the editor below. The variables i, d, and s are already declared and initialized for you. You must:
Declare 3 variables: one of type int, one of type double, and one of type String.
Read 3 lines of input from stdin (according to the sequence given in the Input Format section below) and initialize your 3 variables.
Use the + operator to perform the following operations:
Print the sum of i plus your int variable on a new line.
Print the sum of d plus your double variable to a scale of one decimal place on a new line.
Concatenate s with the string you read as input and print the result on a new line.
Note: If you are using a language that doesn't support using + for string concatenation (e.g.: C), you can just print one variable immediately following the other on the same line. The string provided in your editor must be printed first, immediately followed by the string you read as input.
## Input Format
The first line contains an integer that you must sum with i.
The second line contains a double that you must sum with d.
The third line contains a string that you must concatenate with s.
## Output Format
Print the sum of both integers on the first line, the sum of both doubles (scaled to 1 decimal place) on the second line, and then the two concatenated strings on the third line.
## Sample Input
12
4.0
is the best place to learn and practice coding!
## Sample Output
16
8.0
HackerRank is the best place to learn and practice coding!
## Explanation
When we sum the integers 4 and 12, we get the integer 16.
When we sum the floating-point numbers 4.0 and 4.0, we get 8.0.
When we concatenate HackerRank with is the best place to learn and practice coding!, we get HackerRank is the best place to learn and practice coding!.
You will not pass this challenge if you attempt to assign the Sample Case values to your variables instead of following the instructions above and reading input from stdin.
```
i = 4
d = 4.0
s = 'HackerRank '
# Declare second integer, double, and String variables.
# Read and save an integer, double, and String to your variables.
# Print the sum of both integer variables on a new line.
# Print the sum of the double variables on a new line.
# Concatenate and print the String variables on a new line
# The 's' variable above should be printed first.
def data_type(i,d,s):
i2 = int(input())
d2 = float(input())
s2 = str(input())
print(i + i2)
print(d + d2)
print(s + s2)
data_type(i,d,s)
```
# Sentiment Analysis
## Introduction
When it comes to text data, there are a few popular techniques that we'll be going through in the next few notebooks, starting with sentiment analysis. A few key points to remember with sentiment analysis.
1. **TextBlob Module:** Linguistic researchers have labeled the sentiment of words based on their domain expertise. Sentiment of words can vary based on where it is in a sentence. The TextBlob module allows us to take advantage of these labels.
2. **Sentiment Labels:** Each word in a corpus is labeled in terms of polarity and subjectivity (there are more labels as well, but we're going to ignore them for now). A corpus' sentiment is the average of these.
* **Polarity**: How positive or negative a word is. -1 is very negative. +1 is very positive.
* **Subjectivity**: How subjective, or opinionated a word is. 0 is fact. +1 is very much an opinion.
For more info on how TextBlob coded up its [sentiment function](https://planspace.org/20150607-textblob_sentiment/).
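As a minimal illustration (with made-up sentences, not lines from the transcripts), TextBlob exposes both labels through the `.sentiment` attribute:
```
from textblob import TextBlob
# An opinionated sentence: polarity and subjectivity are expected to be high
print(TextBlob("I love this hilarious, brilliant show").sentiment)
# A plainly factual sentence: polarity and subjectivity are expected to be near zero
print(TextBlob("The special was released in 2017").sentiment)
```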
Let's take a look at the sentiment of the various transcripts, both overall and throughout the comedy routine.
## Sentiment of Routine
```
# We'll start by reading in the corpus, which preserves word order
import pandas as pd
data = pd.read_pickle('corpus.pkl')
data
# Create quick lambda functions to find the polarity and subjectivity of each routine
from textblob import TextBlob
pol = lambda x: TextBlob(x).sentiment.polarity
sub = lambda x: TextBlob(x).sentiment.subjectivity
data['polarity'] = data['transcript'].apply(pol)
data['subjectivity'] = data['transcript'].apply(sub)
data
# Let's plot the results
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [10, 8]
for index, special in enumerate(data.index):
x = data.polarity.loc[special]
y = data.subjectivity.loc[special]
plt.scatter(x, y, color='red')
plt.text(x+.001, y+.001, data['titles'][index], fontsize=10)
plt.xlim(-.01, .12)
plt.title('Sentiment Analysis', fontsize=20)
plt.xlabel('<-- Negative -------- Positive -->', fontsize=15)
plt.ylabel('<-- Facts -------- Opinions -->', fontsize=15)
plt.show()
```
## Sentiment of Routine Over Time
Instead of looking at the overall sentiment, let's see if there's anything interesting about the sentiment over time throughout each routine.
```
# Split each routine into 10 parts
import numpy as np
import math
def split_text(text, n=10):
'''Takes in a string of text and splits into n equal parts, with a default of 10 equal parts.'''
# Calculate length of text, the size of each chunk of text and the starting points of each chunk of text
length = len(text)
size = math.floor(length / n)
start = np.arange(0, length, size)
# Pull out equally sized pieces of text and put it into a list
split_list = []
for piece in range(n):
split_list.append(text[start[piece]:start[piece]+size])
return split_list
# Let's take a look at our data again
data
# Let's create a list to hold all of the pieces of text
list_pieces = []
for t in data.transcript:
split = split_text(t)
list_pieces.append(split)
list_pieces
# The list has 6 elements, one for each transcript
len(list_pieces)
# Each transcript has been split into 10 pieces of text
len(list_pieces[0])
# Calculate the polarity for each piece of text
polarity_transcript = []
for lp in list_pieces:
polarity_piece = []
for p in lp:
polarity_piece.append(TextBlob(p).sentiment.polarity)
polarity_transcript.append(polarity_piece)
polarity_transcript
# Show the plot for one special
plt.plot(polarity_transcript[0])
plt.title(data['titles'].index[0])
plt.show()
# Show the plot for all comedians
plt.rcParams['figure.figsize'] = [16, 12]
for index, special in enumerate(data.index):
plt.subplot(3, 4, index+1)
plt.plot(polarity_transcript[index])
plt.plot(np.arange(0,10), np.zeros(10))
plt.title(data['titles'][index])
plt.ylim(ymin=-.2, ymax=.3)
plt.show()
```
Dave Chappelle is known for touching on topics that a lot of comedians do not touch. In the future, a comparison between him and his contemporaries will further cement that fact. On the whole, he spends more of his time speaking in negative sentiment.
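A rough way to back up that observation with the quantities computed above (a small added sketch) is to look at how many of each routine's ten segments dip below zero polarity:
```
# Share of each routine's segments with negative polarity
import numpy as np
for index, title in enumerate(data['titles']):
    neg_share = np.mean(np.array(polarity_transcript[index]) < 0)
    print(title, round(neg_share, 2))
```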
```

Notebook used to train Lyrics Genius given a lyrics dataset and the network specification
from __future__ import print_function
# Data manipulation
import pydot
import numpy as np
import pandas as pd
# Misc libraries
import json
import pickle
import sys
import io
# Deep Learning libraries
from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking, Embedding, InputLayer
from keras.layers import LSTM, Lambda, concatenate, Bidirectional, Concatenate, SpatialDropout1D
from keras.utils.vis_utils import plot_model
import keras
from keras.layers.merge import add
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from keras.preprocessing.text import Tokenizer
from keras.layers import Input, Embedding, Activation, Flatten, Dense
from keras.layers import Conv1D, MaxPooling1D, Dropout
```
# Loading Training Data
```
print("Loading text data...")
text = io.open('data/rhcp-lyrics.txt', encoding='utf-8').read().lower()
print('corpus length:', len(text))
Tx = 40
chars = sorted(list(set(text)))
num_classes = len(chars)
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
print('number of unique characters in the corpus:', len(chars))
def build_data(text, Tx = 40, stride = 3):
"""
Create a training set by scanning a window of size Tx over the text corpus, with stride 3.
Arguments:
    text -- string, the lyrics corpus used for training
Tx -- sequence length, number of time-steps (or characters) in one training example
stride -- how much the window shifts itself while scanning
Returns:
X -- list of training examples
Y -- list of training labels
"""
X = []
Y = []
for i in range(0, len(text) - Tx, stride):
X.append(text[i: i + Tx])
Y.append(text[i + Tx])
print('number of training examples:', len(X))
return X, Y
```
# Create Training Set and Vectorize Data
```
print("Creating training set...")
X, Y = build_data(text, Tx=Tx, stride = 3)
tk = Tokenizer(num_words=None, char_level=True, oov_token='UNK')
tk.fit_on_texts(X)
# If we already have a character list, then replace the tk.word_index
# If not, just skip below part
# construct a new vocabulary
alphabet = chars
#Store alphabet to make predictions
with open('models/rhcp-alphabet.json', 'w+') as fp:
json.dump(alphabet, fp)
char_dict = {}
for i, char in enumerate(alphabet):
char_dict[char] = i + 1
# Use char_dict to replace the tk.word_index
tk.word_index = char_dict.copy()
# Add 'UNK' to the vocabulary
tk.word_index[tk.oov_token] = max(char_dict.values()) + 1
with open('models/rhcp-tokenizer.pkl', 'wb') as handle:
pickle.dump(tk, handle, protocol=pickle.HIGHEST_PROTOCOL)
# Convert string to index
train_sequences = tk.texts_to_sequences(X)
# Padding
train_data = pad_sequences(train_sequences, maxlen=Tx, padding='post')
# Convert to numpy array
train_data = np.array(train_data, dtype='float32')
# =======================Get classes================
train_classes = [elem[0] for elem in tk.texts_to_sequences(Y)]
train_class_list = [x - 1 for x in train_classes]
from keras.utils import to_categorical
train_classes = to_categorical(train_class_list)
x, y = train_data, train_classes
```
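As a quick sanity check (a minimal sketch reusing the `tk`, `train_data` and `X` objects defined above, not part of the original pipeline), we can decode the first vectorized window back into characters and compare it with the raw text window; the two should match up to any characters mapped to 'UNK'.
```
# Invert the tokenizer's char -> index mapping and decode the first training example
inv_index = {index: char for char, index in tk.word_index.items()}
decoded = ''.join(inv_index[int(i)] for i in train_data[0] if int(i) in inv_index)
print(repr(X[0]))
print(repr(decoded))
```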
# Building the Model
```
model_config = {
'rnn_width': 64,
'rnn_depth': 4,
'rnn_dropout': 0.3,
'bidirectional': True
}
embedding_size = 128
continue_learning = False
model_path = "models/rhcp_model_res.h5"
def new_lstm_cell(rnn_width, rnn_dropout, bidirectional=True, return_sequences=False):
if bidirectional:
return Bidirectional(LSTM(rnn_width, recurrent_dropout=rnn_dropout, dropout=rnn_dropout,return_sequences=return_sequences))
else:
return LSTM(rnn_width, recurrent_dropout=rnn_dropout, dropout=rnn_dropout,return_sequences=return_sequences)
def make_lstm_layers(input, rnn_width, rnn_depth, rnn_dropout, bidirectional=True):
layer_list = []
layer = input
for i in range(rnn_depth):
return_sequences = i < rnn_depth - 1
prev_layer = input if i == 0 else layer_list[-1]
        # apply the new cell to the previous layer's output (the original line created an unconnected layer object)
        layer = new_lstm_cell(rnn_width, rnn_dropout, bidirectional=bidirectional, return_sequences=return_sequences)(prev_layer)
layer_list.append(layer)
return layer, layer_list
def make_residual_lstm_layers(input, rnn_width, rnn_depth, rnn_dropout, bidirectional=True):
"""
The intermediate LSTM layers return sequences, while the last returns a single element.
The input is also a sequence. In order to match the shape of input and output of the LSTM
to sum them we can do it only for all layers but the last.
"""
x = input
layer_list = []
for i in range(rnn_depth):
return_sequences = i < rnn_depth - 1
x_rnn = Bidirectional(LSTM(rnn_width, recurrent_dropout=rnn_dropout, dropout=rnn_dropout, return_sequences=return_sequences))(x)
if return_sequences:
# Intermediate layers return sequences, input is also a sequence.
if i > 0 or input.shape[-1] == rnn_width:
x = add([x, x_rnn])
else:
# Note that the input size and RNN output has to match, due to the sum operation.
# If we want different rnn_width, we'd have to perform the sum from layer 2 on.
x = x_rnn
else:
# Last layer does not return sequences, just the last element
# so we select only the last element of the previous output.
def slice_last(x):
return x[..., -1, :]
x = add([Lambda(slice_last)(x), x_rnn])
layer_list.append(x_rnn)
return x, layer_list
def create_model_residual(model_config):
inputs = Input(shape=(Tx, ), name='sent_input', dtype='int64')
embeddings = keras.layers.Embedding(len(chars) + 1, embedding_size, input_length=Tx)(inputs)
embeddings = SpatialDropout1D(model_config['rnn_dropout'], name='spatial-dropout')(embeddings)
lstm_layer, layer_list = make_residual_lstm_layers(embeddings, **model_config)
dense_layer = keras.layers.Dense(len(chars), activation='softmax')(lstm_layer)
model = keras.Model(inputs=inputs, outputs=dense_layer)
optimizer = keras.optimizers.Adam(learning_rate=4e-3)
model.compile( loss='categorical_crossentropy', optimizer=optimizer)
return model
# Simple Deep LSTM Model without Residual Units
def create_model():
model = Sequential()
model.add(InputLayer(input_shape=(Tx, len(chars))))
model.add(LSTM(128, input_shape=(Tx, len(chars)), return_sequences=True))
model.add(Dropout(0.5))
model.add(LSTM(128))
model.add(Dropout(0.5))
model.add(Dense(len(chars), activation='softmax'))
optimizer = keras.optimizers.Adam(learning_rate=4e-3)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
return model
model = None
if continue_learning:
model = load_model(model_path)
else:
model = create_model_residual(model_config)
model.summary()
def sample(preds, temperature=1.0):
# helper function to sample an index from a probability array
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
out = np.random.choice(range(len(chars)), p = probas.ravel())
return out
history = model.fit(x, y, batch_size=128, epochs=30, verbose=True)
import matplotlib.pyplot as plt
plt.style.use('ggplot')
def plot_history(history):
loss = history.history['loss']
x = range(1, len(loss) + 1)
plt.plot(x, loss, 'b', label='Training loss')
plt.title('Training loss')
plt.legend()
plot_history(history)
# serialize weights to HDF5
model.save("models/rhcp_model_res.h5", overwrite=True)
print("Model succesfully saved to disk")
def generate_output(temperature=1.0):
generated = ''
usr_input = input("Start typing the beginning of your lyrics. Lyric-genius will complete it.\n Your input is: ")
# zero pad the sentence to Tx characters.
sentence = ('{0:0>' + str(Tx) + '}').format(usr_input).lower()
generated += usr_input
sys.stdout.write("\n\nHere is your lyric: \n\n")
sys.stdout.write(usr_input)
for i in range(300):
predict_sequence = tk.texts_to_sequences([sentence])
# Padding
predict_data = pad_sequences(predict_sequence, maxlen=Tx, padding='post')
# Convert to numpy array
x_pred = np.array(predict_data, dtype='float32')
preds = model.predict(x_pred, verbose=0)[0]
next_index = sample(preds, temperature = temperature)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
if next_char == '\n':
continue
generate_output(temperature=0.5)
```
|
github_jupyter
|

Notebook used to train Lyrics Genius given a lyrics dataset and the network specification
from __future__ import print_function
# Data manipulation
import pydot
import numpy as np
import pandas as pd
# Misc libraries
import json
import pickle
import sys
import io
# Deep Learning libraries
from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking, Embedding, InputLayer
from keras.layers import LSTM, Lambda, concatenate, Bidirectional, Concatenate, SpatialDropout1D
from keras.utils.vis_utils import plot_model
import keras
from keras.layers.merge import add
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from keras.preprocessing.text import Tokenizer
from keras.layers import Input, Embedding, Activation, Flatten, Dense
from keras.layers import Conv1D, MaxPooling1D, Dropout
print("Loading text data...")
text = io.open('data/rhcp-lyrics.txt', encoding='utf-8').read().lower()
print('corpus length:', len(text))
Tx = 40
chars = sorted(list(set(text)))
num_classes = len(chars)
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
print('number of unique characters in the corpus:', len(chars))
def build_data(text, Tx = 40, stride = 3):
"""
Create a training set by scanning a window of size Tx over the text corpus, with stride 3.
Arguments:
    text -- string, the lyrics corpus used for training
Tx -- sequence length, number of time-steps (or characters) in one training example
stride -- how much the window shifts itself while scanning
Returns:
X -- list of training examples
Y -- list of training labels
"""
X = []
Y = []
for i in range(0, len(text) - Tx, stride):
X.append(text[i: i + Tx])
Y.append(text[i + Tx])
print('number of training examples:', len(X))
return X, Y
print("Creating training set...")
X, Y = build_data(text, Tx=Tx, stride = 3)
tk = Tokenizer(num_words=None, char_level=True, oov_token='UNK')
tk.fit_on_texts(X)
# If we already have a character list, then replace the tk.word_index
# If not, just skip below part
# construct a new vocabulary
alphabet = chars
#Store alphabet to make predictions
with open('models/rhcp-alphabet.json', 'w+') as fp:
json.dump(alphabet, fp)
char_dict = {}
for i, char in enumerate(alphabet):
char_dict[char] = i + 1
# Use char_dict to replace the tk.word_index
tk.word_index = char_dict.copy()
# Add 'UNK' to the vocabulary
tk.word_index[tk.oov_token] = max(char_dict.values()) + 1
with open('models/rhcp-tokenizer.pkl', 'wb') as handle:
pickle.dump(tk, handle, protocol=pickle.HIGHEST_PROTOCOL)
# Convert string to index
train_sequences = tk.texts_to_sequences(X)
# Padding
train_data = pad_sequences(train_sequences, maxlen=Tx, padding='post')
# Convert to numpy array
train_data = np.array(train_data, dtype='float32')
# =======================Get classes================
train_classes = [elem[0] for elem in tk.texts_to_sequences(Y)]
train_class_list = [x - 1 for x in train_classes]
from keras.utils import to_categorical
train_classes = to_categorical(train_class_list)
x, y = train_data, train_classes
model_config = {
'rnn_width': 64,
'rnn_depth': 4,
'rnn_dropout': 0.3,
'bidirectional': True
}
embedding_size = 128
continue_learning = False
model_path = "models/rhcp_model_res.h5"
def new_lstm_cell(rnn_width, rnn_dropout, bidirectional=True, return_sequences=False):
if bidirectional:
return Bidirectional(LSTM(rnn_width, recurrent_dropout=rnn_dropout, dropout=rnn_dropout,return_sequences=return_sequences))
else:
return LSTM(rnn_width, recurrent_dropout=rnn_dropout, dropout=rnn_dropout,return_sequences=return_sequences)
def make_lstm_layers(input, rnn_width, rnn_depth, rnn_dropout, bidirectional=True):
layer_list = []
layer = input
for i in range(rnn_depth):
return_sequences = i < rnn_depth - 1
prev_layer = input if i == 0 else layer_list[-1]
        # apply the new cell to the previous layer's output (the original line created an unconnected layer object)
        layer = new_lstm_cell(rnn_width, rnn_dropout, bidirectional=bidirectional, return_sequences=return_sequences)(prev_layer)
layer_list.append(layer)
return layer, layer_list
def make_residual_lstm_layers(input, rnn_width, rnn_depth, rnn_dropout, bidirectional=True):
"""
The intermediate LSTM layers return sequences, while the last returns a single element.
The input is also a sequence. In order to match the shape of input and output of the LSTM
to sum them we can do it only for all layers but the last.
"""
x = input
layer_list = []
for i in range(rnn_depth):
return_sequences = i < rnn_depth - 1
x_rnn = Bidirectional(LSTM(rnn_width, recurrent_dropout=rnn_dropout, dropout=rnn_dropout, return_sequences=return_sequences))(x)
if return_sequences:
# Intermediate layers return sequences, input is also a sequence.
if i > 0 or input.shape[-1] == rnn_width:
x = add([x, x_rnn])
else:
# Note that the input size and RNN output has to match, due to the sum operation.
# If we want different rnn_width, we'd have to perform the sum from layer 2 on.
x = x_rnn
else:
# Last layer does not return sequences, just the last element
# so we select only the last element of the previous output.
def slice_last(x):
return x[..., -1, :]
x = add([Lambda(slice_last)(x), x_rnn])
layer_list.append(x_rnn)
return x, layer_list
def create_model_residual(model_config):
inputs = Input(shape=(Tx, ), name='sent_input', dtype='int64')
embeddings = keras.layers.Embedding(len(chars) + 1, embedding_size, input_length=Tx)(inputs)
embeddings = SpatialDropout1D(model_config['rnn_dropout'], name='spatial-dropout')(embeddings)
lstm_layer, layer_list = make_residual_lstm_layers(embeddings, **model_config)
dense_layer = keras.layers.Dense(len(chars), activation='softmax')(lstm_layer)
model = keras.Model(inputs=inputs, outputs=dense_layer)
optimizer = keras.optimizers.Adam(learning_rate=4e-3)
model.compile( loss='categorical_crossentropy', optimizer=optimizer)
return model
# Simple Deep LSTM Model without Residual Units
def create_model():
model = Sequential()
model.add(InputLayer(input_shape=(Tx, len(chars))))
model.add(LSTM(128, input_shape=(Tx, len(chars)), return_sequences=True))
model.add(Dropout(0.5))
model.add(LSTM(128))
model.add(Dropout(0.5))
model.add(Dense(len(chars), activation='softmax'))
optimizer = keras.optimizers.Adam(learning_rate=4e-3)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
return model
model = None
if continue_learning:
model = load_model(model_path)
else:
model = create_model_residual(model_config)
model.summary()
def sample(preds, temperature=1.0):
# helper function to sample an index from a probability array
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
out = np.random.choice(range(len(chars)), p = probas.ravel())
return out
history = model.fit(x, y, batch_size=128, epochs=30, verbose=True)
import matplotlib.pyplot as plt
plt.style.use('ggplot')
def plot_history(history):
loss = history.history['loss']
x = range(1, len(loss) + 1)
plt.plot(x, loss, 'b', label='Training loss')
plt.title('Training loss')
plt.legend()
plot_history(history)
# serialize weights to HDF5
model.save("models/rhcp_model_res.h5", overwrite=True)
print("Model succesfully saved to disk")
def generate_output(temperature=1.0):
generated = ''
usr_input = input("Start typing the beginning of your lyrics. Lyric-genius will complete it.\n Your input is: ")
# zero pad the sentence to Tx characters.
sentence = ('{0:0>' + str(Tx) + '}').format(usr_input).lower()
generated += usr_input
sys.stdout.write("\n\nHere is your lyric: \n\n")
sys.stdout.write(usr_input)
for i in range(300):
predict_sequence = tk.texts_to_sequences([sentence])
# Padding
predict_data = pad_sequences(predict_sequence, maxlen=Tx, padding='post')
# Convert to numpy array
x_pred = np.array(predict_data, dtype='float32')
preds = model.predict(x_pred, verbose=0)[0]
next_index = sample(preds, temperature = temperature)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
if next_char == '\n':
continue
generate_output(temperature=0.5)
| 0.789437 | 0.760673 |
# QGS model: Simple run example with comparison of the Covariant Lyapunov vectors computation method (see last section)
## Reinhold and Pierrehumbert 1982 model version
This model version is a simple 2-layer channel QG atmosphere truncated at wavenumber 2 on a beta-plane with a simple orography (a mountain and a valley).
More detail can be found in the articles:
* Reinhold, B. B., & Pierrehumbert, R. T. (1982). *Dynamics of weather regimes: Quasi-stationary waves and blocking*. Monthly Weather Review, **110** (9), 1105-1145. [doi:10.1175/1520-0493(1982)110%3C1105:DOWRQS%3E2.0.CO;2](https://doi.org/10.1175/1520-0493(1982)110%3C1105:DOWRQS%3E2.0.CO;2)
* Cehelsky, P., & Tung, K. K. (1987). *Theories of multiple equilibria and weather regimes—A critical reexamination. Part II: Baroclinic two-layer models*. Journal of the atmospheric sciences, **44** (21), 3282-3303. [doi:10.1175/1520-0469(1987)044%3C3282%3ATOMEAW%3E2.0.CO%3B2](https://doi.org/10.1175/1520-0469(1987)044%3C3282%3ATOMEAW%3E2.0.CO%3B2)
## Modules import
First, setting the path and loading of some modules
```
import sys, os
sys.path.extend([os.path.abspath('../../')])
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import rc
rc('font',**{'family':'serif','sans-serif':['Times'],'size':14})
```
Initializing the random number generator (for reproducibility). -- Disable if needed.
```
np.random.seed(210217)
```
Importing the model's modules
```
from qgs.params.params import QgParams
from qgs.integrators.integrator import RungeKuttaIntegrator
from qgs.functions.tendencies import create_tendencies
from qgs.plotting.util import std_plot
```
Importing the Lyapunovs Estimators
```
from qgs.toolbox.lyapunov import LyapunovsEstimator, CovariantLyapunovsEstimator
from qgs.toolbox.lyapunov import _compute_backward_lyap_traj_jit, _compute_forward_lyap_traj_jit
```
## Systems definition
General parameters
```
# Time parameters
dt = 0.1
# Save the model state every write_steps steps
write_steps = 5
number_of_trajectories = 1
number_of_perturbed_trajectories = 10
```
Setting some model parameters
```
# Model parameters instantiation with some non-default specs
model_parameters = QgParams({'phi0_npi': np.deg2rad(50.)/np.pi, 'hd':0.3})
# Mode truncation at the wavenumber 2 in both x and y spatial coordinate
model_parameters.set_atmospheric_channel_fourier_modes(2, 2)
# Changing (increasing) the orography depth and the meridional temperature gradient
model_parameters.ground_params.set_orography(0.4, 1)
model_parameters.atemperature_params.set_thetas(0.2, 0)
# Printing the model's parameters
model_parameters.print_params()
```
Creating the tendencies function
```
f, Df = create_tendencies(model_parameters)
```
## Time integration
Defining an integrator
```
integrator = RungeKuttaIntegrator()
integrator.set_func(f)
```
Start on a random initial condition and integrate over a transient time to obtain an initial condition on the attractors
```
%%time
ic = np.random.rand(model_parameters.ndim)*0.1
integrator.integrate(0., 200000., dt, ic=ic, write_steps=0)
time, ic = integrator.get_trajectories()
```
Now integrate to obtain a trajectory on the attractor
```
%%time
integrator.integrate(0., 100000., dt, ic=ic, write_steps=write_steps)
reference_time, reference_traj = integrator.get_trajectories()
varx = 0
vary = 1
varz = 2
fig = plt.figure(figsize=(10, 8))
axi = fig.add_subplot(111, projection='3d')
axi.scatter(reference_traj[varx], reference_traj[vary], reference_traj[varz], s=0.2);
axi.set_xlabel('$'+model_parameters.latex_var_string[varx]+'$')
axi.set_ylabel('$'+model_parameters.latex_var_string[vary]+'$')
axi.set_zlabel('$'+model_parameters.latex_var_string[varz]+'$');
varx = 2
vary = 1
plt.figure(figsize=(10, 8))
plt.plot(reference_traj[varx], reference_traj[vary], marker='o', ms=0.07, ls='')
plt.xlabel('$'+model_parameters.latex_var_string[varx]+'$')
plt.ylabel('$'+model_parameters.latex_var_string[vary]+'$');
var = 1
plt.figure(figsize=(10, 8))
plt.plot(model_parameters.dimensional_time*reference_time, reference_traj[var])
plt.xlabel('time (days)')
plt.ylabel('$'+model_parameters.latex_var_string[var]+'$');
```
## Comparing Covariant Lyapunov vectors computation
Here we compare the two methods used to compute the CLVs: method 0 (the Ginelli et al. algorithm) and method 1 (the subspaces intersection method). These methods are described in:
* **Method 0:**
* **Method 1:**
Covariant Lyapunovs Estimator
```
clvint = CovariantLyapunovsEstimator()
```
### Computing the CLVs with the Ginelli et al. algorithm (method 0)
```
%%time
clvint.set_func(f, Df)
clvint.compute_clvs(0., 10000., 40000., 50000., 0.1, 0.1, ic, write_steps=1)
ctl0, ctraj0, cexp0, cvec0 = clvint.get_clvs()
clvint.terminate()
```
Plotting the spectrum for reference
```
plt.figure(figsize=(15, 4))
mean_exp = np.mean(cexp0, axis=-1)
x_pos = np.arange(1.,model_parameters.ndim+1,1)
plt.bar(x_pos, mean_exp)
plt.vlines(x_pos, -0.55, np.minimum(0.,mean_exp)-0.035, linestyles='dashdot', colors='tab:gray')
plt.xticks(x_pos, map(str,range(1, model_parameters.ndim+1,1)))
yt=[-0.5,-0.4,-0.3,-0.2,-0.1,0.,0.1]
plt.yticks(yt, map(str,yt))
plt.xlim(x_pos[0]-1., x_pos[-1]+1.)
plt.ylim(np.min(mean_exp)-0.1, np.max(mean_exp)+0.1)
plt.ylabel("Lyapunov exponent");
plt.xlabel("Index of the Lyapunov exponent");
```
### Computing the CLVs with the subspaces intersection method along the same trajectory (method 1 done manually using hidden routine)
Computing the BLVs and FLVs
```
pretime = ctl0[:100001]
time = ctl0[100000:200001]
posttime = ctl0[200000:]
backtraj = ctraj0[..., :200001][np.newaxis, ...]
forwtraj = ctraj0[..., 100000:][np.newaxis, ...]
cvec0 = cvec0[..., 100000:200001]
ftraj, fexp, fvec = _compute_forward_lyap_traj_jit(f, Df, time, posttime, forwtraj, 0.1, model_parameters.ndim, 1, False, 1, clvint.b, clvint.c, clvint.a)
btraj, bexp, bvec = _compute_backward_lyap_traj_jit(f, Df, pretime, time, backtraj, 0.1, model_parameters.ndim, 1, False, 1, clvint.b, clvint.c, clvint.a)
```
Computing the subspaces intersections
```
ctraj1 = forwtraj[..., :100001]
n_records = ctraj1.shape[-1]
cvec1 = np.zeros((model_parameters.ndim, model_parameters.ndim, n_records))
i_traj = 0
for ti in range(n_records):
for j in range(model_parameters.ndim):
u, z, w = np.linalg.svd(bvec[i_traj, :, :j+1, ti].T @ fvec[i_traj, :, :model_parameters.ndim-j, ti])
basis = bvec[i_traj, :, :j+1, ti] @ u
cvec1[:, j, ti] = basis[:, 0]
```
### Showing the first CLVs obtained by both method at a given time
Obtained by method 0
```
cvec0[:,0,0]
```
Obtained by method 1
```
cvec1[:,0,0]
```
### Plotting component by component the difference between each vector obtained by the two different methods
Each component is plotted with a different color
```
vars = slice(0, model_parameters.ndim)
fig = plt.figure(figsize=(20, int(model_parameters.ndim*8/2)), constrained_layout=False)
grid = fig.add_gridspec(int(model_parameters.ndim/2), 2)
axs = grid.subplots()
for vec, ax in enumerate(axs.flatten()):
ax.plot(model_parameters.dimensional_time*time, (np.abs(cvec0[vars,vec,:])-np.abs(cvec1[vars,vec,:])).T);
ax.set_xlabel('time (days)')
ax.set_title('CLV '+str(vec+1))
```
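As a quick numerical summary of the agreement between the two methods (a minimal sketch reusing the `cvec0` and `cvec1` arrays computed above, which cover the same time window), we can also print the largest componentwise discrepancy, comparing absolute values since each CLV is only defined up to its sign.
```
# Maximum componentwise difference between the CLVs obtained by method 0 and method 1
max_diff = np.max(np.abs(np.abs(cvec0) - np.abs(cvec1)))
print('Maximum componentwise difference between the two methods:', max_diff)
```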
|
github_jupyter
|
import sys, os
sys.path.extend([os.path.abspath('../../')])
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import rc
rc('font',**{'family':'serif','sans-serif':['Times'],'size':14})
np.random.seed(210217)
from qgs.params.params import QgParams
from qgs.integrators.integrator import RungeKuttaIntegrator
from qgs.functions.tendencies import create_tendencies
from qgs.plotting.util import std_plot
from qgs.toolbox.lyapunov import LyapunovsEstimator, CovariantLyapunovsEstimator
from qgs.toolbox.lyapunov import _compute_backward_lyap_traj_jit, _compute_forward_lyap_traj_jit
# Time parameters
dt = 0.1
# Save the model state every write_steps steps
write_steps = 5
number_of_trajectories = 1
number_of_perturbed_trajectories = 10
# Model parameters instantiation with some non-default specs
model_parameters = QgParams({'phi0_npi': np.deg2rad(50.)/np.pi, 'hd':0.3})
# Mode truncation at the wavenumber 2 in both x and y spatial coordinate
model_parameters.set_atmospheric_channel_fourier_modes(2, 2)
# Changing (increasing) the orography depth and the meridional temperature gradient
model_parameters.ground_params.set_orography(0.4, 1)
model_parameters.atemperature_params.set_thetas(0.2, 0)
# Printing the model's parameters
model_parameters.print_params()
f, Df = create_tendencies(model_parameters)
integrator = RungeKuttaIntegrator()
integrator.set_func(f)
%%time
ic = np.random.rand(model_parameters.ndim)*0.1
integrator.integrate(0., 200000., dt, ic=ic, write_steps=0)
time, ic = integrator.get_trajectories()
%%time
integrator.integrate(0., 100000., dt, ic=ic, write_steps=write_steps)
reference_time, reference_traj = integrator.get_trajectories()
varx = 0
vary = 1
varz = 2
fig = plt.figure(figsize=(10, 8))
axi = fig.add_subplot(111, projection='3d')
axi.scatter(reference_traj[varx], reference_traj[vary], reference_traj[varz], s=0.2);
axi.set_xlabel('$'+model_parameters.latex_var_string[varx]+'$')
axi.set_ylabel('$'+model_parameters.latex_var_string[vary]+'$')
axi.set_zlabel('$'+model_parameters.latex_var_string[varz]+'$');
varx = 2
vary = 1
plt.figure(figsize=(10, 8))
plt.plot(reference_traj[varx], reference_traj[vary], marker='o', ms=0.07, ls='')
plt.xlabel('$'+model_parameters.latex_var_string[varx]+'$')
plt.ylabel('$'+model_parameters.latex_var_string[vary]+'$');
var = 1
plt.figure(figsize=(10, 8))
plt.plot(model_parameters.dimensional_time*reference_time, reference_traj[var])
plt.xlabel('time (days)')
plt.ylabel('$'+model_parameters.latex_var_string[var]+'$');
clvint = CovariantLyapunovsEstimator()
%%time
clvint.set_func(f, Df)
clvint.compute_clvs(0., 10000., 40000., 50000., 0.1, 0.1, ic, write_steps=1)
ctl0, ctraj0, cexp0, cvec0 = clvint.get_clvs()
clvint.terminate()
plt.figure(figsize=(15, 4))
mean_exp = np.mean(cexp0, axis=-1)
x_pos = np.arange(1.,model_parameters.ndim+1,1)
plt.bar(x_pos, mean_exp)
plt.vlines(x_pos, -0.55, np.minimum(0.,mean_exp)-0.035, linestyles='dashdot', colors='tab:gray')
plt.xticks(x_pos, map(str,range(1, model_parameters.ndim+1,1)))
yt=[-0.5,-0.4,-0.3,-0.2,-0.1,0.,0.1]
plt.yticks(yt, map(str,yt))
plt.xlim(x_pos[0]-1., x_pos[-1]+1.)
plt.ylim(np.min(mean_exp)-0.1, np.max(mean_exp)+0.1)
plt.ylabel("Lyapunov exponent");
plt.xlabel("Index of the Lyapunov exponent");
pretime = ctl0[:100001]
time = ctl0[100000:200001]
posttime = ctl0[200000:]
backtraj = ctraj0[..., :200001][np.newaxis, ...]
forwtraj = ctraj0[..., 100000:][np.newaxis, ...]
cvec0 = cvec0[..., 100000:200001]
ftraj, fexp, fvec = _compute_forward_lyap_traj_jit(f, Df, time, posttime, forwtraj, 0.1, model_parameters.ndim, 1, False, 1, clvint.b, clvint.c, clvint.a)
btraj, bexp, bvec = _compute_backward_lyap_traj_jit(f, Df, pretime, time, backtraj, 0.1, model_parameters.ndim, 1, False, 1, clvint.b, clvint.c, clvint.a)
ctraj1 = forwtraj[..., :100001]
n_records = ctraj1.shape[-1]
cvec1 = np.zeros((model_parameters.ndim, model_parameters.ndim, n_records))
i_traj = 0
for ti in range(n_records):
for j in range(model_parameters.ndim):
u, z, w = np.linalg.svd(bvec[i_traj, :, :j+1, ti].T @ fvec[i_traj, :, :model_parameters.ndim-j, ti])
basis = bvec[i_traj, :, :j+1, ti] @ u
cvec1[:, j, ti] = basis[:, 0]
cvec0[:,0,0]
cvec1[:,0,0]
vars = slice(0, model_parameters.ndim)
fig = plt.figure(figsize=(20, int(model_parameters.ndim*8/2)), constrained_layout=False)
grid = fig.add_gridspec(int(model_parameters.ndim/2), 2)
axs = grid.subplots()
for vec, ax in enumerate(axs.flatten()):
ax.plot(model_parameters.dimensional_time*time, (np.abs(cvec0[vars,vec,:])-np.abs(cvec1[vars,vec,:])).T);
ax.set_xlabel('time (days)')
ax.set_title('CLV '+str(vec+1))
| 0.408513 | 0.984441 |
### Load the libraries
```
import os
import numpy as np
import pprint
import copy
from math import sqrt
from scipy.linalg import solve_triangular
```
### Load the functions
```
%run -i funciones_factorizacion_QR.py
```
# Unit Test
## Block elimination with QR for systems with a unique solution
### Example 1 - 2 x 2 matrix
We start by generating a system of linear equations with a unique solution.
```
# Generate a 2 x 2 matrix
A = np.array([[2, 3], [3, -1]], dtype='d')
b = np.array([[1], [-1]], dtype='d')
print("A:")
pprint.pprint(A)
print("b:")
pprint.pprint(b)
```
We compute the determinant of the matrix A
```
np.linalg.det(A)
```
Since the determinant of A is nonzero, the system has a unique solution.
**Solving the system with NumPy**
We use NumPy's *np.linalg.solve(A,b)* to confirm that this system of equations does indeed have a solution.
```
np.linalg.solve(A,b)
```
We can see that the NumPy function returns the solution to the proposed system of linear equations.
**Programmers' implementation - Block elimination with QR**
We use the eliminacion_bloques function implemented by the programmers to validate its behaviour when solving a system of linear equations with a unique solution.
```
eliminacion_bloques(A,b)
```
We can see that the function returns the same solution as NumPy.
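As an extra check (a minimal sketch, assuming eliminacion_bloques returns an array-like solution), we can compare the two solutions programmatically:
```
# Compare the block-elimination solution with NumPy's solver entry by entry
x_np = np.linalg.solve(A, b).ravel()
x_qr = np.asarray(eliminacion_bloques(A, b)).ravel()
print(np.allclose(x_np, x_qr))  # expected: True
```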
### Example 2 - 10^2 x 10^2 matrix
We now generate a 10^2 x 10^2 system of linear equations.
We fix a random seed so the example is reproducible.
```
np.random.seed(2020)
m = 100
n = 100
A = crear_matriz_aleatoria(m, n, 5, -5,True)
# sum the entries of each row to build the vector b, so the solution vector x has the value 1 in every entry
b = np.sum(A, axis=1)
print("A:")
pprint.pprint(A)
print("b:")
pprint.pprint(b)
```
We compute the determinant
```
np.linalg.det(A)
```
The determinant is close to zero, but it is not zero.
**Solving the system with NumPy**
We use NumPy's *np.linalg.solve(A,b)* once more to validate that the given system of equations has a solution.
```
np.linalg.solve(A,b)
```
We can see that the NumPy function returns the solution we expected.
**Programmers' implementation - Block elimination with QR**
We use the eliminacion_bloques function implemented by the programmers to validate its behaviour when solving a 10^2 x 10^2 system of linear equations.
```
eliminacion_bloques(A,b)
```
We can see that the function returns the same result.
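Since b was built as the row sums of A, the exact solution is the all-ones vector, which we can verify directly (again a minimal sketch, assuming eliminacion_bloques returns an array-like solution):
```
# b = A @ ones(n), so the exact solution is the all-ones vector
x_qr = np.asarray(eliminacion_bloques(A, b)).ravel()
print(np.allclose(x_qr, np.ones(n)))  # expected: True
```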
## Summary
The function eliminacion_bloques(A,b) can effectively solve systems of equations with a unique solution.
|
github_jupyter
|
import os
import numpy as np
import pprint
import copy
from math import sqrt
from scipy.linalg import solve_triangular
%run -i funciones_factorizacion_QR.py
# Generate a 2 x 2 matrix
A = np.array([[2, 3], [3, -1]], dtype='d')
b = np.array([[1], [-1]], dtype='d')
print("A:")
pprint.pprint(A)
print("b:")
pprint.pprint(b)
np.linalg.det(A)
np.linalg.solve(A,b)
eliminacion_bloques(A,b)
np.random.seed(2020)
m = 100
n = 100
A = crear_matriz_aleatoria(m, n, 5, -5,True)
# sum the entries of each row to build the vector b, so the solution vector x has the value 1 in every entry
b = np.sum(A, axis=1)
print("A:")
pprint.pprint(A)
print("b:")
pprint.pprint(b)
np.linalg.det(A)
np.linalg.solve(A,b)
eliminacion_bloques(A,b)
| 0.247714 | 0.930774 |
# Navigation
---
You are welcome to use this coding environment to train your agent for the project. Follow the instructions below to get started!
### 1. Start the Environment
Run the next code cell to install a few packages. This line will take a few minutes to run!
```
!pip -q install ./python
```
The environment is already saved in the Workspace and can be accessed at the file path provided below. Please run the next code cell without making any changes.
```
import numpy as np
import torch
import matplotlib.pyplot as plt
from collections import deque
from unityagents import UnityEnvironment
from agent import Agent
# please do not modify the line below
env = UnityEnvironment(file_name="/data/Banana_Linux_NoVis/Banana.x86_64")
```
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
```
### 2. Examine the State and Action Spaces
Run the code cell below to print some information about the environment.
```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents in the environment
print('Number of agents:', len(env_info.agents))
# number of actions
action_size = brain.vector_action_space_size
print('Number of actions:', action_size)
# examine the state space
state = env_info.vector_observations[0]
print('States look like:', state)
state_size = len(state)
print('States have length:', state_size)
```
### 3. Take Random Actions in the Environment
In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.
Note that **in this coding environment, you will not be able to watch the agent while it is training**, and you should set `train_mode=True` to restart the environment.
```
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
state = env_info.vector_observations[0] # get the current state
score = 0 # initialize the score
while True:
action = np.random.randint(action_size) # select an action
env_info = env.step(action)[brain_name] # send the action to the environment
next_state = env_info.vector_observations[0] # get the next state
reward = env_info.rewards[0] # get the reward
done = env_info.local_done[0] # see if episode has finished
score += reward # update the score
state = next_state # roll over the state to next time step
if done: # exit loop if episode finished
break
print("Score: {}".format(score))
```
### 4. It's Your Turn!
Now it's your turn to train your own agent to solve the environment! A few **important notes**:
- When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:
```python
env_info = env.reset(train_mode=True)[brain_name]
```
- To structure your work, you're welcome to work directly in this Jupyter notebook, or you might like to start over with a new file! You can see the list of files in the workspace by clicking on **_Jupyter_** in the top left corner of the notebook.
- In this coding environment, you will not be able to watch the agent while it is training. However, **_after training the agent_**, you can download the saved model weights to watch the agent on your own machine!
```
def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995,
train_mode=True, ckpt_path='checkpoint.pth'):
"""Deep Q-Learning.
Params
======
n_episodes (int): maximum number of training episodes
max_t (int): maximum number of timesteps per episode
eps_start (float): starting value of epsilon, for epsilon-greedy action selection
eps_end (float): minimum value of epsilon
eps_decay (float): multiplicative factor (per episode) for decreasing epsilon
train_mode (bool): run training mode if `True`
"""
scores = [] # list containing scores from each episode
scores_window = deque(maxlen=100) # last 100 scores
eps = eps_start # initialize epsilon
for i_episode in range(1, n_episodes+1):
env_info = env.reset(train_mode=train_mode)[brain_name] # reset environment
state = env_info.vector_observations[0] # get current state
score = 0
for t in range(max_t):
action = agent.act(state, eps) # select an action
env_info = env.step(action)[brain_name] # send action to environment
next_state = env_info.vector_observations[0] # get next state
reward = env_info.rewards[0] # get reward
done = env_info.local_done[0] # see if episode has finished
agent.step(state, action, reward, next_state, done) # learning step
state = next_state
score += reward
if done:
break
scores_window.append(score) # save most recent score to window
scores.append(score) # save most recent score to total
eps = max(eps_end, eps_decay*eps) # decrease epsilon
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="")
if i_episode % 100 == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)))
if np.mean(scores_window) >= 13.0:
print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window)))
if train_mode: torch.save(agent.qnetwork_local.state_dict(), ckpt_path)
break
return scores
```
DQN
```
agent = Agent(state_size=state_size, action_size=action_size, seed=0)
scores = dqn(n_episodes=500, eps_decay=0.98, ckpt_path='v1_checkpoint.pth')
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores, label='DQN')
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.legend(loc='upper left')
plt.show()
```
Dueling DQN
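For reference, the dueling idea splits the network head into a state-value stream and an advantage stream and recombines them. The sketch below only illustrates that recombination; it is not the actual layer layout used by the project's agent/model code.
```
import torch.nn as nn

class DuelingHead(nn.Module):
    """Illustrative dueling head: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    def __init__(self, hidden_size, action_size):
        super().__init__()
        self.value = nn.Linear(hidden_size, 1)                 # state-value stream V(s)
        self.advantage = nn.Linear(hidden_size, action_size)   # advantage stream A(s,a)
    def forward(self, features):
        v = self.value(features)
        a = self.advantage(features)
        return v + a - a.mean(dim=1, keepdim=True)
```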
```
agent = Agent(state_size=state_size, action_size=action_size, seed=0, duel=True)
scores = dqn(n_episodes=500, eps_decay=0.98, ckpt_path='v2_checkpoint.pth')
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores, label='Duel DQN')
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.legend(loc='upper left')
plt.show()
```
Double DQN
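Double DQN decouples action selection from action evaluation when forming the TD target: the online network picks the greedy next action and the target network evaluates it. The snippet below is a hedged sketch of that target computation, not the actual update inside `agent.py`.
```
import torch

def double_dqn_targets(qnetwork_local, qnetwork_target, rewards, next_states, dones, gamma=0.99):
    # Select the greedy next action with the online (local) network...
    best_actions = qnetwork_local(next_states).detach().argmax(dim=1, keepdim=True)
    # ...but evaluate it with the target network
    q_next = qnetwork_target(next_states).detach().gather(1, best_actions)
    # TD target: y = r + gamma * Q_target(s', argmax_a Q_local(s', a)) * (1 - done)
    return rewards + gamma * q_next * (1 - dones)
```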
```
agent = Agent(state_size=state_size, action_size=action_size, seed=0, double=True)
scores = dqn(eps_decay=0.98, ckpt_path='v3_checkpoint.pth')
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores, label='DDQN')
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.legend(loc='upper left')
plt.show()
```
DDQN + Prioritized Experience Replay
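Prioritized experience replay samples transitions with probability proportional to a power of their TD error and corrects the resulting bias with importance-sampling weights. The numbers below are purely illustrative (hypothetical priorities, with `alpha`/`beta` values chosen only for the example); the actual buffer lives in the agent code.
```
import numpy as np

priorities = np.array([2.0, 0.5, 1.0, 0.1])              # hypothetical |TD error| + eps values
alpha, beta = 0.6, 0.4                                    # example hyperparameters
probs = priorities**alpha / np.sum(priorities**alpha)     # P(i) = p_i^alpha / sum_k p_k^alpha
weights = (len(priorities) * probs)**(-beta)              # importance-sampling weights
weights /= weights.max()                                  # normalize by the largest weight
print(probs, weights)
```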
```
agent = Agent(state_size=state_size, action_size=action_size, seed=0, double=True, prioritized=True)
scores = dqn(n_episodes=500, eps_decay=0.98, ckpt_path='v4_checkpoint.pth')
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores, label='DDQN + PER')
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.legend(loc='upper left')
plt.show()
```
Dueling DDQN
```
agent = Agent(state_size=state_size, action_size=action_size, seed=0, double=True, duel=True)
scores = dqn(eps_decay=0.98, ckpt_path='v5_checkpoint.pth')
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores, label='Duel DDQN')
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.legend(loc='upper left')
plt.show()
env.close()
```
|
github_jupyter
|
!pip -q install ./python
import numpy as np
import torch
import matplotlib.pyplot as plt
from collections import deque
from unityagents import UnityEnvironment
from agent import Agent
# please do not modify the line below
env = UnityEnvironment(file_name="/data/Banana_Linux_NoVis/Banana.x86_64")
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents in the environment
print('Number of agents:', len(env_info.agents))
# number of actions
action_size = brain.vector_action_space_size
print('Number of actions:', action_size)
# examine the state space
state = env_info.vector_observations[0]
print('States look like:', state)
state_size = len(state)
print('States have length:', state_size)
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
state = env_info.vector_observations[0] # get the current state
score = 0 # initialize the score
while True:
action = np.random.randint(action_size) # select an action
env_info = env.step(action)[brain_name] # send the action to the environment
next_state = env_info.vector_observations[0] # get the next state
reward = env_info.rewards[0] # get the reward
done = env_info.local_done[0] # see if episode has finished
score += reward # update the score
state = next_state # roll over the state to next time step
if done: # exit loop if episode finished
break
print("Score: {}".format(score))
env_info = env.reset(train_mode=True)[brain_name]
def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995,
train_mode=True, ckpt_path='checkpoint.pth'):
"""Deep Q-Learning.
Params
======
n_episodes (int): maximum number of training episodes
max_t (int): maximum number of timesteps per episode
eps_start (float): starting value of epsilon, for epsilon-greedy action selection
eps_end (float): minimum value of epsilon
eps_decay (float): multiplicative factor (per episode) for decreasing epsilon
train_mode (bool): run training mode if `True`
"""
scores = [] # list containing scores from each episode
scores_window = deque(maxlen=100) # last 100 scores
eps = eps_start # initialize epsilon
for i_episode in range(1, n_episodes+1):
env_info = env.reset(train_mode=train_mode)[brain_name] # reset environment
state = env_info.vector_observations[0] # get current state
score = 0
for t in range(max_t):
action = agent.act(state, eps) # select an action
env_info = env.step(action)[brain_name] # send action to environment
next_state = env_info.vector_observations[0] # get next state
reward = env_info.rewards[0] # get reward
done = env_info.local_done[0] # see if episode has finished
agent.step(state, action, reward, next_state, done) # learning step
state = next_state
score += reward
if done:
break
scores_window.append(score) # save most recent score to window
scores.append(score) # save most recent score to total
eps = max(eps_end, eps_decay*eps) # decrease epsilon
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="")
if i_episode % 100 == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)))
if np.mean(scores_window) >= 13.0:
print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window)))
if train_mode: torch.save(agent.qnetwork_local.state_dict(), ckpt_path)
break
return scores
agent = Agent(state_size=state_size, action_size=action_size, seed=0)
scores = dqn(n_episodes=500, eps_decay=0.98, ckpt_path='v1_checkpoint.pth')
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores, label='DQN')
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.legend(loc='upper left')
plt.show()
agent = Agent(state_size=state_size, action_size=action_size, seed=0, duel=True)
scores = dqn(n_episodes=500, eps_decay=0.98, ckpt_path='v2_checkpoint.pth')
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores, label='Duel DQN')
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.legend(loc='upper left')
plt.show()
agent = Agent(state_size=state_size, action_size=action_size, seed=0, double=True)
scores = dqn(eps_decay=0.98, ckpt_path='v3_checkpoint.pth')
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores, label='DDQN')
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.legend(loc='upper left')
plt.show()
agent = Agent(state_size=state_size, action_size=action_size, seed=0, double=True, prioritized=True)
scores = dqn(n_episodes=500, eps_decay=0.98, ckpt_path='v4_checkpoint.pth')
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores, label='DDQN + PER')
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.legend(loc='upper left')
plt.show()
agent = Agent(state_size=state_size, action_size=action_size, seed=0, double=True, duel=True)
scores = dqn(eps_decay=0.98, ckpt_path='v5_checkpoint.pth')
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores, label='Duel DDQN')
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.legend(loc='upper left')
plt.show()
env.close()
| 0.665302 | 0.920074 |
# **Business Intuition**
You work for a social media platform. Your task is to create a solution using deep learning to discern whether a post is holiday-related in an effort to better monetize the platform.
**Task**
You are given the following six categories. You are required to classify the images in the dataset based on these categories.
Miscellaneous
Christmas_Tree
Jacket
Candle
Airplane
Snowman
**Data description**
This data set consists of the following two columns:
| Column Name | Description |
| --- | --- |
| Image | Name of image |
| Class | Category of image |
The data folder consists of two folders and one .csv file. The details are as follows:
train: Contains 6469 images for 6 classes
test: Contains 3489 images
train.csv: 6469 x 2 (one row per training image)
**Submission format**
Image,Class
image3476.jpg,Miscellaneous
image5198.jpg,Candle
image4183.jpg,Snowman
image1806.jpg,Miscellaneous
image7831.jpg,Miscellaneous
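A minimal sketch of producing a file in this format, assuming `test_images` holds the test file names and `pred_labels` the corresponding predicted class names (both are hypothetical placeholders here):
```
import pandas as pd
# test_images and pred_labels are placeholders for your own test file names and predictions
submission = pd.DataFrame({'Image': test_images, 'Class': pred_labels})
submission.to_csv('submission.csv', index=False)
```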
**Evaluation metric**
$ score = {100* f1\_score(actual\_values,predicted\_values,average = 'weighted')} $
Note: To avoid any discrepancies in the scoring, ensure all the index column (Image) values in the submitted file match the values in the provided test folder.
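A small illustration of how the metric behaves, using a handful of hypothetical labels:
```
from sklearn.metrics import f1_score
actual    = ['Candle', 'Snowman', 'Jacket', 'Candle']
predicted = ['Candle', 'Snowman', 'Candle', 'Candle']
score = 100 * f1_score(actual, predicted, average='weighted')
print(score)
```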
# Import the Required Packages
```
import os.path, sys, math
import cv2
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from tqdm import tqdm
from glob import glob
from PIL import Image
from imgaug import augmenters as iaa
import warnings
import random as rn
from keras import backend as K
import tensorflow as tf
warnings.filterwarnings("ignore")
%matplotlib inline
sns.set(style = 'whitegrid')
def random_seed(num):
np.random.seed(num)
rn.seed(num)
try:
tf.random.set_seed(num)
return(f"Info: Tensorflow Version {tf.__version__}")
except:
tf.set_random_seed(num)
return(f"Info: Tensorflow Version {tf.__version__}")
random_seed(30)
```
# Load the Datasets
```
def path(path_to_train):
for dirname, _, filenames in os.walk(path_to_train):
for filename in filenames:
return os.path.join(dirname, filename)
path('D:/DataSets/dataset/train/')
train = pd.read_csv('D:/DataSets/dataset/train.csv')
train.head()
```
# Load an image to check
```
def load_img():
img = cv2.imread('D:/DataSets/dataset/train/image1.jpg').astype(np.float32) / 255
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
return img
def display_img(img):
fig = plt.figure(figsize=(5,5))
ax = fig.add_subplot(111)
ax.imshow(img)
i = load_img()
display_img(i)
i.shape
```
# Basic EDA
```
train.info()
train.isnull().any()
train.describe()
train['Class'].value_counts()
label_counts = train.Class.value_counts()
plt.figure(figsize = (10,5))
sns.barplot(label_counts.index, label_counts.values, alpha = 0.9)
plt.xticks(rotation = 'vertical')
plt.xlabel('Image Class', fontsize =12)
plt.ylabel('Counts', fontsize = 12)
plt.show()
```
# Split the DataFrame
```
from sklearn.model_selection import train_test_split
train_df,test_df = train_test_split(train,test_size=.15,stratify=train.Class.values,shuffle=True)
train_df.reset_index(inplace=True,drop=True)
test_df.reset_index(inplace=True,drop=True)
train_df.head()
test_df.head()
```
# Import the Libraries for model building
```
from keras.layers import Input, Lambda, Dense, Flatten
from keras.models import Model
from keras.applications.inception_resnet_v2 import InceptionResNetV2
from keras.applications.inception_resnet_v2 import preprocess_input
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.preprocessing import image
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras import backend as K
from tensorflow.keras import applications
from tensorflow.keras.models import Model
from keras import optimizers
from keras.utils import to_categorical
from keras.layers.normalization import BatchNormalization
from keras.utils import np_utils
from keras.callbacks import EarlyStopping
```
# Divide & Get the images for training Purpose
We will use tf.keras.preprocessing.image.ImageDataGenerator
```
train_image = 'D:/DataSets/dataset/train/'
train_datagen = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
training_set = train_datagen.flow_from_dataframe(dataframe = train_df, directory = train_image, x_col='Image', y_col='Class',
weight_col=None, target_size=(299, 299), color_mode='rgb',
classes=None, class_mode='categorical', batch_size=32, shuffle=True,
seed=None, save_to_dir=None, save_prefix='',
save_format='png', subset=None, interpolation='nearest',
validate_filenames=True)
```
# Divide & Get the images for validation Purpose
```
test_datagen = ImageDataGenerator(rescale=1./255)
test_set = test_datagen.flow_from_dataframe(dataframe = test_df, directory = train_image, x_col='Image', y_col='Class',
weight_col=None, target_size=(299, 299), color_mode='rgb',
classes=None, class_mode='categorical', batch_size=32, shuffle=True,
seed=None, save_to_dir=None, save_prefix='',
save_format='png', subset=None, interpolation='nearest',
validate_filenames=True)
```
# **InceptionResNetV2 Model**
```
image_shape = [299, 299]
inception_model = tf.keras.applications.InceptionResNetV2(include_top=False,weights="imagenet",
input_shape=image_shape + [3])
for layer in inception_model.layers:
layer.trainable = False
x = Flatten()(inception_model.output)
prediction = Dense(6, activation='softmax')(x)
model = Model(inputs=inception_model.input, outputs=prediction)
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model_generate = model.fit(training_set,
steps_per_epoch=training_set.n//32,
epochs=10,
validation_data=test_set,
validation_steps=test_set.n//32)
```
# Plot the loss & the accuracy
```
# Loss
plt.plot(model_generate.history['loss'], label='train loss')
plt.plot(model_generate.history['val_loss'], label='val loss')
plt.legend()
plt.show()
plt.savefig('LossVal_loss')
# Accuracies
plt.plot(model_generate.history['accuracy'], label='train acc')
plt.plot(model_generate.history['val_accuracy'], label='val acc')
plt.legend()
plt.show()
plt.savefig('AccVal_acc')
```
# **VGG19 Model**
```
vgg19_model = tf.keras.applications.VGG19(include_top=False,
weights="imagenet",
input_shape=image_shape + [3])
add_model = Sequential()
add_model.add(Flatten(input_shape=vgg19_model.output_shape[1:]))
add_model.add(Dropout(0.3))
add_model.add(Dense(128, activation='relu'))
add_model.add(Dropout(0.5))
add_model.add(Dense(6, activation='softmax'))  # 6 image classes (was train_df.shape[1], which is just the number of DataFrame columns)
model = Model(inputs=vgg19_model.input, outputs=add_model(vgg19_model.output))
model.compile(loss='categorical_crossentropy', optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
metrics=['accuracy'])
model.summary()
batch_size = 8
epochs = 15
# Augmentation generator for the VGG19 model; rescaled to match the validation generator
training_datagen = ImageDataGenerator(rescale=1./255,
                                      rotation_range=30,
                                      width_shift_range=0.1,
                                      height_shift_range=0.1,
                                      shear_range=0.2,
                                      zoom_range=0.2,
                                      horizontal_flip=True)
# Read the training images listed in train_df from disk (the original call passed raw
# DataFrames and referenced undefined x_train / callbacks, which cannot work)
training_generator = training_datagen.flow_from_dataframe(dataframe=train_df, directory=train_image,
                                                          x_col='Image', y_col='Class',
                                                          target_size=(299, 299), class_mode='categorical',
                                                          batch_size=batch_size, shuffle=True)
history = model.fit_generator(training_generator,
                              steps_per_epoch=training_generator.n // batch_size,
                              epochs=epochs,
                              validation_data=test_set,
                              validation_steps=test_set.n // 32)
```
# Plot the loss & the accuracy
```
# Loss
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.legend()
plt.show()
plt.savefig('LossVal_loss')
# Accuracies
plt.plot(history.history['accuracy'], label='train acc')
plt.plot(history.history['val_accuracy'], label='val acc')
plt.legend()
plt.show()
plt.savefig('AccVal_acc')
```
# **ResNet50 Model**
```
base_model2 = tf.keras.applications.ResNet50(include_top=False,
weights="imagenet",
input_shape=image_shape + [3])
for layers in base_model2.layers[:-5]:
layers.trainable=False
add_model2 = Sequential()
add_model2.add(base_model2)
add_model2.add(Conv2D(64,(3,3),activation='relu'))
add_model2.add(Conv2D(32,(3,3),activation='relu'))
add_model2.add(Flatten())
add_model2.add(Dropout(0.3))
add_model2.add(Dense(512, activation='relu'))
add_model2.add(Dropout(0.5))
add_model2.add(Dense(6, activation='softmax'))  # 6 image classes (was train_df.shape[1], which is just the number of DataFrame columns)
add_model2.compile(loss='categorical_crossentropy', optimizer='adam',
metrics=['accuracy'])
add_model2.summary()
# Train on the generators defined earlier (the original call referenced undefined variables)
history2 = add_model2.fit_generator(training_set,
                                    steps_per_epoch=training_set.n // 32,
                                    epochs=epochs,
                                    validation_data=test_set,
                                    validation_steps=test_set.n // 32)
```
# Plot the loss & the accuracy
```
# Loss
plt.plot(history2.history['loss'], label='train loss')
plt.plot(history2.history['val_loss'], label='val loss')
plt.legend()
plt.show()
plt.savefig('LossVal_loss')
# Accuracies
plt.plot(history2.history['accuracy'], label='train acc')
plt.plot(history2.history['val_accuracy'], label='val acc')
plt.legend()
plt.show()
plt.savefig('AccVal_acc')
```
# Tutorial 6: Coupling the finite and boundary element methods
In this tutorial, we will look at how Bempp can be used alongside the finite element library FEniCSx to solve a transmission problem. To run this tutorial, you will need to have FEniCSx installed. FEniCSx is included in the [Bempp Docker image](https://bempp.com/installation.html), so you may like to use that rather than installing FEniCSx yourself.
For this example, we let $\Omega$ be a unit cube and we solve a transmission problem with different material properties inside and outside the cube.
As an incident wave, we use
$$
p_\text{inc}(\mathbf{x})=\mathrm{e}^{\mathrm{i} k \mathbf{x}\cdot\mathbf{d}},
$$
where $\mathbf{x}=(x_0,x_1,x_2)$ and $\mathbf{d} = \frac{1}{\sqrt{3}}(1,1,1)$ is the direction of the incident wave.
The PDE we want to solve is
$$
\Delta p + n(\mathbf{x})^2 k^2 p = 0, \quad \text{ in } \Omega\\
\Delta p + k^2 p = 0, \quad \text{ in } \mathbb{R}^3 \backslash \Omega
$$
In this example, we use $n(\mathbf{x}) = 0.5$. For simplicity we have chosen $n$ to be constant.
As $n$ is constant, we could actually use BEM inside the domain too; but if $n$ were not constant, BEM could not be used, and the benefit of coupling with FEM is more apparent.
## Formulation
### FEM part
In $\Omega$, the FEM part is formulated as
$$
\int_\Omega \nabla p\cdot\nabla v -k^2\int_\Omega n^2pv - \int_{\partial\Omega} v\frac{\partial p}{\partial \nu} = 0,
$$
or
$$
\langle\nabla p,\nabla v\rangle_\Omega - k^2\langle n^2p,v\rangle_\Omega - \langle \lambda,v\rangle_\Gamma=0,
$$
where $\lambda=\frac{\partial p}{\partial \nu}$.
Later, we will write this as an operator equation, as this more closely matches the BEM approach:
$$
\mathsf{A}u-k^2 \mathsf{M}u-\mathsf{M}_\Gamma \lambda = 0.
$$
### BEM part
Outside the cube, we split $p$ into $p_\text{s}+p_\text{inc}$.
#### Representation formula
$$
p_\text{s} = \mathcal{D}p-\mathcal{S}\lambda,
$$
where $\mathcal{S}$ is the single layer potential operator; $\mathcal{D}$ is the double layer potential operator; and $\lambda$ is the normal derivative of $p$ on the surface of the cube.
#### Boundary integral equation
$$
\left(\tfrac{1}{2}\mathsf{I}-\mathsf{D}\right)p+\mathsf{S}\lambda = p_\text{inc},
$$
where $\mathsf{S}$ is the single layer boundary operator; $\mathsf{D}$ is the double layer boundary operator; and $\mathsf{I}$ is the identity operator.
### Overall formulation
Combining the FEM and BEM parts of the formulation, we have two simultaneous operator equations in terms of $p$ and $\lambda$. We can write this as a blocked system:
$$
\begin{bmatrix}
\mathsf{A}-k^2 \mathsf{M} & -\mathsf{M}_\Gamma\\
\tfrac{1}{2}\mathsf{I}-\mathsf{D} & \mathsf{S}
\end{bmatrix}
\begin{bmatrix}
p\\
\lambda
\end{bmatrix}=\begin{bmatrix}
0\\
p_\text{inc}
\end{bmatrix}.
$$
## Solving with Bempp
We begin by importing DOLFINx (the FEniCSx python library), UFL (FEniCS's unified form language), MPI, Bempp and Numpy. We also disable Bempp's logging messages (as otherwise a lot will appear during the solve step).
```
import dolfinx
import dolfinx.geometry
import ufl
from mpi4py import MPI
import bempp.api
import numpy as np
```
Next, we set the wavenumber ``k`` and the direction ``d`` of the incoming wave.
```
k = 6.
d = np.array([1., 1., 1])
d /= np.linalg.norm(d)
```
We create a mesh of a cube using DOLFINx. This will be mesh of tetrahedral cells to be used for the interior FEM part of the problem.
```
mesh = dolfinx.UnitCubeMesh(MPI.COMM_WORLD, 10, 10, 10)
```
Next, we make the DOLFINx and Bempp function spaces.
The function ``fenics_to_bempp_trace_data`` will extract the trace space from the DOLFINx space and create the matrix ``trace_matrix``, which maps between the dofs (degrees of freedom) in DOLFINx and Bempp.
```
from bempp.api.external import fenicsx
fenics_space = dolfinx.FunctionSpace(mesh, ("CG", 1))
trace_space, trace_matrix = \
fenicsx.fenics_to_bempp_trace_data(fenics_space)
bempp_space = bempp.api.function_space(trace_space.grid, "DP", 0)
fem_size = fenics_space.dofmap.index_map.size_global
bem_size = bempp_space.global_dof_count
print("FEM dofs: {0}".format(fem_size))
print("BEM dofs: {0}".format(bem_size))
```
We create the boundary operators that we need.
```
identity = bempp.api.operators.boundary.sparse.identity(
trace_space, bempp_space, bempp_space)
mass = bempp.api.operators.boundary.sparse.identity(
bempp_space, bempp_space, trace_space)
double_layer = bempp.api.operators.boundary.helmholtz.double_layer(
trace_space, bempp_space, bempp_space, k)
single_layer = bempp.api.operators.boundary.helmholtz.single_layer(
bempp_space, bempp_space, bempp_space, k)
```
We create the UFL trial function, test function, and define $n$.
```
u = ufl.TrialFunction(fenics_space)
v = ufl.TestFunction(fenics_space)
n = 0.5
```
We make the vectors on the right hand side of the formulation.
```
@bempp.api.complex_callable
def u_inc(x, n, domain_index, result):
result[0] = np.exp(1j * k * np.dot(x, d))
u_inc = bempp.api.GridFunction(bempp_space, fun=u_inc)
# The rhs from the FEM
rhs_fem = np.zeros(fem_size)
# The rhs from the BEM
rhs_bem = u_inc.projections(bempp_space)
# The combined rhs
rhs = np.concatenate([rhs_fem, rhs_bem])
```
We are now ready to create a ``BlockedLinearOperator`` containing all four parts of the discretisation of
$$
\begin{bmatrix}
\mathsf{A}-k^2 \mathsf{M} & -\mathsf{M}_\Gamma\\
\tfrac{1}{2}\mathsf{I}-\mathsf{D} & \mathsf{S}
\end{bmatrix}.
$$
```
from bempp.api.assembly.blocked_operator import BlockedDiscreteOperator
from scipy.sparse.linalg.interface import LinearOperator
blocks = [[None,None],[None,None]]
trace_op = LinearOperator(trace_matrix.shape, lambda x:trace_matrix @ x)
A = fenicsx.FenicsOperator((ufl.inner(ufl.grad(u), ufl.grad(v)) - k**2 * n**2 * ufl.inner(u, v)) * ufl.dx)
blocks[0][0] = A.weak_form()
blocks[0][1] = -trace_matrix.T * mass.weak_form().to_sparse()
blocks[1][0] = (.5 * identity - double_layer).weak_form() * trace_op
blocks[1][1] = single_layer.weak_form()
blocked = BlockedDiscreteOperator(np.array(blocks))
```
Next, we solve the system, then split the solution into the parts associated with $p$ and $\lambda$. For an efficient solve, preconditioning is required.
```
from bempp.api.assembly.discrete_boundary_operator import InverseSparseDiscreteBoundaryOperator
from scipy.sparse.linalg import LinearOperator
# Compute the sparse inverse of the Helmholtz operator.
# Although it is not a boundary operator we can use
# the InverseSparseDiscreteBoundaryOperator class from
# Bempp to turn its LU decomposition into a linear operator.
P1 = InverseSparseDiscreteBoundaryOperator(
blocked[0,0].to_sparse().tocsc())
# For the Laplace slp we use a simple mass matrix preconditioner.
# This is sufficient for smaller low-frequency problems.
P2 = InverseSparseDiscreteBoundaryOperator(
bempp.api.operators.boundary.sparse.identity(
bempp_space, bempp_space, bempp_space).weak_form())
# Create a block diagonal preconditioner object using the Scipy LinearOperator class
def apply_prec(x):
"""Apply the block diagonal preconditioner"""
m1 = P1.shape[0]
m2 = P2.shape[0]
n1 = P1.shape[1]
n2 = P2.shape[1]
res1 = P1.dot(x[:n1])
res2 = P2.dot(x[n1:])
return np.concatenate([res1, res2])
p_shape = (P1.shape[0] + P2.shape[0], P1.shape[1] + P2.shape[1])
P = LinearOperator(p_shape, apply_prec, dtype=np.dtype('complex128'))
# Create a callback function to count the number of iterations
it_count = 0
def count_iterations(x):
global it_count
it_count += 1
from scipy.sparse.linalg import gmres
soln, info = gmres(blocked, rhs, M=P, callback=count_iterations)
soln_fem = soln[:fem_size]
soln_bem = soln[fem_size:]
print("Number of iterations: {0}".format(it_count))
```
Next, we make DOLFINx and Bempp functions from the solution.
```
# Store the real part of the FEM solution
u = dolfinx.Function(fenics_space)
u.vector[:] = np.ascontiguousarray(np.real(soln_fem))
# Solution function with dirichlet data on the boundary
dirichlet_data = trace_matrix * soln_fem
dirichlet_fun = bempp.api.GridFunction(trace_space, coefficients=dirichlet_data)
# Solution function with Neumann data on the boundary
neumann_fun = bempp.api.GridFunction(bempp_space, coefficients=soln_bem)
```
We now evaluate the solution on the slice $z=0.5$ and plot it. For the exterior domain, we use the representation formula
$$
p_\text{s} = \mathcal{D}p-\mathcal{S}\lambda
$$
to evaluate the solution.
```
%matplotlib inline
Nx=200
Ny=200
xmin, xmax, ymin, ymax=[-1,3,-1,3]
plot_grid = np.mgrid[xmin:xmax:Nx*1j,ymin:ymax:Ny*1j]
points = np.vstack((plot_grid[0].ravel(),
plot_grid[1].ravel(),
np.array([0.5]*plot_grid[0].size)))
plot_me = np.zeros(points.shape[1], dtype=np.complex128)
x,y,z = points
bem_x = np.logical_not((x>0) * (x<1) * (y>0) * (y<1) * (z>0) * (z<1))
slp_pot= bempp.api.operators.potential.helmholtz.single_layer(
bempp_space, points[:, bem_x], k)
dlp_pot= bempp.api.operators.potential.helmholtz.double_layer(
trace_space, points[:, bem_x], k)
plot_me[bem_x] += np.exp(1j * k * (points[0, bem_x] * d[0] \
+ points[1, bem_x] * d[1] \
+ points[2, bem_x] * d[2]))
plot_me[bem_x] += dlp_pot.evaluate(dirichlet_fun).flat
plot_me[bem_x] -= slp_pot.evaluate(neumann_fun).flat
fem_points = points[:, np.logical_not(bem_x)].transpose()
tree = dolfinx.geometry.BoundingBoxTree(mesh, 3)
entities = []
for point in fem_points:
entities.append(dolfinx.geometry.compute_closest_entity(tree, point, mesh)[0])
fem_val = u.eval(fem_points, entities)
plot_me[np.logical_not(bem_x)] += fem_val.T[0]
plot_me = plot_me.reshape((Nx, Ny))
plot_me = plot_me.transpose()[::-1]
vmax = max(np.abs(np.real(plot_me.flat)))
# Plot the image
from matplotlib import pyplot as plt
fig=plt.figure(figsize=(10, 8))
plt.imshow(np.real(plot_me), extent=[xmin, xmax, ymin, ymax],
cmap=plt.get_cmap("bwr"), vmin=-vmax, vmax=vmax)
plt.xlabel('x')
plt.ylabel('y')
plt.colorbar()
plt.title("FEM-BEM Coupling for Helmholtz")
plt.show()
```
# Welcome to the Perceptron demo page
Most of the inspiration comes from the book *Grokking Machine Learning* from Manning, a really good book to get into machine learning.
```
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import numpy as np
import random
import turicreate as tc
data = pd.DataFrame({
"sentence": ["tjilp tjilp tjilp","mwah mwah", "tjilp mwah tjilp", "tjilp mwah mwah", "mwah mwah mwah tjilp", "tjilp mwah tjilp mwah tjilp"],
"tjilp":[3,0,2,1,1,3],
"mwah": [0,2,1,2,3,2],
"mood": ["Happy", "Sad", "Happy", "Sad", "Sad", "Happy"]
})
data
def plot_sentiment(happy_data, sad_data, line = []):
tick_spacing = 1
fig, ax = plt.subplots(1,1)
ax.scatter(happy_data["tjilp"], happy_data["mwah"], c='g',marker='o', label='Happy')
ax.scatter(sad_data["tjilp"], sad_data["mwah"], c='r',marker='x', label='Sad')
if line and len(line) > 1:
ax.plot(line[0],line[1])
ax.xaxis.set_major_locator(ticker.MultipleLocator(tick_spacing))
plt.title('Happy or Sad sentence')
plt.ylabel('Mwah')
plt.xlabel('Tjilp')
plt.rcParams["figure.figsize"] = (8,6)
plt.legend()
plt.grid()
plt.show()
happy_sentence = data[data["mood"] == "Happy"]
sad_sentence = data[data["mood"] == "Sad"]
plot_sentiment(happy_sentence, sad_sentence)
data = pd.DataFrame({
"sentence": ["tjilp","mwah mwah", "tjilp mwah tjilp", "tjilp mwah mwah", "mwah mwah mwah tjilp", "tjilp mwah tjilp mwah", "mwah mwah tjilp tjilp tjilp", "mwah mwah mwah tjilp tjilp"],
"tjilp":[1,0,2,1,1,2,3,2],
"mwah": [0,2,1,2,3,2,2,3],
"mood": ["Sad", "Sad", "Sad", "Sad", "Happy", "Happy", "Happy", "Happy"]
})
happy_sentence = data[data["mood"] == "Happy"]
sad_sentence = data[data["mood"] == "Sad"]
plot_sentiment(happy_sentence, sad_sentence)
data = pd.DataFrame({
"tjilp":[1,0,2,1,1,2,3,2,4,2,3,4,4,3],
"mwah": [0,2,1,2,3,2,2,3,4,4,1,2,0,3],
"mood": ["Sad", "Sad", "Sad", "Sad", "Happy", "Happy", "Sad", "Happy", "Happy", "Happy", "Sad", "Happy", "Happy","Sad"]
})
happy_sentence = data[data["mood"] == "Happy"]
sad_sentence = data[data["mood"] == "Sad"]
plot_sentiment(happy_sentence, sad_sentence)
data["label"]=data["mood"].apply(lambda x: 1 if x == "Happy" else 0)
data
features = data[["tjilp","mwah"]].to_numpy()
labels = data["label"].to_numpy()
def score(weights, bias, features):
return features.dot(weights) + bias
def step(x):
if x >= 0:
return 1
else:
return 0
def prediction(weights, bias, features):
return step(score(weights, bias, features))
def error(weights, bias, features, label):
pred = prediction(weights, bias, features)
if (pred == label):
return 0
else:
return np.abs(score(weights, bias, features))
def mean_perceptron_error(weights, bias, features, labels):
total_error = 0
for i in range(len(features)):
total_error += error(weights, bias, features[i], labels[i]) # do you understand why we take i for some parameters?
return total_error / len(features)
def perceptron_trick(weights, bias, features, label, learning_rate = 0.01):
pred = prediction(weights, bias, features)
for i in range(len(weights)):
weights[i] += (label - pred)*features[i]*learning_rate
bias += (label - pred) * learning_rate
return weights, bias
def perceptron_algorithm(features, labels, learning_rate = 0.01, epochs = 200):
weights = [1.0 for i in range(len(features[0]))]
bias = 0.0
errors = []
for epoch in range(epochs):
error = mean_perceptron_error(weights, bias, features, labels)
errors.append(error)
i = random.randint(0, len(features) - 1) # Pick a random point in our dataset
        weights, bias = perceptron_trick(weights, bias, features[i], labels[i], learning_rate)  # pass the learning rate through to the update
return weights, bias, errors
found_weights, found_bias, found_errors = perceptron_algorithm(features, labels)
```
We need the line formula, which is $w_1 x_1 + w_2 x_2 + \text{bias} = 0$, so $x_2 = (-w_1 x_1 - \text{bias})/w_2$.
```
def calculate_x2 (x1, weights, bias):
return (-1*weights[0] * x1 - bias)/weights[1]
x_2_4 = calculate_x2(4,found_weights, found_bias)
x_2_0 = calculate_x2(0, found_weights, found_bias)
plot_sentiment(happy_sentence, sad_sentence, [[0, 4],[x_2_0, x_2_4]])
prediction(found_weights, found_bias, np.array([1,3]))
datadict = {'tjilp': features[:,0], 'mwah':features[:,1], 'prediction': labels}
datatc = tc.SFrame(datadict)
perceptron = tc.logistic_classifier.create(datatc, target='prediction')
perceptron.coefficients
new_sentence = tc.SFrame({'tjilp':[3], 'mwah':[3]})
perceptron.predict(new_sentence)
perceptron.coefficients[1]['value']
tc_weights = np.array([perceptron.coefficients[2]['value'],perceptron.coefficients[1]['value']])
tc_bias = perceptron.coefficients[0]['value']
tc_x_2_4 = calculate_x2(4, tc_weights, tc_bias)
tc_x_2_0 = calculate_x2(0, tc_weights, tc_bias)
plot_sentiment(happy_sentence, sad_sentence, [[0, 4],[tc_x_2_0, tc_x_2_4]])
```
# Baseline measures
Step 1. Import packages
The sub-package used to compute the baseline measures is aif360.sklearn. This package allows users to apply the bias metrics on their own datasets. For more information, please refer to
https://github.com/Trusted-AI/AIF360/tree/master/aif360/sklearn.
```
import numpy as np
import pandas as pd
import random
from sklearn.preprocessing import LabelEncoder
!pip install 'aif360[OptimPreproc]'
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV, SGDClassifier
from sklearn.model_selection import train_test_split, KFold, cross_val_score
from aif360.sklearn.metrics import consistency_score,generalized_entropy_error,generalized_entropy_index,theil_index,coefficient_of_variation
from aif360.sklearn.metrics import statistical_parity_difference,disparate_impact_ratio,equal_opportunity_difference,average_odds_difference
from aif360.sklearn.datasets import standardize_dataset, to_dataframe
from sklearn.neighbors import NearestNeighbors, KNeighborsClassifier
```
Preprocess dataset
```
df = pd.read_csv('german.data', na_values='?', header=None, sep=' ')
cols = ['Status_of_existing_checking_account','Duration_in_month', 'Credit_history', 'Purpose', 'Credit_amount', 'Savings_accountbonds', 'Present_employment_since', 'Installment_rate_in_percentage_of_disposable_income', 'Personal_status_and_sex', 'Other_debtorsguarantors', 'Present_residence_since', 'Property', 'Age_in_years', 'Other_installment_plans', 'Housing', 'Number_of_existing_credits_at_this_bank', 'Job', 'Number_of_people_being_liable_to_provide_maintenance_for', 'Telephone', 'Foreign_worker', 'Creditworthiness']
df.columns = cols
df_raw = df.copy()  # keep an unscaled copy; the raw 'Age_in_years' values are needed later to undo the normalisation
# Since the numeric variable 'Number_of_people_being_liable_to_provide_maintenance_for' is dichotomous, it's going to be treated as a nominal variable.
df['Number_of_people_being_liable_to_provide_maintenance_for'] = df['Number_of_people_being_liable_to_provide_maintenance_for'].astype('object')
#df['Creditworthiness'] = df['Creditworthiness'].astype('object')
# specify numeric and nominal columns
numeric = [False if df[col].dtype == 'object' else True for col in df]
nominal = [True if df[col].dtype == 'object' else False for col in df]
# normalize numeric variables
num=df.loc[:,numeric].values[:,:-1] # exclude target variable
scaled=np.subtract(num,np.min(num,axis=0))/np.subtract(np.max(num,axis=0),np.min(num,axis=0))
df[df.columns[numeric][:-1]] = pd.DataFrame(scaled, columns=df.columns[numeric][:-1])
# recode 'Personal_status_and_sex' based on AIF360's preprocessing
df['Personal_status_and_sex'] = np.where(df['Personal_status_and_sex'] == 'A92', 'female', 'male')
# label encode nominal variables
lb = LabelEncoder()
for col in df[df.columns[nominal]]:
df[col] = lb.fit_transform(df[col])
```
Step 2. Preprocess the dataset based on AIF360's guidelines and initialize objects.
For more information about preprocessing please refer to https://aif360.readthedocs.io/en/latest/modules/generated/aif360.sklearn.datasets.standardize_dataset.html#aif360.sklearn.datasets.standardize_dataset.
```
# preprocess data following aif360.sklearn instructions
X,y = standardize_dataset(df,prot_attr=['Personal_status_and_sex','Age_in_years'], target = 'Creditworthiness')
```
Step 3. Compute individual and group fairness baseline measures
**Individual fairness metrics**:
- Consistency score: measures how similar the labels are for similar instances
- Generalised entropy error: measures inequality over a population by comparing the predictions made by a classifier with the ground truth (the underlying formula is sketched below). To that end, a LogisticRegression is used; note that no train-test split is made and no hyperparameter tuning is performed.
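For reference, the generalised entropy index that underlies this error measure (with the default $\alpha = 2$, applied to benefit values $b_i = \hat{y}_i - y_i + 1$) is commonly written as
$$
\mathcal{E} = \frac{1}{2n}\sum_{i=1}^{n}\left[\left(\frac{b_i}{\mu}\right)^{2} - 1\right],
$$
where $\mu$ is the mean benefit. This is included only as a reminder of what the library call computes; the notebook itself simply calls `generalized_entropy_error`.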
First, we compute measures using all attributes in the dataset. Second, we exclude the attribute gender from the dataset and compute measures once more.
```
# Dataset names: German, Compas, Titanic, Synthetic3
dataset_name = 'German'
prot1 = 'Personal_status_and_sex'
prot2 = 'Age_in_years'
target = 'Creditworthiness'
pos_label = 1
# initialize objects
dataset = [] # dataset name
consistency = [] # consistency scores before and after excluding protected features
generalized_entropy = [] # GEE before and after excluding protected features
# Consistency score including all attributes in the dataset
name = dataset_name+'_all_attributes'
dataset.append(name) #
X,y = standardize_dataset(df,prot_attr=[prot1,prot2],target=target)
y = y.astype('float64')
consistency.append(consistency_score(X, y))
neigh = KNeighborsClassifier(n_neighbors=5).fit(X, y.astype('int64'))
#print(neigh.score(X,y.astype('int64')))
# Consistency score excluding a protected attribute from the dataset
name = dataset_name+'_excl_'+prot1
dataset.append(name)
X,y = standardize_dataset(df,prot_attr=[prot1,prot2],dropcols=[prot1],target=target)
y = y.astype('float64')
consistency.append(consistency_score(X, y))
neigh = KNeighborsClassifier(n_neighbors=5).fit(X, y)
#print(neigh.score(X,y))
# Consistency score excluding the other protected attribute from the dataset
name = dataset_name+'_excl_'+prot2
dataset.append(name)
# excl prot2
X,y = standardize_dataset(df,prot_attr=[prot1,prot2],
dropcols=[prot2],target=target)
y = y.astype('float64')
consistency.append(consistency_score(X, y))
neigh = KNeighborsClassifier(n_neighbors=5).fit(X, y)
#print(neigh.score(X,y))
# Generalized Entropy Error including all attributes in the dataset
X,y = standardize_dataset(df,prot_attr=[prot1,prot2],target=target)
y = y.astype('float64')
model = LogisticRegression(max_iter=1000,random_state=1).fit(X,y)
y_pred = model.predict(X)
#print(model.score(X,y))
generalized_entropy.append(generalized_entropy_error(y, y_pred,pos_label=pos_label))
# Generalized Entropy Error excluding a protected attribute from the dataset
X,y = standardize_dataset(df,prot_attr=[prot1,prot2],dropcols=[prot1],target=target)
y = y.astype('float64')
model = LogisticRegression(max_iter=1000,random_state=1)
model.fit(X,y)
y_pred = model.predict(X)
#print(model.score(X,y))
generalized_entropy.append(generalized_entropy_error(y, y_pred,pos_label=pos_label))
# Generalized Entropy Error excluding another protected attribute from the dataset
X,y = standardize_dataset(df,prot_attr=[prot1,prot2],dropcols=[prot2],target=target)
y = y.astype('float64')
model = LogisticRegression(max_iter=1000,random_state=1)
model.fit(X,y)
y_pred = model.predict(X)
#print(model.score(X,y))
generalized_entropy.append(generalized_entropy_error(y, y_pred,pos_label=pos_label))
```
Finally, we gather all scores in a table.
```
baseline = pd.concat((np.round(pd.Series(consistency, name='Consistency'), 3), np.round(pd.Series(generalized_entropy, name='GEE'), 3)), axis=1)
baseline.index = dataset
baseline
```
## Group Fairness
**Group fairness metrics** (their standard definitions are sketched after the list):
- Statistical parity difference
- Disparate impact
- Equal opportunity difference
- Average odds difference
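As a reminder of what these four quantities measure, here is a minimal sketch of their usual definitions in plain NumPy (unprivileged minus, or divided by, privileged). It is illustrative only: the names `y`, `y_pred` and `g` are assumptions, not part of this notebook, and the computation below uses AIF360's own functions.
```
import numpy as np

def group_rates(y, y_pred, g, group):
    """Selection rate, TPR and FPR for one group (g is a 0/1 group indicator)."""
    mask = (g == group)
    selection_rate = y_pred[mask].mean()      # P(y_hat = 1 | group)
    tpr = y_pred[mask & (y == 1)].mean()      # true positive rate
    fpr = y_pred[mask & (y == 0)].mean()      # false positive rate
    return selection_rate, tpr, fpr

def group_metrics(y, y_pred, g):
    """Compare the unprivileged group (g == 0) against the privileged group (g == 1)."""
    sr_u, tpr_u, fpr_u = group_rates(y, y_pred, g, 0)
    sr_p, tpr_p, fpr_p = group_rates(y, y_pred, g, 1)
    return {
        "statistical_parity_difference": sr_u - sr_p,
        "disparate_impact": sr_u / sr_p,
        "equal_opportunity_difference": tpr_u - tpr_p,
        "average_odds_difference": 0.5 * ((fpr_u - fpr_p) + (tpr_u - tpr_p)),
    }
```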
```
dataset_name = 'German'
prot1 = 'Personal_status_and_sex'
prot2 = 'Age_in_years'
target = 'Creditworthiness'
pos_label = 1
# initialize objects
dataset = [] # scenario
stat_par = []
disp_im = []
eq_opp = []
ave_odds = []
```
Group fairness metrics require numeric features to be discretized. Based on the literature, 'Age' is discretized in the following manner: people older than 25 years are 'old' (0) and people aged 25 or younger are 'young' (1).
```
# preprocess data following aif360.sklearn instructions
X,y = standardize_dataset(df,prot_attr=[prot1,prot2],target=target)
y = y.astype('float')
# discretize age
# undo the min-max scaling using the raw (unscaled) ages kept in df_raw
age_in_years = df.Age_in_years * (df_raw.Age_in_years.max() - df_raw.Age_in_years.min()) + df_raw.Age_in_years.min()
X['Age_in_years'] = age_in_years.values
X.Age_in_years = np.where(X.Age_in_years>25,int(0),int(1)) # only for German credit
model = LogisticRegression(max_iter=1000,random_state=1)
model.fit(X,y)
y_pred = model.predict(X)
```
We compute the four group fairness measures by setting `prot_attr` parameter to the index of the protected attribute.
First, we compute the metrics focusing on gender. `priv_group` is 1, i.e. males.
```
dataset.append('Personal_status_and_sex/female')
stat_par.append(statistical_parity_difference(y,y_pred,prot_attr=prot1,pos_label=pos_label,priv_group=1))
disp_im.append(disparate_impact_ratio(y,y_pred,prot_attr=prot1,pos_label=pos_label,priv_group=1))
eq_opp.append(equal_opportunity_difference(y,y_pred,prot1,pos_label=pos_label,priv_group=1))
ave_odds.append(average_odds_difference(y,y_pred,prot1,pos_label=pos_label,priv_group=1))
```
Second, we compute the metrics focusing on age. `priv_group` is 0, i.e. people older than 25 years old.
```
dataset.append('Age_in_years/young')
stat_par.append(statistical_parity_difference(y,y_pred,prot_attr=prot2,pos_label=pos_label,priv_group=0))
disp_im.append(disparate_impact_ratio(y,y_pred,prot_attr=prot2,pos_label=pos_label,priv_group=0))
eq_opp.append(equal_opportunity_difference(y,y_pred,prot_attr=prot2,pos_label=pos_label,priv_group=0))
ave_odds.append(average_odds_difference(y,y_pred,prot_attr=prot2,pos_label=pos_label,priv_group=0))
```
Finally, we merge the two.
```
pd.DataFrame(np.array([stat_par, disp_im, eq_opp, ave_odds]).T,
columns = ['Statistical Parity', 'Disparate Impact',
'Equal Opportunity', 'Average Odds'], index = dataset)
```
# Deserialisation
YAML (a recursive acronym for "YAML Ain't Markup Language") is a human-readable data-serialization language.
We're going to slightly modify our previous model and look at how to serialise it to YAML.
```
class Element:
def __init__(self, symbol):
self.symbol = symbol
class Molecule:
def __init__(self):
self.elements = {} # Map from element to number of that element in the molecule
def add_element(self, element, number):
self.elements[element] = number
def to_struct(self):
return {x.symbol: self.elements[x] for x in self.elements}
class Reaction:
def __init__(self):
self.reactants = {} # Map from reactants to stoichiometries
self.products = {} # Map from products to stoichiometries
def add_reactant(self, reactant, stoichiometry):
self.reactants[reactant] = stoichiometry
def add_product(self, product, stoichiometry):
self.products[product] = stoichiometry
def to_struct(self):
return {
"reactants": [x.to_struct() for x in self.reactants],
"products": [x.to_struct() for x in self.products],
"stoichiometries": list(self.reactants.values()) + list(self.products.values()),
}
class System:
def __init__(self):
self.reactions = []
def add_reaction(self, reaction):
self.reactions.append(reaction)
def to_struct(self):
return [x.to_struct() for x in self.reactions]
c = Element("C")
o = Element("O")
h = Element("H")
co2 = Molecule()
co2.add_element(c, 1)
co2.add_element(o, 2)
h2o = Molecule()
h2o.add_element(h, 2)
h2o.add_element(o, 1)
o2 = Molecule()
o2.add_element(o, 2)
h2 = Molecule()
h2.add_element(h, 2)
glucose = Molecule()
glucose.add_element(c, 6)
glucose.add_element(h, 12)
glucose.add_element(o, 6)
combustion_glucose = Reaction()
combustion_glucose.add_reactant(glucose, 1)
combustion_glucose.add_reactant(o2, 6)
combustion_glucose.add_product(co2, 6)
combustion_glucose.add_product(h2o, 6)
combustion_hydrogen = Reaction()
combustion_hydrogen.add_reactant(h2, 2)
combustion_hydrogen.add_reactant(o2, 1)
combustion_hydrogen.add_product(h2o, 2)
s = System()
s.add_reaction(combustion_glucose)
s.add_reaction(combustion_hydrogen)
s.to_struct()
import yaml
print(yaml.dump(s.to_struct()))
```
# Deserialising non-normal data structures
We can see that this data structure, although seemingly
sensible, is horribly **non-normal**.
* The stoichiometries information requires us to align each one to the corresponding molecule in order.
* Each element is described multiple times: we will have to ensure that each mention of `C` comes back to the same constructed element object.
```
class YamlDeSerialisingSystem:
def __init__(self):
self.elements = {}
self.molecules = {}
def add_element(self, candidate):
if candidate not in self.elements:
self.elements[candidate] = Element(candidate)
return self.elements[candidate]
def add_molecule(self, candidate):
if tuple(candidate.items()) not in self.molecules:
m = Molecule()
for symbol, number in candidate.items():
m.add_element(self.add_element(symbol), number)
self.molecules[tuple(candidate.items())] = m
return self.molecules[tuple(candidate.items())]
def parse_system(self, system):
s = System()
for reaction in system:
r = Reaction()
stoichiometries = reaction["stoichiometries"]
for molecule in reaction["reactants"]:
r.add_reactant(self.add_molecule(molecule), stoichiometries.pop(0))
for molecule in reaction["products"]:
r.add_product(self.add_molecule(molecule), stoichiometries.pop(0))
s.add_reaction(r)
return s
de_serialiser = YamlDeSerialisingSystem()
round_trip = de_serialiser.parse_system(s.to_struct())
round_trip.to_struct()
de_serialiser.elements
de_serialiser.molecules
list(round_trip.reactions[0].reactants.keys())[1].to_struct()
list(round_trip.reactions[1].reactants.keys())[1].to_struct()
```
In order to de-serialise this data, we had to construct a unique key to distinguish repeated mentions of the same identical item.
Effectively, we ended up choosing primary keys for our datatypes:
```
list(de_serialiser.molecules.keys())
```
Remember that a combination of columns which uniquely defines an item is a valid key; there is a direct correspondence between a candidate key in the database sense and a "hashable" data structure that can be used as a key in a `dict`.
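For instance (a tiny illustration, separate from the model code above), a `dict` cannot itself be used as a dictionary key, but a tuple of its items can:
```
# A dict is unhashable, so it cannot key another dict;
# freezing it into a tuple of (key, value) pairs makes it hashable.
water = {"H": 2, "O": 1}
lookup = {}
# lookup[water] = "water"          # would raise TypeError: unhashable type: 'dict'
lookup[tuple(water.items())] = "water"
print(lookup[(("H", 2), ("O", 1))])
```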
Note that to make this example even reasonably doable, we had to exclude additional data from the objects (mass, rate, etc.).
# Normalising a YAML structure
To make this structure easier to de-serialise, we can make a normalised file-format, by defining primary keys (hashable types) for each entity on write:
```
class YamlSavingSystem:
def __init__(self):
self.elements = set()
self.molecules = set()
def element_key(self, element):
return element.symbol
def molecule_key(self, molecule):
key = ""
for element, number in molecule.elements.items():
key += element.symbol
key += str(number)
return key
def save(self, system):
for reaction in system.reactions:
for molecule in reaction.reactants:
self.molecules.add(molecule)
for element in molecule.elements:
self.elements.add(element)
for molecule in reaction.products:
self.molecules.add(molecule)
for element in molecule.elements:
self.elements.add(element)
result = {
"elements": [self.element_key(element) for element in self.elements],
"molecules": {
self.molecule_key(molecule): {
self.element_key(element): number
for element, number in molecule.elements.items()
}
for molecule in self.molecules
},
"reactions": [
{
"reactants": {
self.molecule_key(reactant): stoich
for reactant, stoich in reaction.reactants.items()
},
"products": {
self.molecule_key(product): stoich
for product, stoich in reaction.products.items()
},
}
for reaction in system.reactions
],
}
return result
saver = YamlSavingSystem()
print(yaml.dump(saver.save(s)))
```
We can see that to make an easily parsed file format, without having to
guess-recognise repeated entities based on their names
(which is highly subject to data entry error), we effectively recover
the same tables as found for the database model.
An alternative is to use a simple integer for such a primary key:
```
class YamlIntegerKeySavingSystem:
def __init__(self):
self.elements = {}
self.molecules = {}
def add_element(self, element):
if element not in self.elements:
self.elements[element] = len(self.elements)
return self.elements[element]
def add_molecule(self, molecule):
if molecule not in self.molecules:
self.molecules[molecule] = len(self.molecules)
return self.molecules[molecule]
def element_key(self, element):
return self.elements[element]
def molecule_key(self, molecule):
return self.molecules[molecule]
def save(self, system):
for reaction in system.reactions:
for molecule in reaction.reactants:
self.add_molecule(molecule)
for element in molecule.elements:
self.add_element(element)
for molecule in reaction.products:
self.add_molecule(molecule)
for element in molecule.elements:
self.add_element(element)
result = {
"elements": [element.symbol for element in self.elements],
"molecules": {
self.molecule_key(molecule): {
self.element_key(element): number
for element, number in molecule.elements.items()
}
for molecule in self.molecules
},
"reactions": [
{
"reactants": {
self.molecule_key(reactant): stoich
for reactant, stoich in reaction.reactants.items()
},
"products": {
self.molecule_key(product): stoich
for product, stoich in reaction.products.items()
},
}
for reaction in system.reactions
],
}
return result
saver = YamlIntegerKeySavingSystem()
print(yaml.dump(saver.save(s)))
```
## Reference counting
The above approach of using a dictionary to determine the integer keys
for objects is a bit clunky.
Another good approach is to use counted objects, either via a static member (a small sketch of that variant follows the example below) or by using a factory pattern:
```
class Element:
def __init__(self, symbol, id):
self.symbol = symbol
self.id = id
class Molecule:
def __init__(self, id):
self.elements = {} # Map from element to number of that element in the molecule
self.id = id
def add_element(self, element, number):
self.elements[element] = number
def to_struct(self):
return {x.symbol: self.elements[x] for x in self.elements}
class Reaction:
def __init__(self):
self.reactants = {} # Map from reactants to stoichiometries
self.products = {} # Map from products to stoichiometries
def add_reactant(self, reactant, stoichiometry):
self.reactants[reactant] = stoichiometry
def add_product(self, product, stoichiometry):
self.products[product] = stoichiometry
def to_struct(self):
return {
"reactants": [x.to_struct() for x in self.reactants],
"products": [x.to_struct() for x in self.products],
"stoichiometries": list(self.reactants.values())
+ list(self.products.values()),
}
class System: # This will be our factory
def __init__(self):
self.reactions = []
self.elements = []
self.molecules = []
def add_element(self, symbol):
new_element = Element(symbol, len(self.elements))
self.elements.append(new_element)
return new_element
def add_molecule(self):
new_molecule = Molecule(len(self.molecules))
self.molecules.append(new_molecule)
return new_molecule
def add_reaction(self):
new_reaction = Reaction()
self.reactions.append(new_reaction)
return new_reaction
def save(self):
result = {
"elements": [element.symbol for element in self.elements],
"molecules": {
molecule.id: {
element.id: number for element, number in molecule.elements.items()
}
for molecule in self.molecules
},
"reactions": [
{
"reactants": {
reactant.id: stoich
for reactant, stoich in reaction.reactants.items()
},
"products": {
product.id: stoich
for product, stoich in reaction.products.items()
},
}
for reaction in self.reactions
],
}
return result
s2 = System()
c = s2.add_element("C")
o = s2.add_element("O")
h = s2.add_element("H")
co2 = s2.add_molecule()
co2.add_element(c, 1)
co2.add_element(o, 2)
h2o = s2.add_molecule()
h2o.add_element(h, 2)
h2o.add_element(o, 1)
o2 = s2.add_molecule()
o2.add_element(o, 2)
h2 = s2.add_molecule()
h2.add_element(h, 2)
glucose = s2.add_molecule()
glucose.add_element(c, 6)
glucose.add_element(h, 12)
glucose.add_element(o, 6)
combustion_glucose = s2.add_reaction()
combustion_glucose.add_reactant(glucose, 1)
combustion_glucose.add_reactant(o2, 6)
combustion_glucose.add_product(co2, 6)
combustion_glucose.add_product(h2o, 6)
combustion_hydrogen = s2.add_reaction()
combustion_hydrogen.add_reactant(h2, 2)
combustion_hydrogen.add_reactant(o2, 1)
combustion_hydrogen.add_product(h2o, 2)
s2.save()
print(yaml.dump(s2.save()))
```
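For completeness, the "static member" variant mentioned above keeps the counter on the class itself rather than in a factory. A minimal sketch (not used in the rest of this notebook; the class name is made up for illustration) might look like this:
```
# Sketch: assign ids from a class-level ("static") counter instead of a factory.
class CountedElement:
    _next_id = 0  # shared by all instances of the class

    def __init__(self, symbol):
        self.symbol = symbol
        self.id = CountedElement._next_id
        CountedElement._next_id += 1

c = CountedElement("C")
o = CountedElement("O")
print(c.id, o.id)  # 0 1
```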
## Binary file formats
Now that we're moving toward a numerically-based data structure, using integers for object keys, we should think about binary serialisation.
Binary file formats are much smaller than human-readable, text-based formats, which matters when handling really big datasets.
One can compress a textual file format, of course, and with good compression algorithms this will be similar in size to the binary file (cf. discussions of Shannon information density!), but this has performance implications.
A hand-designed binary format is fast and small, at the cost of human readability.
The problem with binary file formats is that, lacking complex data structures, one needs to supply the *length* of an item before that item:
```
class FakeBinarySavingSystem:
# Pretend binary-style writing to a list to make it easier to read at first.
def save(self, system, buffer):
buffer.append(len(system.elements))
for element in system.elements:
buffer.append(element.symbol)
buffer.append(len(system.molecules))
for molecule in system.molecules:
buffer.append(len(molecule.elements))
for element, number in molecule.elements.items():
buffer.append(element.id)
buffer.append(number)
buffer.append(len(system.reactions))
for reaction in system.reactions:
buffer.append(len(reaction.reactants))
for reactant, stoich in reaction.reactants.items():
buffer.append(reactant.id)
buffer.append(stoich)
buffer.append(len(reaction.products))
for product, stoich in reaction.products.items():
buffer.append(product.id)
buffer.append(stoich)
import io
arraybuffer = []
FakeBinarySavingSystem().save(s2, arraybuffer)
arraybuffer
```
Deserialisation is left **as an exercise for the reader** :).
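Here is one possible sketch of that exercise (our own; `FakeBinaryLoadingSystem` is a made-up name, not part of the original notebook). It assumes the buffer layout produced by `FakeBinarySavingSystem` above, where every group of items is preceded by its length:
```
class FakeBinaryLoadingSystem:
    # Hypothetical counterpart to FakeBinarySavingSystem: each stored length
    # tells us how many items to read back next.
    def load(self, buffer):
        data = iter(buffer)
        system = System()
        for _ in range(next(data)):            # number of elements
            system.add_element(next(data))     # element symbol
        for _ in range(next(data)):            # number of molecules
            molecule = system.add_molecule()
            for _ in range(next(data)):        # number of (element id, count) pairs
                element_id = next(data)
                number = next(data)
                molecule.add_element(system.elements[element_id], number)
        for _ in range(next(data)):            # number of reactions
            reaction = system.add_reaction()
            for _ in range(next(data)):        # reactants
                molecule_id = next(data)
                stoich = next(data)
                reaction.add_reactant(system.molecules[molecule_id], stoich)
            for _ in range(next(data)):        # products
                molecule_id = next(data)
                stoich = next(data)
                reaction.add_product(system.molecules[molecule_id], stoich)
        return system


loaded = FakeBinaryLoadingSystem().load(arraybuffer)
loaded.save() == s2.save()  # round trip should reproduce the same structure
```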
## Endian-robust binary file formats
Having prepared our data as a sequence of values, each of which can be recorded in a single byte, we might think a binary file format on disk is as simple as saving each number in one byte:
```
# First, turn symbol characters to equivalent integers (ascii)
intarray = [x.encode("ascii")[0] if type(x) == str else x for x in arraybuffer]
intarray
bytearray(intarray)
with open("system.mol", "bw") as binfile:
binfile.write(bytearray(intarray))
```
However, this runs into an unfortunate problem if we end up with numbers large enough to need more than one byte per integer, or if we want to represent floats: different computer designs put the most-significant bytes of a multi-byte integer or float at the beginning or the end ('big endian' or 'little endian' data).
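As a small aside (our illustration, not part of the file format above), Python's `struct` module makes the byte-order difference visible explicitly:
```
import struct

value = 259  # needs two bytes: 0x01 and 0x03
print(struct.pack("<i", value))  # little-endian: least-significant byte first
print(struct.pack(">i", value))  # big-endian: most-significant byte first
# Reading back with the wrong convention silently gives a different number:
print(struct.unpack(">i", struct.pack("<i", value)))
```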
To get around this, we need to use a portable standard for making binary files.
One possible choice is **XDR** (standing for eXternal Data Representation). XDR is a standard data serialization format that accounts for endian differences between systems.
```
import xdrlib
class XDRSavingSystem(System):
def __init__(self, system):
# Shallow Copy constructor
self.elements = system.elements
self.reactions = system.reactions
self.molecules = system.molecules
self.buffer = xdrlib.Packer()
def _pack_pair(self, item):
self.buffer.pack_int(item[0].id)
self.buffer.pack_int(item[1])
def _pack_molecule(self, mol):
self.buffer.pack_array(mol.elements.items(), self._pack_pair)
def _pack_reaction(self, reaction):
self.buffer.pack_array(reaction.reactants.items(), self._pack_pair)
self.buffer.pack_array(reaction.products.items(), self._pack_pair)
def save(self):
el_symbols = list(map(lambda x: x.symbol.encode("utf-8"), self.elements))
# Note that pack_array AUTOMATICALLY packs the length of the array first!
self.buffer.pack_array(el_symbols, self.buffer.pack_string)
self.buffer.pack_array(self.molecules, self._pack_molecule)
self.buffer.pack_array(self.reactions, self._pack_reaction)
return self.buffer
xdrsys = XDRSavingSystem(s2)
xdrbuffer = xdrsys.save()
xdrbuffer.get_buffer()
```
## A higher level approach to binary file formats: HDF5
This was quite painful. We've shown it to you because it is very likely you will encounter this kind of unpleasant binary file format in your work.
However, the recommended approach to building binary file formats is to use HDF5 (Hierarchical Data Format), a much higher level binary file format.
HDF5's approach requires you to represent your system in terms of high-dimensional matrices, like NumPy arrays.
It then saves these, and handles all the tedious number-of-field management for you.
```
import h5py
import numpy as np
class HDF5SavingSystem(System):
def __init__(self, system):
# Shallow Copy constructor
self.elements = system.elements
self.reactions = system.reactions
self.molecules = system.molecules
def element_symbols(self):
return list(map(lambda x: x.symbol.encode("ascii"), self.elements))
def molecule_matrix(self):
molecule_matrix = np.zeros((len(self.elements), len(self.molecules)), dtype=int)
for molecule in self.molecules:
for element, n in molecule.elements.items():
molecule_matrix[element.id, molecule.id] = n
return molecule_matrix
def reaction_matrix(self):
reaction_matrix = np.zeros(
(len(self.molecules), len(self.reactions)), dtype=int
)
for i, reaction in enumerate(self.reactions):
for reactant, n in reaction.reactants.items():
reaction_matrix[reactant.id, i] = -1 * n
for product, n in reaction.products.items():
reaction_matrix[product.id, i] = n
return reaction_matrix
def write(self, filename):
hdf = h5py.File(filename, "w")
string_type = h5py.special_dtype(vlen=bytes)
hdf.create_dataset(
"symbols", (len(self.elements), 1), string_type, self.element_symbols()
)
hdf.create_dataset("molecules", data=self.molecule_matrix())
hdf.create_dataset("reactions", data=self.reaction_matrix())
hdf.close()
saver = HDF5SavingSystem(s2)
saver.element_symbols()
saver.molecule_matrix()
saver.reaction_matrix()
saver.write("foo.hdf5")
```
Note that this binary representation is *not* human readable at all.
```
%%bash
# Read the first 100 characters from the file
head -c 100 foo.hdf5
import h5py
hdf_load = h5py.File("foo.hdf5")
np.array(hdf_load["reactions"])
```
Using a `sparse matrix` storage would be even better here, but we don't have time for that!
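For completeness, here is a brief hedged sketch (assuming `scipy` is installed) of how the mostly-zero reaction matrix could be stored sparsely:
```
from scipy import sparse

# Compressed sparse column representation of the mostly-zero reaction matrix
sparse_reactions = sparse.csc_matrix(saver.reaction_matrix())
print(sparse_reactions.nnz, "stored entries instead of",
      sparse_reactions.shape[0] * sparse_reactions.shape[1])
sparse.save_npz("reactions_sparse.npz", sparse_reactions)
sparse.load_npz("reactions_sparse.npz").toarray()
```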
# Python for Data Science
> Applications and Practices
郭耀仁 Tony Yao-Jen Kuo, <tony@kyosei.ai>
## Find slides at
- Find slides at: <http://bit.ly/2Ua9aOD>
- Find notebook at: <http://bit.ly/2HF9qiU>
## TL; DR
> Tons of tools and programming languages are joining the data science ecosystem. Unlike its data-centric peers, Python is a general-purpose programming language, and it is now ranked as the go-to choice for data science. According to the 2018 [Kaggle](https://www.kaggle.com) ML & DS Survey, Python is extremely important in visualization and machine learning. Besides Python, a deep understanding of RDBMS (SQL and database management) and software development (Git) could bring you to the next level.
## Let's talk about ...
- About Me
- Applications
- Practices
## About Me
```
yao_jen_kuo = {
"name": "郭耀仁",
"organization": "Kyosei.ai",
"loves": ["Data Science", "Marathon", "Ping pong"],
"teaching": ["台大資訊系統訓練班", "資策會", "中華電信訓練學院"],
"books": {
"輕鬆學習 R 語言": "https://www.datainpoint.com/r-essentials/",
"R 語言使用者的 Python 學習筆記": "https://ithelp.ithome.com.tw/users/20103511/ironman/1077"
}
}
```

Source: [iT 邦幫忙](https://ithelp.ithome.com.tw/ironman/winner-list)
## Blogging
- Datainpoint
- <https://www.datainpoint.com/>
- <https://medium.com/datainpoint>
- [Pyradise](https://medium.com/pyradise)
## Meetups
- Pyradise X AWS
- [TensorFlow User Group (Coming soon!)](https://www.meetup.com/TensorFlow-User-Group-Taipei/)
## Applications
## What is Data Science, actually?

Source: <http://drewconway.com/zia/2013/3/26/the-data-science-venn-diagram>
## A typical data science project might involve

Source: [R for Data Science](https://r4ds.had.co.nz/explore-intro.html)
## That is exactly how I manage my book
<https://www.datainpoint.com/data-science-in-action/>
## How do we apply Python on these applications?
- Getting data
- Preprocessing data
- Exploring data
- Predicting data
- Communicating with data
## Through modules and packages
## Getting data
- Using `json`, `pandas`, `lxml` to read different types of data
- Using `sqlalchemy` to access database
- Using `requests`, `lxml`, `beautifulsoup`, `pyquery`, `selenium`, `scrapy` to scrape webpage
## Preprocessing data
- Using `numpy` for vectorization
- Using `pandas` to handle tabular data
## Exploring data
Using `matplotlib`, `pandas` and `seaborn` to visualize data
## Predicting data
- Using `sklearn` for basic machine learning techniques
- Using `tensorflow`, `torch` for customized/advanced machine learning applications
## Communicating with data
- Using notebooks/slides with markdown for communication
- Using `plotly` for dynamic plots
## Practices
## Data science is different now
<http://veekaybee.github.io/2019/02/13/data-science-is-different/>
## Wait a sec...
Do you think this is legitimate?
## Since we are talking about data science
It is always more reliable to draw conclusions from **DATA** than from a couple of so-called "Experts".
## So let's find out by ourselves
[2018 Kaggle ML & DS Survey](https://www.kaggle.com/kaggle/kaggle-survey-2018)
## Importing libraries
```
import pandas as pd
import plotly_express as px
```
## Importing self-defined functions
```
from py4ds import barplot_multiple_choice, get_option_count
```
## Getting data
```
file_url = "https://s3-ap-northeast-1.amazonaws.com/kaggle-ds-survey-2018/"
df = pd.read_csv(file_url + 'multipleChoiceResponses.csv', skiprows=[1], low_memory=False)
response = pd.read_csv(file_url + 'freeFormResponses.csv', low_memory=False)
schema = pd.read_csv(file_url + 'SurveySchema.csv')
print('Shape of multipleChoiceResponses:', df.shape)
print('Shape of freeFormResponses:', response.shape)
print('Shape of schema:', schema.shape)
```
## Exploring data
```
df.head()
response.head()
schema.head()
print(schema["Q3"][0])
grouped = df.groupby("Q3")
grouped["Q3"].count().nlargest(5)
data = pd.DataFrame(grouped["Q3"].count().nlargest(5))
data = data.rename(columns={"Q3": "n_response"})
data = data.reset_index()
data
#category_orders = list(data["Q3"])
#px.bar(data, x="n_response", y="Q3", orientation="h", category_orders={"Q3": category_orders}, color="Q3", title="US, India, and China")
print(schema["Q6"][0])
grouped = df.groupby("Q6")
grouped["Q6"].count().nlargest(5)
data = pd.DataFrame(grouped["Q6"].count().nlargest(5))
data = data.rename(columns={"Q6": "n_response"})
data = data.reset_index()
data
#category_orders = list(data["Q6"])
#px.bar(data, x="n_response", y="Q6", orientation="h", category_orders={"Q6": category_orders}, color="Q6", title="Data Scientist, Software Engineer, and Data Analyst Dominate")
print(schema['Q7'][0])
non_student_df = df[df["Q7"] != "I am a student"]
print("How many non-student respondents: {}".format(non_student_df.shape[0]))
grouped = non_student_df.groupby("Q7")
grouped["Q7"].count().nlargest(5)
data = pd.DataFrame(grouped["Q7"].count().nlargest(5))
data = data.rename(columns={"Q7": "n_response"})
data = data.reset_index()
data
#category_orders = list(data["Q7"])
#px.bar(data, x="n_response", y="Q7", orientation="h", category_orders={"Q7": category_orders}, color="Q7", title="IT Industry and Academics adopt Data Science Approach")
print(schema['Q5'][0])
grouped = non_student_df.groupby("Q5")
grouped["Q5"].count().nlargest(10)
print(schema["Q12"][0])
grouped = non_student_df.groupby("Q12_MULTIPLE_CHOICE")
grouped["Q12_MULTIPLE_CHOICE"].count().sort_values(ascending=False)
print(schema["Q16"][0])
barplot_multiple_choice(non_student_df, column_start="Q16_Part", title="Python, SQL, and R", height= 600)
print(schema["Q19"][0])
barplot_multiple_choice(non_student_df, column_start="Q19_Part", title="Python ML Frameworks Dominate", height=600)
print(schema["Q21"][0])
barplot_multiple_choice(non_student_df, column_start="Q21_Part", title="Python and R Lead Visualization Frameworks, However...", height=600)
print(schema["Q36"][0])
barplot_multiple_choice(non_student_df, column_start="Q36_Part", title="Coursera, Udemy, and DataCamp", height=600)
print(schema["Q38"][0])
barplot_multiple_choice(non_student_df, column_start="Q38_Part", title="Kaggle forums and Medium blogs", height=600)
print(schema["Q29"][0])
barplot_multiple_choice(non_student_df, column_start="Q29_Part", title="Open source RDBMS dominates", height=600)
print(schema["Q11"][0])
get_option_count(non_student_df, "Q11")
print(schema["Q49"][0])
get_option_count(non_student_df, "Q49")
```
## So, what are the key takeaways?
- Find answers based on data
- Choose Python as your primary tool to step into data science
- Learn technical foundations (SQL and Git) together with Python
## Lacking the fundamentals is like fried shrimp without the sauce

- Dive into JavaScript and front-end programming if you love visualization
- Keep learning through [Coursera](https://www.coursera.org/), [Udemy](https://www.udemy.com/), [DataCamp](https://www.datacamp.com?tap_a=5644-dce66f&tap_s=194899-1fb421) and [edX](https://www.edx.org/)
- Be sure to subscribe to good Medium publications
- [Towards Data Science](https://towardsdatascience.com/)
- [DataInPoint](https://medium.com/datainpoint)
- [Pyradise](https://medium.com/pyradise)
## Further studies
- What topics are covered in top-tier data science programs?
- What are listed in Python data science job descriptions on [Indeed](https://tw.indeed.com/), [LinkedIn](https://www.linkedin.com/), or [104 人力銀行](https://www.104.com.tw)?
```
%pylab inline
```
# Statistical Learning
Statistical learning differs from machine learning in that we are not merely interested in how well a model fits: we are also interested in how to interpret the model and derive meaning from it.
* How to relate my covariates $X = \{ x_1, x_2, x_3, \ldots, x_p \}$ to the response $y$.
* Our model of the data is $y = f(x) + \epsilon$
* $f(x)$ is not necessarily linear,
* Error terms need not be normal.
* Goal: to develop an estimate of $f$, $\hat{f}$
* Two reasons to estimate $f$ with $\hat{f}$:
1. Make predictions (not necessarily informed by mechanisms, relationships among covariates),
* Want $\hat{y}$ to be close to $y$; $\hat{y} = \hat{f}(x)$
* Minimize Mean Squared Error:
$E(y-\hat{y})^2 = E[f(x) + \epsilon - \hat{f}(x)]^2$
$E(y-\hat{y})^2 = [f(x) - \hat{f}(x)]^2 + Var(\epsilon)$
```
import numpy as np
from scipy.stats import uniform
f = lambda x: np.log(x)
x = np.linspace(0.1, 5.1, 100)
y = f(x)
Eps = uniform.rvs(-1., 2., size=(100,))
plt.plot(x, y, label='$f(x)$', lw=3)
plt.scatter(x, y + Eps, label='y')
plt.xlabel('x')
plt.legend(loc='best')
plt.show()
```
* Goal: to develop an estimate of $f$, $\hat{f}$
* Two reasons to estimate $f$ with $\hat{f}$:
2. $\hat{f}$ -> making inference; we want to know _how_ the covariates $X$ affect $y$.
```
models = ['Subset selection lasso', 'least squares', 'generalized additive model trees',
'bagging, boosting', 'support vector machines']
pos = [(0, 1), (0.2, 0.8), (0.4, 0.6), (0.6, 0.1), (0.7, 0.3)]
xlabels = ['Restrictive', 'Flexible']
ylabels = ['Low', 'High']
plt.figure(figsize=(10, 7))
for m, p in zip(models, pos):
plt.text(p[0]+ 0.02, p[1]-0.05, m, size=16)
plt.xticks([0.07, 0.95], xlabels, size=16)
plt.yticks([0, 1], ylabels, size=16)
plt.ylabel('Interpretability', size=20)
plt.xlabel('Flexibility', size=20)
plt.show()
```
## How do we estimate $\hat{f}$?
### Parametric vs non-parametric methods
**Parametric methods**
* Assume some form for the relationship between X and y. For example:
$y = \beta_0 + \beta_1x + \epsilon$
$y = X\beta + \epsilon$
$logit(y) = X\beta + \epsilon$
* And fit the data by tweaking a few ($p \ll n$) beta terms (many fewer parameters than the number of observations).
**Non-parametric methods**
* Assume no form for $f$,
* or the form has $p \simeq n$
```
x = np.linspace(0., 1.2, 5)
plt.scatter(x[0:4], [0.1, 0.6, 0.25, 0.7])
plt.plot(x, [0.1, 0.6, 0.25, 0.7, 1.2])
plt.plot(x, x/1.5)
plt.scatter(1.2, 0., c='red')
plt.show()
```
We can fit these points perfectly with a cubic model, but that assumes the cubic form is correct.
What happens when we get a new data point $(x_0, y_0)$?
For non-parametric methods we need some way to penalize "wiggliness".
**Wiggliness** (definition): the cumulative change in the second derivative, $f''$.
Pros & cons:
* Parametric:
* Pros:
* More interpretable
* Requires fewer data
* Cons:
* More rigid
* More assumptions to make
* Non-parametric
* Pros:
* More flexible
* Fewer assumptions
* Cons:
* Need more data
* Harder to interpret
### Supervised vs. unsupervised algorithms
* in the supervised algorithm we have response variable, $y$
* unsupervised case, no response variable
* the response variable, $y$, supervises our selection of important covariates, $X$
Examples:
* Regression -- supervised
* NMDS/PCA -- unsupervised
* Diabetes risk -- supervised
```
plt.figure(figsize=(10, 5))
plt.subplot(121)
plt.title('Supervised')
plt.scatter([.0, .2, .1, .3], [.2, .1, .3, .4], c='red', label='nondiabetic')
plt.scatter([.6, .8, .9, .7], [.55, .74, .5, .8], c='blue', label='diabetic')
plt.ylabel('Weekly sugar intake')
plt.xlabel('BMI')
plt.legend(loc=2)
plt.subplot(122)
plt.title('Unsupervised')
plt.scatter([.6, .8, .9, .7]+[.0, .2, .1, .3], [.55, .74, .5, .8]+[.2, .1, .3, .4], c='black', label='diabetic')
plt.ylabel('Weekly sugar intake')
plt.xlabel('BMI')
plt.tight_layout()
```
In the unsupervised case, we don't know the patient groups.
### Classification & regression
**Regression:** response is continuous (either continuous or categorical covariates)
**Classification:** response is categorical
## Regression
### Assessing model accuracy
```
x = np.linspace(0., 1., 50)
y = x + np.random.random(size=50) - 0.5
plt.figure(figsize=(10, 5))
plt.subplot(121)
plt.title('Model A')
plt.scatter(x, y)
plt.plot(x, x)
plt.subplot(122)
plt.title('Model B')
plt.scatter(x, y)
plt.plot(x, [0.42]*50)
plt.tight_layout()
plt.show()
```
Model A is better because the $Ave(y-\hat{y})^2$ (Mean Squared Error) is smaller.
Consider a model with $n$ parameters (e.g. an $n$-degree polynomial): it can go through every data point, so the training MSE is zero!
If the model is too flexible (and we overfit the data), then we tend to do a bad job at predicting a new data point that was not used in tuning the model.
### Test data & training data
Take our data and split into two groups:
1. Training data: data used to tune the model(s) of interest
2. Test data: data used to assess the accuracy of each model (typically use MSE)
In general, $MSE_{training} \leq MSE_{test}$
Want to look at the impact of model complexity on both $MSE_{training}$ and $MSE_{test}$.
```
plt.figure(figsize=(7, 5))
x = np.linspace(1, 10, 99)
plt.plot(x, 1./x**0.5 - 0.1, label='$MSE_{training}$', lw=3)
plt.plot(np.linspace(1, 10, 7), [0.9, 0.6, 0.5, 0.45, 0.55, 0.7, 0.9], label='$MSE_{test}$', lw=3)
plt.ylabel('$MSE$')
plt.xlabel('flexibility')
plt.legend()
plt.show()
```
$MSE_{test}$ should bottom out around the "true" function. $MSE_{test}$ should never drop below the "true" amount of error/residuals. Goal is to minimize $MSE_{test}$.
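To make this concrete, here is a small simulation of our own (not from the original notes), using only `numpy`: polynomials of increasing degree are fit to one noisy training sample, and the training MSE keeps shrinking with flexibility while the test MSE stops improving.
```
rng = np.random.RandomState(0)
x_all = np.linspace(0., 1., 200)
y_all = np.sin(2 * np.pi * x_all) + rng.normal(scale=0.3, size=x_all.size)

# Random half of the points for training, the rest held out as test data
train_idx = rng.choice(x_all.size, size=100, replace=False)
test_idx = np.setdiff1d(np.arange(x_all.size), train_idx)

for degree in [1, 3, 5, 10]:
    fitted = np.poly1d(np.polyfit(x_all[train_idx], y_all[train_idx], degree))
    mse_train = np.mean((y_all[train_idx] - fitted(x_all[train_idx])) ** 2)
    mse_test = np.mean((y_all[test_idx] - fitted(x_all[test_idx])) ** 2)
    print("degree %2d  train MSE %.3f  test MSE %.3f" % (degree, mse_train, mse_test))
```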
### Bias/Variance trade-off
* It can be shown that for $y = f(x) + \epsilon$,
$E[y_0 - \hat{f}(x_0)]^2 = Var(\hat{f}(x_0)) + [Bias(\hat{f}(x_0))]^2 + Var(\epsilon)$
* $E[y_0 - \hat{f}(x_0)]^2$ -- Expected test set MSE
* $Var(\hat{f}(x_0))$ -- Measure of how much the $\hat{f}$ function would change if I got new data. If the model is well-fit, this should be small.
* $Bias(\hat{f}) = E[f(x_0) - \hat{f}(x_0)]$ -- How much am I going to be wrong because my $\hat{f}$ is too restrictive. Want a model that is flexible enough that this bias is small.
Here $(x_0, y_0)$ is a previously unseen test observation, not one of the training points.
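A quick simulation (ours, for illustration only) of the first two terms: fit a rigid and a flexible model to many simulated training sets and look at the spread (variance) and the systematic error (squared bias) of $\hat{f}(x_0)$ at a single point.
```
f_true = lambda x: np.sin(2 * np.pi * x)
x_grid = np.linspace(0., 1., 50)
x0 = 0.3
rng = np.random.RandomState(1)

predictions_at_x0 = {1: [], 9: []}  # degree 1 = rigid model, degree 9 = flexible model
for _ in range(200):  # 200 independent simulated training sets
    y_sim = f_true(x_grid) + rng.normal(scale=0.3, size=x_grid.size)
    for degree in predictions_at_x0:
        fit = np.poly1d(np.polyfit(x_grid, y_sim, degree))
        predictions_at_x0[degree].append(fit(x0))

for degree, values in predictions_at_x0.items():
    values = np.array(values)
    bias_squared = (values.mean() - f_true(x0)) ** 2
    print("degree %d  variance %.4f  squared bias %.4f" % (degree, values.var(), bias_squared))
```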
## Classification
### Assessing accuracy
* $\hat{y}$ will be categorical (as is $y$)
* Measure will be % of cases mis-classified
**Training error rate**: $ER = \frac{1}{n}\sum{I(y_i \neq \hat{y}_i)}$
$I(u) = 1$ if TRUE, $0$ if FALSE.
```
x = np.linspace(0., 1., 20)
y = [1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
plt.scatter(x, y)
plt.ylabel('Cougar occupied')
plt.xlabel('# of dogs')
```
$\hat{y} = 1$ (occupied) if $\hat{p}(x_0) > 0.5$; $\hat{y} = 0$ (unoccupied) if $\hat{p}(x_0) \leq 0.5$.
This makes the logistic regression a classifier.
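A hedged sketch of that rule on the toy cougar data above, assuming `scikit-learn` is available (the 0.5 cut-off is the one just described):
```
from sklearn.linear_model import LogisticRegression

X = x.reshape(-1, 1)                  # number of dogs as a one-column feature matrix
y_arr = np.array(y)
p_hat = LogisticRegression().fit(X, y_arr).predict_proba(X)[:, 1]  # estimated P(occupied | x)
y_hat = (p_hat > 0.5).astype(int)     # threshold at 0.5 to get a class label
training_error_rate = np.mean(y_hat != y_arr)
print(training_error_rate)
```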
Deep Learning
=============
Assignment 2
------------
Previously in `1_notmnist.ipynb`, we created a pickle with formatted datasets for training, development and testing on the [notMNIST dataset](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html).
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
```
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
```
First reload the data we generated in `1_notmnist.ipynb`.
```
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
```
Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
```
#Before reformatting
print(train_labels)
print(train_labels.shape)
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
#After formatting
print(train_labels)
print(train_labels.shape)
```
We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this:
* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:
with graph.as_default():
...
* Then you can run the operations on this graph as many times as you want by calling `session.run()`, providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:
with tf.Session(graph=graph) as session:
...
Let's load all the data into TensorFlow and build the computation graph corresponding to our training:
```
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random valued following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
```
Let's run this computation and iterate:
```
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.initialize_all_variables().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
```
Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data in a constant node, we create a `Placeholder` node which will be fed actual data at every call of `session.run()`.
```
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
```
Let's run it:
```
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
```
---
Problem
-------
Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units (nn.relu()) and 1024 hidden nodes. This model should improve your validation / test accuracy.
---
```
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
hidden_layer_size = 1024
weights1 = tf.Variable(
tf.truncated_normal([image_size*image_size, hidden_layer_size]))
biases1 = tf.Variable(tf.zeros([hidden_layer_size]))
hidden = tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1)
weights2 = tf.Variable(
tf.truncated_normal([hidden_layer_size, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
logits = tf.matmul(hidden, weights2) + biases2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weights1)+biases1), weights2) + biases2
)
test_prediction = tf.nn.softmax(
tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weights1)+biases1), weights2) + biases2
)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step", step, ":", l)
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
predictions
tf_train_dataset
from hashlib import md5
from collections import Counter
hashed = [md5(img).hexdigest() for img in batch_data]
Counter(hashed).most_common(10)
predictions.shape
%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(batch_data[111].reshape(28,28))
```
|
github_jupyter
|
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
#Before reformatting
print(train_labels)
print(train_labels.shape)
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
#After formatting
print(train_labels)
print(train_labels.shape)
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random valued following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.initialize_all_variables().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
hidden_layer_size = 1024
weights1 = tf.Variable(
tf.truncated_normal([image_size*image_size, hidden_layer_size]))
biases1 = tf.Variable(tf.zeros([hidden_layer_size]))
hidden = tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1)
weights2 = tf.Variable(
tf.truncated_normal([hidden_layer_size, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
logits = tf.matmul(hidden, weights2) + biases2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weights1)+biases1), weights2) + biases2
)
test_prediction = tf.nn.softmax(
tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weights1)+biases1), weights2) + biases2
)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step", step, ":", l)
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
predictions
tf_train_dataset
from hashlib import md5
from collections import Counter
hashed = [md5(img).hexdigest() for img in batch_data]
Counter(hashed).most_common(10)
predictions.shape
%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(batch_data[111].reshape(28,28))
<a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/intro/pandas_intro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Pandas
[Pandas](https://pandas.pydata.org/) is a widely used Python library for storing and manipulating tabular data, where feature columns may be of different types (e.g., scalar, ordinal, categorical, text). We give some examples of how to use it below.
For very large datasets, you might want to use [modin](https://github.com/modin-project/modin), which provides the same pandas API but scales to multiple cores, by using [dask](https://github.com/dask/dask) or [ray](https://github.com/ray-project/ray) on the backend.
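If you do go that route, modin is intended as a drop-in replacement, so switching is mostly a one-line change. The snippet below is only a sketch of that documented import pattern (the CSV filename is a placeholder, not a real file used in this notebook):

```
# Sketch only (not from the original notebook): modin mirrors the pandas API,
# so the usual pattern is to swap the import and keep the rest of the code unchanged.
import modin.pandas as pd  # instead of: import pandas as pd

df_big = pd.read_csv("some_large_file.csv")  # hypothetical file; same call as plain pandas
df_big.describe()
```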
### Install necessary libraries
```
# Standard Python libraries
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import time
import numpy as np
import glob
import matplotlib.pyplot as plt
import PIL
import imageio
from IPython import display
import sklearn
import seaborn as sns;
sns.set(style="ticks", color_codes=True)
import pandas as pd
pd.set_option('precision', 2) # 2 decimal places
pd.set_option('display.max_rows', 20)
pd.set_option('display.max_columns', 30)
pd.set_option('display.width', 100) # wide windows
```
### Auto-mpg dataset <a class="anchor" id="EDA-autompg"></a>
```
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',
'Acceleration', 'Year', 'Origin', 'Name']
df = pd.read_csv(url, names=column_names, sep='\s+', na_values="?")
# The last column (name) is a unique id for the car, so we drop it
df = df.drop(columns=['Name'])
df.info()
```
We notice that there are only 392 horsepower rows, but 398 of the others.
This is because the HP column has 6 **missing values** (also called NA, or
not available).
There are 3 main ways to deal with this:
- Drop the rows with any missing values using dropna()
- Drop any columns with any missing values using drop()
- Replace the missing values with some other value (e.g., the median) using fillna(). (This is called missing value imputation.)
For simplicity, we adopt the first approach.
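As a quick illustration (a sketch, not part of the original notebook), the three options look like this on the dataframe we just loaded; only the first one is actually applied in the rest of the notebook:

```
# Option 1: drop rows containing any missing value (what we do below).
df_rows = df.dropna()

# Option 2: drop columns containing any missing value.
df_cols = df.dropna(axis='columns')

# Option 3: impute, e.g. fill missing Horsepower values with the column median.
df_imputed = df.copy()
df_imputed['Horsepower'] = df_imputed['Horsepower'].fillna(df_imputed['Horsepower'].median())
```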
```
# Ensure same number of rows for all features.
df = df.dropna()
df.info()
# Summary statistics
df.describe(include='all')
# Convert Origin feature from int to categorical factor
df['Origin'] = df.Origin.replace([1,2,3],['USA','Europe','Japan'])
df['Origin'] = df['Origin'].astype('category')
# Let us check the categories (levels)
print(df['Origin'].cat.categories)
# Let us check the datatypes of all the features
print(df.dtypes)
# Let us inspect the data. We see meaningful names for Origin.
df.tail()
# Create latex table from the last 5 rows
tbl = df[-5:].to_latex(index=False, escape=False)
print(tbl)
# Plot mpg distribution for cars from different countries of origin
data = pd.concat( [df['MPG'], df['Origin']], axis=1)
fig, ax = plt.subplots()
ax = sns.boxplot(x='Origin', y='MPG', data=data)
ax.axhline(data.MPG.mean(), color='r', linestyle='dashed', linewidth=2)
#plt.savefig(os.path.join(figdir, 'auto-mpg-origin-boxplot.pdf'))
plt.show()
# Plot mpg distribution for cars from different years
data = pd.concat( [df['MPG'], df['Year']], axis=1)
fig, ax = plt.subplots()
ax = sns.boxplot(x='Year', y='MPG', data=data)
ax.axhline(data.MPG.mean(), color='r', linestyle='dashed', linewidth=2)
#plt.savefig(os.path.join(figdir, 'auto-mpg-year-boxplot.pdf'))
plt.show()
```
### Iris dataset <a class="anchor" id="EDA-iris"></a>
```
# Get the iris dataset and look at it
from sklearn.datasets import load_iris
iris = load_iris()
# show attributes of this object
print(dir(iris))
# Extract numpy arrays
X = iris.data
y = iris.target
print(np.shape(X)) # (150, 4)
print(np.c_[X[0:3,:], y[0:3]]) # concatenate columns
# The data is sorted by class. Let's shuffle the rows.
N = np.shape(X)[0]
rng = np.random.RandomState(42)
perm = rng.permutation(N)
X = X[perm]
y = y[perm]
print(np.c_[X[0:3,:], y[0:3]])
# Convert to pandas dataframe
df = pd.DataFrame(data=X, columns=['sl', 'sw', 'pl', 'pw'])
# create column for labels
df['label'] = pd.Series(iris.target_names[y], dtype='category')
# Summary statistics
df.describe(include='all')
# Peek at the data
df.head()
# Create latex table from the first 6 rows
tbl = df[:6].to_latex(index=False, escape=False)
print(tbl)
# 2d scatterplot
#https://seaborn.pydata.org/generated/seaborn.pairplot.html
import seaborn as sns;
sns.set(style="ticks", color_codes=True)
# Make a dataframe with nicer labels for printing
#iris_df = sns.load_dataset("iris")
iris_df = df.copy()
iris_df.columns = iris['feature_names'] + ['label']
g = sns.pairplot(iris_df, vars = iris_df.columns[0:3] , hue="label")
#save_fig("iris-scatterplot.pdf")
plt.show()
```
### Boston housing dataset <a class="anchor" id="EDA-boston"></a>
```
# Load data (creates numpy arrays)
boston = sklearn.datasets.load_boston()
X = boston.data
y = boston.target
# Convert to Pandas format
df = pd.DataFrame(X)
df.columns = boston.feature_names
df['MEDV'] = y.tolist()
df.describe()
# plot marginal histograms of each column (13 features, 1 response)
plt.figure()
df.hist()
plt.show()
# scatter plot of response vs each feature
nrows = 3; ncols = 4;
fig, ax = plt.subplots(nrows=nrows, ncols=ncols, sharey=True, figsize=[15, 10])
plt.tight_layout()
plt.clf()
for i in range(0,12):
plt.subplot(nrows, ncols, i+1)
plt.scatter(X[:,i], y)
plt.xlabel(boston.feature_names[i])
plt.ylabel("house price")
plt.grid()
#save_fig("boston-housing-scatter.pdf")
plt.show()
```
# Xarray
[Xarray](http://xarray.pydata.org/en/stable/quick-overview.html) generalizes pandas to multi-dimensional indexing. Put another way, xarray is a way to create multi-dimensional numpy arrays, where each dimension has a label (instead of having to remember axis ordering), and each value along each dimension can also have a specified set of allowable values (instead of having to be an integer index). This allows for easier slicing and dicing of data. We give some examples below.
```
import xarray as xr
```
We create a 2d DataArray, where the first dimension is labeled 'gender' and has values 'male', 'female' and 'other' for its coordinates; the second dimension is labeled 'age', and has integer coordinates. We also associate some arbitrary attributes to the array.
```
X = np.reshape(np.arange(15), (3,5))
print(X)
attrs = {'authors': ['John', 'Mary'], 'date': '2021-01-29'}
data = xr.DataArray(X,
dims=("gender", "age"),
coords={"gender": ["male", "female", "other"]},
attrs = attrs)
data
# select on dimension name and coordinate label
data.sel(gender="female")
v = data.sel(gender="female").values
print(v)
assert np.all(v == X[1,:])
# the dict indexing method is equivalent to data.sel(gender="other")
data.loc[dict(gender="other")]
data
# For assignment, we need to use the dict indexing method
data.loc[dict(gender="other")] = 42
data
# select on dimension name and coordinate value
data.sel(age=3)
v = data.sel(age=3).values
print(v)
assert np.all(v == X[:,3])
# select on dimension name and integer index
data.isel(gender=1)
# regular numpy indexing
data[1,:].values
```
We can also do [broadcasting](http://xarray.pydata.org/en/stable/computation.html#broadcasting-by-dimension-name) on xarrays.
```
a = xr.DataArray([1, 2], [("x", ["a", "b"])])
a
b = xr.DataArray([-1, -2, -3], [("y", [10, 20, 30])])
b
c = a*b
print(c.shape)
c
data2 = xr.DataArray([10,20,30],dims=("gender"), coords={"gender": ["male", "female", "other"]})
data2
c = data + data2
print(c.shape)
print(c.sel(gender="female"))
c
```
```
%pylab inline
%config InlineBackend.figure_format = 'retina'
import pandas as pd
import seaborn as sns
k35_df = pd.read_csv('mmetsp/Asterionellopsis_glacialis/k35/decision_nodes.csv', skipinitialspace=True)
k27_df = pd.read_csv('mmetsp/Asterionellopsis_glacialis/k27/decision_nodes.csv', skipinitialspace=True)
k35_df.head()
```
We can find the number of decision nodes in the dBG by counting unique hashes...
```
k27_df.hash.nunique(), k35_df.hash.nunique()
```
We'll make a new column for total degree, for convenience.
```
k35_df['degree'] = k35_df['l_degree'] + k35_df['r_degree']
k27_df['degree'] = k27_df['l_degree'] + k27_df['r_degree']
```
Let's start with the overall degree distribution over the entire construction process.
```
figsize(18,10)
fig, ax_mat = subplots(ncols=3, nrows=2)
top = ax_mat[0]
sns.distplot(k35_df.degree, kde=False, ax=top[0], bins=8)
sns.distplot(k35_df.l_degree, kde=False, ax=top[1], bins=5)
sns.distplot(k35_df.r_degree, kde=False, ax=top[2], bins=5)
bottom = ax_mat[1]
sns.distplot(k27_df.degree, kde=False, ax=bottom[0], bins=8)
sns.distplot(k27_df.l_degree, kde=False, ax=bottom[1], bins=5)
sns.distplot(k27_df.r_degree, kde=False, ax=bottom[2], bins=5)
```
So most decision nodes in this dataset have degree 3. Note that a few have degree 2; these are forks without handles.
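As a quick sanity check (a sketch reusing the dataframes above, not part of the original analysis), we can tally distinct decision nodes by total degree, de-duplicating repeated observations of the same node:

```
# Count distinct decision nodes per total degree for both K values.
print(k35_df.drop_duplicates('hash').degree.value_counts().sort_index())
print(k27_df.drop_duplicates('hash').degree.value_counts().sort_index())
```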
```
figsize(12,8)
sns.distplot(k35_df.position, kde=False, label='K=35')
sns.distplot(k27_df.position, kde=False, label='K=27')
legend()
k35_melted_df = k35_df.melt(id_vars=['hash', 'position'], value_vars=['l_degree', 'r_degree'], )
k27_melted_df = k27_df.melt(id_vars=['hash', 'position'], value_vars=['l_degree', 'r_degree'], )
k27_melted_df.head()
figsize(18,8)
sns.violinplot('position', 'value', 'variable', k27_melted_df)
k35_dnodes_per_read = k35_df.groupby('read_n').count().\
reset_index()[['read_n', 'hash']].rename({'hash': 'n_dnodes'}, axis='columns')
k27_dnodes_per_read = k27_df.groupby('read_n').count().\
reset_index()[['read_n', 'hash']].rename({'hash': 'n_dnodes'}, axis='columns')
ax = k35_dnodes_per_read.rolling(100, min_periods=10, on='read_n').mean().plot(x='read_n',
y='n_dnodes',
label='k = 35')
ax = k27_dnodes_per_read.rolling(100, min_periods=10, on='read_n').mean().plot(x='read_n',
y='n_dnodes',
label='k = 27',
ax=ax)
ax.xaxis.set_major_formatter(mpl.ticker.StrMethodFormatter("{x:,}"))
from goetia.minimizers import WKMinimizer
from goetia.processors import MinimizerProcessor
M = WKMinimizer(10, 25)
S = "GACAACGGTAAAAGTTCTAATGCTGCCGAGTCACGGGAAGGATAGAGTGAGTCCCACCATATGGCGCACC"
print(S)
for kmer, pos in M.get_minimizer_kmers(S):
print(pos * ' ', kmer, sep='')
%%time
proc = MinimizerProcessor(25, 25, 'minimizers.csv')
proc.process('/store/biodb/genomes/fugu/Takifugu_rubripes.FUGU5.dna_rm.toplevel.fa', output_interval=1)
```
```
"""
1. Data provider
a. Image data
b. random vector
2. Build compute graph
    a. generator
    b. discriminator
c. DCGAN
connect g and d
define loss
define train_op
3. training process
"""
import os
import sys
import pprint
import tensorflow.compat.v1 as tf
import tensorflow.io.gfile as gfile
from tensorflow.compat.v1 import logging
import pprint
import _pickle as cPickle
import numpy as np
import math
import random
from PIL import Image
from tensorflow.examples.tutorials.mnist import input_data
tf.compat.v1.disable_eager_execution()
mnist = input_data.read_data_sets('MNIST_data/', one_hot = True)
output_dir = './local_run'
if not os.path.exists(output_dir):
os.mkdir(output_dir)
class HParams:
def __init__(self,
z_dim,
                 # initial spatial size of the feature map produced from the random vector
init_conv_size,
                 # number of channels in each transposed-conv layer of the generator
g_channels,
                 # channels of each conv layer in the discriminator; stride 2 shrinks the feature map, so channels should grow
d_channels,
batch_size,
learning_rate,
beta1,
                 # size of the generated target image
image_size):
self.z_dim = z_dim
self.init_conv_size = init_conv_size
self.g_channels = g_channels
self.d_channels = d_channels
self.batch_size = batch_size
self.learning_rate = learning_rate
self.beta1 = beta1
self.image_size = image_size
def get_default_params():
return HParams(
z_dim=100,
init_conv_size=4,
g_channels=[128, 64, 32, 1],
d_channels=[32, 64, 128, 256],
batch_size=128,
learning_rate=0.002,
beta1=0.5,
image_size=32,
)
hps = get_default_params()
class MnistData(object):
def __init__(self, mnist_train, z_dim, img_size):
self._data = mnist_train
self._example_num = len(self._data)
        # randomly generated latent vectors
self._z_data = np.random.standard_normal((self._example_num, z_dim))
        # pointer into the shuffled data
self._indicator = 0
self._resize_mnist_img(img_size)
self._random_shuffle()
def _random_shuffle(self):
p = np.random.permutation(self._example_num)
self._z_data = self._z_data[p]
self._data = self._data[p]
def _resize_mnist_img(self, img_size):
"""
Resize mnist image to goal img_size
1. numpy -> PIL image
2. PIL image -> resize
        3. PIL image -> numpy
        :param img_size: target image size
:return:
"""
        # MNIST comes normalized to [0, 1]; restore values to the 0-255 range
data = np.asarray(self._data * 255, np.uint8)
# [example, 784] -> [example, 28, 28]
data = data.reshape((self._example_num, 1 ,28, 28))
data = data.transpose((0, 2, 3, 1))
new_data = []
for i in range(self._example_num):
img = data[i].reshape((28, 28))
            # numpy array -> PIL object
img = Image.fromarray(img)
img = img.resize((img_size, img_size))
img = np.asarray(img)
img = img.reshape((img_size, img_size, 1))
new_data.append(img)
new_data = np.asarray(new_data, dtype=np.float32)
new_data = new_data / 127.5 - 1
# self._data: [num_example, img_size, img_size, 1]
self._data = new_data
def next_batch(self, batch_size):
end_indicator = self._indicator + batch_size
if end_indicator > self._example_num:
self._random_shuffle()
self._indicator = 0
end_indicator = self._indicator + batch_size
assert end_indicator < self._example_num
batch_data = self._data[self._indicator: end_indicator]
batch_z = self._z_data[self._indicator: end_indicator]
self._indicator = end_indicator
return batch_data, batch_z
mnist_data = MnistData(mnist.train.images, hps.z_dim, hps.image_size)
def conv2d_transpose(inputs, out_channel, name, training, with_bn_relu = True):
"""
    Transposed convolution (deconvolution).
    Kernel size and stride are fixed.
:param inputs:
:param out_channel:
:param name:
:param training:
    :param with_bn_relu: whether to apply batch normalization and ReLU
:return:
"""
conv2d_trans = tf.layers.conv2d_transpose(inputs,
out_channel,
[5, 5],
strides = (2, 2),
padding = 'SAME')
if with_bn_relu:
bn = tf.layers.batch_normalization(conv2d_trans,
training = training)
relu = tf.nn.relu(bn)
return relu
else:
return conv2d_trans
def conv2d(inputs, out_channel, name, training):
"""Wrapper of conv2d."""
def leaky_relu(x, leak = 0.2, name = ''):
# x > 0 ? x : leak * x
return tf.maximum(x, x * leak, name=name)
with tf.variable_scope(name):
conv2d_output = tf.layers.conv2d(inputs,
out_channel,
[5, 5],
strides=(2,2),
padding='SAME')
bn = tf.layers.batch_normalization(conv2d_output, training=training)
return leaky_relu(bn, name='outputs')
class Generator(object):
"""
    Generator compute graph.
"""
def __init__(self, channels, init_conv_size):
self._channels = channels
self._init_conv_size = init_conv_size
self._reuse = False
def __call__(self, inputs, training):
inputs = tf.convert_to_tensor(inputs)
with tf.variable_scope('generator', reuse = self._reuse):
"""
random_vector -> fc -> self._channel[0] * init_conv_size ** 2 ->
reshape -> [init_conv_size, init_conv_size, channels[0]]
"""
with tf.variable_scope('inputs_conv'):
fc = tf.layers.dense(
inputs,
self._channels[0] * self._init_conv_size * self._init_conv_size)
conv0 = tf.reshape(fc,
[-1,
self._init_conv_size,
self._init_conv_size,
self._channels[0]])
bn0 = tf.layers.batch_normalization(conv0, training=training)
relu0 = tf.nn.relu(bn0)
            # running output of the transposed-convolution stack
deconv_inputs = relu0
for i in range(1, len(self._channels)):
with_bn_relu = (i != len(self._channels) - 1)
deconv_inputs = conv2d_transpose(
deconv_inputs,
self._channels[i],
"deconv-%d" % i,
training,
with_bn_relu
)
image_inputs = deconv_inputs
with tf.variable_scope('generate_imgs'):
# imgs value range: [-1, 1]
imgs = tf.tanh(image_inputs, name = 'imgs')
self._reuse = True
self.variables = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
scope='generator')
return imgs
class Discriminator(object):
"""
    Discriminator compute graph.
"""
def __init__(self, channels):
self._channels = channels
        # whether the variable scope is reused on subsequent calls
self._reuse = False
def __call__(self, inputs, training):
inputs = tf.convert_to_tensor(inputs, dtype=tf.float32)
conv_inputs = inputs
with tf.variable_scope('discriminator', reuse=self._reuse):
for i in range(len(self._channels)):
conv_inputs = conv2d(conv_inputs,
self._channels[i],
'conv-%d' % i,
training)
fc_inputs = conv_inputs
with tf.variable_scope('fc'):
                # flatten
flatten = tf.layers.flatten(fc_inputs)
                # fully-connected layer producing the two class logits (real / fake)
logits = tf.layers.dense(flatten, 2, name='logits')
self._reuse = True
        # collect all trainable parameters of the discriminator
self.variables = tf.get_collection(
tf.GraphKeys.TRAINABLE_VARIABLES,
scope='discriminator')
return logits
class DCGAN(object):
    # DCGAN compute graph
def __init__(self, hps):
g_channels = hps.g_channels
d_channels = hps.d_channels
self._batch_size = hps.batch_size
self._init_conv_size = hps.init_conv_size
self._z_dim = hps.z_dim
self._img_size = hps.image_size
self._generator = Generator(g_channels, self._init_conv_size)
self._discriminator = Discriminator(d_channels)
def build(self):
"""Builds the whole compute graph."""
self._z_placeholder = tf.placeholder(
tf.float32, (self._batch_size, self._z_dim))
self._img_placeholder = tf.placeholder(
tf.float32, (self._batch_size, self._img_size, self._img_size, 1))
generated_imgs = self._generator(
self._z_placeholder, training=True)
        # logits for the generated (fake) images
fake_img_logits = self._discriminator(
generated_imgs, training=True)
real_img_logits = self._discriminator(
self._img_placeholder, training=True)
        # generator loss: fake images should be classified as real
loss_on_fake_to_real = tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(
labels = tf.ones([self._batch_size], dtype=tf.int64),
logits = fake_img_logits))
        # discriminator loss: classify fakes as fake and real images as real
loss_on_fake_to_fake = tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(
labels = tf.zeros([self._batch_size], dtype=tf.int64),
logits = fake_img_logits))
loss_on_real_to_real = tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(
labels = tf.ones([self._batch_size], dtype=tf.int64),
logits = real_img_logits))
        # store the losses in graph collections (returned below as a dict)
tf.add_to_collection('g_losses', loss_on_fake_to_real)
tf.add_to_collection('d_losses', loss_on_fake_to_fake)
tf.add_to_collection('d_losses', loss_on_real_to_real)
loss = {
            # sum up the losses
'g': tf.add_n(tf.get_collection('g_losses'),
name = 'total_g_loss'),
'd': tf.add_n(tf.get_collection('d_losses'),
name = 'total_d_loss')
}
return (self._z_placeholder,
self._img_placeholder,
generated_imgs,
loss)
def build_train_op(self, losses, learning_rate, beta1):
"""Builds train op, should be called after build is called."""
g_opt = tf.train.AdamOptimizer(
learning_rate = learning_rate,
beta1 = beta1)
d_opt = tf.train.AdamOptimizer(
learning_rate = learning_rate,
beta1 = beta1)
        # apply each optimizer to its own loss and variable list
g_opt_op = g_opt.minimize(
losses['g'], var_list = self._generator.variables)
d_opt_op = d_opt.minimize(
losses['d'], var_list = self._discriminator.variables)
        # jointly train the two ops (g_opt_op and d_opt_op)
with tf.control_dependencies([g_opt_op, d_opt_op]):
return tf.no_op(name='train')
dcgan = DCGAN(hps)
z_placeholder, image_placeholder, generated_imgs, losses = dcgan.build()
train_op = dcgan.build_train_op(losses, hps.learning_rate, hps.beta1)
def combine_imgs(batch_imgs, img_size, rows = 8, cols = 16):
"""Combines small images in a batch into a big pic."""
# batch_imgs: [batch_size, img_size, img_size, 1]
result_big_img = []
for i in range(rows):
row_imgs = []
for j in range(cols):
# [img_size, img_size, 1]
img = batch_imgs[cols * i + j]
img = img.reshape((img_size, img_size))
            # de-normalize from [-1, 1] to [0, 255]
img = (img + 1) * 127.5
row_imgs.append(img)
row_imgs = np.hstack(row_imgs)
result_big_img.append(row_imgs)
    # [8*32, 16*32]: 8 rows x 16 columns of 32 x 32 images
result_big_img = np.vstack(result_big_img)
result_big_img = np.asarray(result_big_img, np.uint8)
result_big_img = Image.fromarray(result_big_img)
return result_big_img
init_op = tf.global_variables_initializer()
train_steps = 10000
with tf.Session() as sess:
sess.run(init_op)
for step in range(train_steps):
batch_img, batch_z = mnist_data.next_batch(hps.batch_size)
fetches = [train_op, losses['g'], losses['d']]
should_sample = (step + 1) % 50 == 0
if should_sample:
fetches += [generated_imgs]
output_values = sess.run(fetches,
feed_dict = {
z_placeholder: batch_z,
image_placeholder: batch_img
})
_, g_loss_val, d_loss_val = output_values[0:3]
logging.info('step: %4d, g_loss: %4.3f, d_loss: %4.3f'
% (step, g_loss_val, d_loss_val))
if should_sample:
gen_imgs_val = output_values[3]
gen_img_path = os.path.join(output_dir, '%5d-gen.jpg' % (step + 1))
gt_img_path = os.path.join(output_dir, '%5d-gt.jpg' % (step + 1))
gen_img = combine_imgs(gen_imgs_val, hps.image_size)
gt_img = combine_imgs(batch_img, hps.image_size)
gen_img.save(gen_img_path)
gt_img.save(gt_img_path)
```
# Gaia DR2 variability lightcurves
### Part III: What do the Gaia lightcurves look like?
gully
May 2, 2018
```
# %load /Users/obsidian/Desktop/defaults.py
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
! du -hs ../data/dr2/Gaia/gdr2/light_curves/csv/
df0 = pd.read_csv('../data/dr2/Gaia/gdr2/light_curves/csv/light_curves_1042504286338226688_1098703830327377408.csv.gz')
df0.shape
df0.head(2)
df0.tail(2)
```
Ok, this *flat file* is just what we want. It contains the flux as a function of time for unique sources, with additional metadata flags.
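For example, assuming only the columns used later in this notebook (`source_id` and `band`), a quick way to see how many photometric points each source has per band is the sketch below:

```
# Points per (source, band) -- a small sanity check on the flat-file layout.
df0.groupby(['source_id', 'band']).size().head(10)
```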
```
import glob
fns = glob.glob('../data/dr2/Gaia/gdr2/light_curves/csv/light_curves_*.csv.gz')
n_files = len(fns)
n_files
```
It looks like the filename encodes the range of sources housed in each file. Let's extract that metadata without having to read the files.
```
fn_df = pd.DataFrame({'fn':fns})
fn_df.head()
fn_df['basename'] = fn_df.fn.str.split('/').str[-1].str.split('light_curves_').str[-1].str.split('.csv.gz').str[0]
fn_df['low'] = fn_df.basename.str.split('_').str[0].astype(np.int64)
fn_df['high'] = fn_df.basename.str.split('_').str[1].astype(np.int64)
```
Now we can make a mask to find which file we want. Let's say we want the Gaia source: 66511970924353792
```
source = 66511970924353792
k2_source = 211059767
gaia_period = 0.771791
mask = (source > fn_df.low) & (source < fn_df.high)
mask.sum()
path = fn_df[mask].fn.values[0]
df_lc = pd.read_csv(path)
df_lc = df_lc[df_lc.source_id==source]
df_lc.shape
```
Not bad! We have a 96 point lightcurve!
```
df_lc.band.value_counts()
gi = df_lc.band == 'G'
plt.plot(df_lc.time[gi], df_lc.flux[gi], '.')
```
The Gaia photometry spans over 500 days! The mean starspot coverage fraction is not expected to stay coherent over such long timescales. There is a portion of the data that was taken contiguously; let's highlight those points.
```
plt.plot(np.mod(df_lc.time[gi], gaia_period), df_lc.flux[gi], '.')
alt = gi & (df_lc.time >1900) & (df_lc.time<1950)
plt.plot(np.mod(df_lc.time[alt], gaia_period), df_lc.flux[alt], 'o')
```
Seems plausible...
```
30.0*4000
from lightkurve import KeplerTargetPixelFile
tpf = KeplerTargetPixelFile.from_archive(k2_source)
k2_lc = tpf.to_lightcurve()
k2_lc = k2_lc[(k2_lc.flux == k2_lc.flux) & np.isfinite(k2_lc.flux) & (k2_lc.flux_err == k2_lc.flux_err)]
tpf.interact(lc=k2_lc)
```
The full K2 postage stamp contains another source, which would have easily been separated in Gaia.
```
# %load https://www.astroml.org/gatspy/periodic/lomb_scargle-1.py
from gatspy import periodic
model = periodic.LombScargle()
model.optimizer.period_range = (0.5, 1)
model.fit(k2_lc.time, k2_lc.flux, k2_lc.flux_err)
periods = np.linspace(0.5, 1, 10000)
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
scores = model.score(periods)
# Plot the results
fig, ax = plt.subplots(figsize=(8, 3))
fig.subplots_adjust(bottom=0.2)
ax.plot(periods, scores)
ax.set(xlabel='period (days)', ylabel='Lomb Scargle Power')
model.best_period
```
Gaia has 0.771791, close!
```
plt.plot(np.mod(df_lc.time[gi], 0.7779122), df_lc.flux[gi], '.')
```
Maybe slightly better coherence than the Gaia-based estimate.
```
cd source/optimizing/caching/
ls
# %load cache_deterministic.py
# file: cache_deterministic.py
# from Ziade 2008
"""Example for a deterministic cache
"""
import functools
from get_key import get_key #1
cache = {} #2
def memoize_deterministic(get_key=get_key, cache=cache): #3
"""Parameterized decorator for memoizing.
"""
def _memoize(function): #4
"""This takes the function.
"""
@functools.wraps(function)
def __memoize(*args, **kw): #5
"""This replaces the original function.
"""
key = get_key(function, *args, **kw) #6
try:
return cache[key] #7
except KeyError:
value = function(*args, **kw) #8
cache[key] = value #9
return value #10
return __memoize
return _memoize
# %load get_key.py
# file: get_key.py
# based on Ziade 2008
"""Generate a unique key for a function and its arguments.
"""
def get_key(function, *args, **kw): #1
"""Make key from module and function names as well as arguments.
"""
key = '%s.%s:' % (function.__module__,
function.__name__) #2
hash_args = [str(arg) for arg in args] #3
hash_kw = ['%s:%s' % (k, str(v))
for k, v in kw.items()] #4
return '%s::%s::%s' % (key, hash_args, hash_kw) #5
@memoize_deterministic()
def add(a,b):
print('adding')
return a + b
add(2, 3)
add(2, b=3)
# %load cache_non_deterministic.py
# file: cache_non_deterministic.py
# from Ziade 2008
"""Example for a cache that expires.
"""
import functools
import time
from get_key import get_key
cache = {}
def memoize_non_deterministic(get_key=get_key, storage=cache,
age=0): #1
"""Parameterized decorator that takes an expiration age.
"""
def _memoize(function):
"""This takes the function.
"""
@functools.wraps(function)
def __memoize(*args, **kw):
"""This replaces the original function.
"""
key = get_key(function, *args, **kw)
try:
value_age, value = storage[key] #2
deprecated = (age != 0 and
(value_age + age) < time.time()) #3
except KeyError:
deprecated = True #4
if not deprecated:
return value #5
storage[key] = time.time(), function(*args, **kw) #6
return storage[key][1] #7
return __memoize
return _memoize
from functools import lru_cache
lru_cache?
@lru_cache(maxsize=3)
def add(a,b):
print('adding')
return a + b
add(2,3)
add.cache_info()
add(2,3)
add.cache_info()
```
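The cells above only exercise the deterministic cache and `lru_cache`. As a rough usage sketch (not from the original source), the expiring cache defined by `memoize_non_deterministic` could be exercised like this:

```
import time

@memoize_non_deterministic(age=2)  # cached values expire after roughly 2 seconds
def slow_square(x):
    print('computing')
    return x * x

print(slow_square(4))   # computes and caches
print(slow_square(4))   # served from the cache, no 'computing' printed
time.sleep(3)
print(slow_square(4))   # cache entry expired, computes again
```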
# Measuring GAN using Frechet Inception Distance
## Outline
- Introduction
- Load Model
- Download Model
- Init Model
- Generate Images
- Measuring Frechet Inception Distance
1. Generate fake samples and (get) real samples
2. Measure mean ($\mu$) and covariance ($\Sigma$) of each samples
3. Calculate Frechet distance using the means and covariances
## Introduction
Frechet Inception Distance (FID) is an evaluation method proposed as an improvement over the Inception Score. Using the same Inception network as the Inception Score, FID compares the statistics of features extracted from real samples (drawn from the real dataset) with those of fake samples (generated by the model).
```
import os
import torch
# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
## Folder Configuration
from google.colab import drive
drive.mount('/content/drive')
ROOT = "/content/drive/My Drive/Colab Notebooks/DSC_UI_GAN/Batch1/W3/"
# Make dir if no exist
if not os.path.exists(ROOT):
os.makedirs(ROOT)
```
## Load Model
We will use the DCGAN model implemented in [PyTorch](https://github.com/pytorch/examples/tree/master/dcgan), with trained weights provided by [csinva/gan-pretrained-pytorch](https://github.com/csinva/gan-pretrained-pytorch).
### Download weights
```
%%bash
wget -O netD_epoch_199.pth https://github.com/DSC-UI-SRIN/Introduction-to-GAN/raw/master/3%20-%20GAN%20Evaluations/weight/netD_epoch_199.pth
wget -O netG_epoch_199.pth https://github.com/DSC-UI-SRIN/Introduction-to-GAN/raw/master/3%20-%20GAN%20Evaluations/weight/netG_epoch_199.pth
import os
import torch
import torchvision
import torch.nn as nn
from torchvision import transforms
from torchvision.utils import save_image
from torch.autograd import Variable
import matplotlib.pyplot as plt
import pylab
import numpy as np
class Generator(nn.Module):
def __init__(self, ngpu, nc=3, nz=100, ngf=64):
super(Generator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is Z, going into a convolution
nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False),
nn.BatchNorm2d(ngf * 8),
nn.ReLU(True),
# state size. (ngf*8) x 4 x 4
nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 4),
nn.ReLU(True),
# state size. (ngf*4) x 8 x 8
nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 2),
nn.ReLU(True),
# state size. (ngf*2) x 16 x 16
nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf),
nn.ReLU(True),
nn.ConvTranspose2d( ngf, nc, kernel_size=1, stride=1, padding=0, bias=False),
nn.Tanh()
)
def forward(self, input):
if input.is_cuda and self.ngpu > 1:
output = nn.parallel.data_parallel(self.main, input, range(self.ngpu))
else:
output = self.main(input)
return output
class Discriminator(nn.Module):
def __init__(self, ngpu, nc=3, ndf=64):
super(Discriminator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is (nc) x 64 x 64
nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf) x 32 x 32
nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 2),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*2) x 16 x 16
nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 4),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*4) x 8 x 8
nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 8),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*8) x 4 x 4
nn.Conv2d(ndf * 8, 1, 2, 2, 0, bias=False),
nn.Sigmoid()
)
def forward(self, input):
if input.is_cuda and self.ngpu > 1:
output = nn.parallel.data_parallel(self.main, input, range(self.ngpu))
else:
output = self.main(input)
return output.view(-1, 1).squeeze(1)
num_gpu = 1 if torch.cuda.is_available() else 0
D = Discriminator(ngpu=1).eval()
G = Generator(ngpu=1).eval()
# load weights
D.load_state_dict(torch.load("./netD_epoch_199.pth"))
G.load_state_dict(torch.load("./netG_epoch_199.pth"))
if torch.cuda.is_available():
D = D.cuda()
G = G.cuda()
```
## Generate samples from model
```
batch_size = 25
latent_size = 100
fixed_noise = torch.randn(batch_size, latent_size, 1, 1)
if torch.cuda.is_available():
fixed_noise = fixed_noise.cuda()
fake_images = G(fixed_noise)
fake_images_np = fake_images.cpu().detach().numpy()
fake_images_np = fake_images_np.reshape(fake_images_np.shape[0], 3, 32, 32)
fake_images_np = fake_images_np.transpose((0, 2, 3, 1))
R, C = 5, 5
for i in range(batch_size):
plt.subplot(R, C, i + 1)
plt.imshow(fake_images_np[i] * 0.5 + 0.5, interpolation='bilinear')
plt.axis('off')
plt.tight_layout()
plt.savefig(ROOT + "dcgan_sample.png")
plt.show()
```
## Measure FID on model
FID implementation by [mseitzer](https://github.com/mseitzer/pytorch-fid)
### 1. Generate fake samples and get real data samples
```
%%bash
wget -O inception.py https://github.com/mseitzer/pytorch-fid/raw/master/inception.py
from torchvision import transforms, datasets
from torch.utils.data import DataLoader
from torch.nn.functional import adaptive_avg_pool2d
from inception import InceptionV3
from scipy import linalg
from tqdm import tqdm
n_samples = 1000
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
cifar10 = datasets.CIFAR10('./data', transform=transform, download=True)
cifar10_loader = DataLoader(cifar10, batch_size=n_samples, shuffle=True)
cifar10_iter = iter(cifar10_loader)
# https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html
real_samples, _ = cifar10_iter.next()
real_samples.shape
fixed_noise = torch.randn(n_samples, latent_size, 1, 1)
if torch.cuda.is_available():
fixed_noise = fixed_noise.cuda()
fake_images = G(fixed_noise)
fake_images.shape
```
### 2. Calculate mean and covariance of Inception activations for each set of samples
```
def get_activations(files, model, batch_size=50, dims=2048, cuda=False):
"""Calculates the activations of the pool_3 layer for all images.
Params:
-- files : List of images data
-- model : Instance of inception model
-- batch_size : Batch size of images for the model to process at once.
-- dims : Dimensionality of features returned by Inception
-- cuda : If set to True, use GPU
Returns:
-- A numpy array of dimension (num images, dims) that contains the
activations of the given tensor when feeding inception with the
query tensor.
"""
model.eval()
if batch_size > len(files):
print(('Warning: batch size is bigger than the data size. '
'Setting batch size to data size'))
batch_size = len(files)
n_batches = len(files) // batch_size
n_used_imgs = n_batches * batch_size
pred_arr = np.empty((n_used_imgs, dims))
for i in tqdm(range(n_batches)):
print('\rPropagating batch %d/%d' % (i + 1, n_batches), end='', flush=True)
start = i * batch_size
end = start + batch_size
images = files[start:end]
# batch = torch.from_numpy(images).type(torch.FloatTensor)
if cuda:
# batch = batch.cuda()
batch = images.cuda()
pred = model(batch)[0]
# If model output is not scalar, apply global spatial average pooling.
# This happens if you choose a dimensionality not equal 2048.
if pred.shape[2] != 1 or pred.shape[3] != 1:
pred = adaptive_avg_pool2d(pred, output_size=(1, 1))
pred_arr[start:end] = pred.cpu().data.numpy().reshape(batch_size, -1)
print(' done')
return pred_arr
def calculate_activation_statistics(files, model, batch_size=50, dims=2048, cuda=False, verbose=False):
"""Calculation of the statistics used by the FID.
Params:
-- files : List of image files paths
-- model : Instance of inception model
-- batch_size : Size of batch per processing in Inception modl
-- dims : Dimensionality of features returned by Inception
-- cuda : If set to True, use GPU
-- verbose : If set to True and parameter out_step is given, the
number of calculated batches is reported.
Returns:
-- mu : The mean over samples of the activations of the pool_3 layer of
the inception model.
-- sigma : The covariance matrix of the activations of the pool_3 layer of
the inception model.
"""
act = get_activations(files, model, batch_size, dims, cuda)
mu = np.mean(act, axis=0)
sigma = np.cov(act, rowvar=False)
return mu, sigma
dims = 2048
batch_size = 50
cuda = torch.cuda.is_available()
block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[dims]
model = InceptionV3([block_idx], normalize_input=False)
if cuda:
model.cuda()
m_real, sigma_real = calculate_activation_statistics(real_samples, model, batch_size, dims, cuda)
m_fake, sigma_fake = calculate_activation_statistics(fake_images, model, batch_size, dims, cuda)
```
## Measure the Fréchet distance given the means and covariances
According to the [paper](https://arxiv.org/pdf/1706.08500.pdf), the Fréchet distance between the Gaussian $(m, C)$ fitted to the generated activations and the Gaussian $(m_w, C_w)$ fitted to the real activations is

$$d^2\big((m, C), (m_w, C_w)\big) = \lVert m - m_w \rVert_2^2 + \mathrm{Tr}\big(C + C_w - 2(C C_w)^{1/2}\big).$$
```
def calculate_frechet_distance(mu1, sigma1, mu2, sigma2, eps=1e-6):
"""Numpy implementation of the Frechet Distance. Stable version by Dougal J. Sutherland.
Params:
    -- mu1   : The sample mean over activations, precalculated on the generated data set.
-- mu2 : The sample mean over activations, precalculated on a representative data set.
-- sigma1: The covariance matrix over activations for generated samples.
-- sigma2: The covariance matrix over activations, precalculated on a representative data set.
Returns:
-- The Frechet distance calculated
"""
# Check dimension of mu and sigma
mu1 = np.atleast_1d(mu1)
mu2 = np.atleast_1d(mu2)
sigma1 = np.atleast_2d(sigma1)
sigma2 = np.atleast_2d(sigma2)
assert mu1.shape == mu2.shape, 'Training and test mean vectors have different lengths'
assert sigma1.shape == sigma2.shape, 'Training and test covariances have different dimensions'
# Calculate mu_1 - mu_2
diff = mu1 - mu2
    # Calculate the matrix square root of sigma_1 @ sigma_2
covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False)
# Product might be almost singular
if not np.isfinite(covmean).all():
msg = ('fid calculation produces singular product; '
'adding %s to diagonal of cov estimates') % eps
print(msg)
offset = np.eye(sigma1.shape[0]) * eps
covmean = linalg.sqrtm((sigma1 + offset).dot(sigma2 + offset))
# Numerical error might give slight imaginary component
if np.iscomplexobj(covmean):
if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-3):
m = np.max(np.abs(covmean.imag))
raise ValueError('Imaginary component {}'.format(m))
covmean = covmean.real
# Get trace of covmean
tr_covmean = np.trace(covmean)
# Return the calculated FID result
return (diff.dot(diff) + np.trace(sigma1) + np.trace(sigma2) - 2 * tr_covmean)
fid_value = calculate_frechet_distance(m_real, sigma_real, m_fake, sigma_fake)
print('FID score of model: {:3.5f}'.format(fid_value))
```
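As a quick sanity check of `calculate_frechet_distance`, the distance of a distribution to itself should be numerically zero, and shifting the mean while keeping the covariance fixed should contribute exactly the squared Euclidean shift. A toy example using only NumPy, independent of the Inception statistics above:
```
# Sanity check on synthetic "activations" with a tiny feature dimension.
rng = np.random.RandomState(0)
act_a = rng.randn(500, 8)                       # 500 samples, 8 features
act_b = act_a + 3.0                             # same covariance, mean shifted by 3 in every dim
mu_a, sigma_a = act_a.mean(axis=0), np.cov(act_a, rowvar=False)
mu_b, sigma_b = act_b.mean(axis=0), np.cov(act_b, rowvar=False)
print(calculate_frechet_distance(mu_a, sigma_a, mu_a, sigma_a))   # ~0
print(calculate_frechet_distance(mu_a, sigma_a, mu_b, sigma_b))   # ~8 * 3**2 = 72
```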
```
import pandas as pd
from plotly.offline import init_notebook_mode,iplot
import plotly.graph_objects as go
import cufflinks as cf
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
%matplotlib inline
init_notebook_mode(connected=True)
df_ = pd.read_json('News_Category_Dataset_v2.json', lines=True)
df_.info()
df_.head()
```
Dropping unnecessary columns and creating a combined text column
```
df_.query('category == "PARENTS" or category == "PARENTING"')
df_['text'] = df_['authors'] +' '+df_['headline'] +' '+ df_['short_description']
df = df_.drop(['link', 'date'], axis=1)
print(df.loc[0][['authors', 'headline', 'short_description', 'text']])
print(df.loc[0]['headline'])
df[['authors', 'headline', 'short_description', 'text']].head(30)
```
Visualizing the data
```
print(len(df_.category.value_counts()), len(df.category.value_counts()))
df_.category.value_counts()
#defining data
counts = df['category'].value_counts()
trace = go.Bar(x=counts.index, y=counts.values)
data=[trace]
#defining layout
layout = go.Layout(title='Categories count',xaxis=dict(title='Category'))
#defining figure and plotting
figure = go.Figure(data=data,layout=layout)
iplot(figure)
```
Many of the categories could be merged into a single one, as is the case with ARTS, CULTURE & ARTS, and ARTS & CULTURE.
```
df[df['category'] == 'PARENTING']['text'].iloc[20]
culture = ['ARTS & CULTURE', 'ARTS', 'CULTURE & ARTS']
worldpost = ['WORLDPOST', 'THE WORLDPOST']
parents = ['PARENTING', 'PARENTS']
df.loc[df['category'].isin(culture), 'category'] = "CULTURE"
df.loc[df['category'].isin(worldpost), 'category'] = "WORLDPOST"
df.loc[df['category'].isin(parents), 'category'] = "PARENTS"
df['category'].value_counts().plot(kind='bar')
cat = df['category'].value_counts()
print(len(cat))
print(cat)
(cat['POLITICS'] + cat['WELLNESS'] + cat['ENTERTAINMENT'])/cat.sum()*100
```
## Feature Engineering
### text cleaning
```
df.loc[3]['text']
df.loc[300]['text']
df.loc[3000]['text']
df.loc[29864]['text']
df.loc[123678]['text']
```
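The cells above only inspect a handful of raw `text` entries; no cleaning is applied yet. A minimal cleaning pass is sketched below. The specific steps (lowercasing, dropping URLs and punctuation, squeezing whitespace) are my own assumptions about what the cleaning should cover, not something the notebook prescribes.
```
import re

def clean_text(text):
    """Lowercase, strip URLs and non-letter characters, and squeeze whitespace."""
    text = str(text).lower()
    text = re.sub(r'https?://\S+', ' ', text)   # drop URLs
    text = re.sub(r'[^a-z\s]', ' ', text)       # keep letters only (assumption)
    return re.sub(r'\s+', ' ', text).strip()

df['clean_text'] = df['text'].fillna('').apply(clean_text)
df[['text', 'clean_text']].head()
```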
## Categories by weekday
```
df["weekday"] = pd.to_datetime(df_["date"]).dt.date.apply(lambda x: x.weekday())
def int2weekday(x):
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
return days[x]
df["weekday"] = df["weekday"].apply(int2weekday)
heat_map = df.groupby(["weekday", "category"]).size().reset_index(name="data")
heat_map['norm'] = heat_map['data'] / heat_map.groupby('category')['data'].transform('sum')
heat_map.tail()
heat_map2 = heat_map.pivot(index="weekday", columns="category", values="norm").fillna(0)
heat_map2 = heat_map2.reindex(["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"][::-1])
heat_map2.head(10)
import seaborn as sns
import matplotlib as pl
sns.set(font_scale=1.8)
%matplotlib inline
fig, ax = plt.subplots(figsize=(28,13))
#plt.xticks(fontsize=14)
ax.set_ylim(-0.5, 7+0.5)
sns.heatmap(heat_map2, cbar_kws={'label': 'percentage of the total articles'}, ax=ax)
```
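The same normalized weekday-by-category table can be built in one step with `pd.crosstab`; the sketch below should be equivalent to the groupby/pivot pipeline above, assuming `df['weekday']` is already filled in.
```
# Fraction of each category's articles published on each weekday, via crosstab.
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
heat_map_ct = pd.crosstab(df["weekday"], df["category"], normalize="columns").reindex(days[::-1])
sns.heatmap(heat_map_ct, cbar_kws={'label': 'percentage of the total articles'})
```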
## Categories by author
```
authors = df.groupby(['authors', 'category']).size().reset_index(name='data')
authors['norm'] = authors['data'] / authors.groupby('category')['data'].transform('sum')
authors['authors'] = authors['authors'].replace(r'^\s*$', 'UNK', regex=True)
authors.head()
authors2 = authors.pivot(index='authors', columns='category', values='data').fillna(0)
authors2.head()
ax = sns.heatmap(authors2)
X_train, X_test, Y_train, Y_test = train_test_split(df['text'], df['category'])
train = pd.concat([X_train, Y_train], axis=1)
test = pd.concat([X_test, Y_test], axis=1)
test.head()
a = [1,2,3,4]
a[::-1]
```
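The split above is not used further in this notebook. As a purely illustrative next step (my own sketch, not part of the original analysis), a TF-IDF plus logistic-regression baseline could be trained on it:
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

# Hypothetical baseline classifier for the category labels.
baseline = make_pipeline(
    TfidfVectorizer(max_features=50000, stop_words='english'),
    LogisticRegression(max_iter=1000))
baseline.fit(X_train.fillna(''), Y_train)
print(accuracy_score(Y_test, baseline.predict(X_test.fillna(''))))
```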