markdown | code | path | repo_name | license
---|---|---|---|---
We can do the lookup as before without the need to build vocabularies:
|
movie_title_hashing(["Star Wars (1977)", "One Flew Over the Cuckoo's Nest (1975)"])
|
courses/machine_learning/deepdive2/recommendation_systems/labs/featurization.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Defining the embeddings
Now that we have integer ids, we can use the Embedding layer to turn those into embeddings.
An embedding layer has two dimensions: the first dimension tells us how many distinct categories we can embed; the second tells us how large the vector representing each of them can be.
When creating the embedding layer for movie titles, we are going to set the first value to the size of our title vocabulary (or the number of hashing bins). The second is up to us: the larger it is, the higher the capacity of the model, but the slower it is to fit and serve.
|
# Turns positive integers (indexes) into dense vectors of fixed size.
movie_title_embedding = tf.keras.layers.Embedding(
    # Let's use the explicit vocabulary lookup.
    input_dim=movie_title_lookup.vocab_size(),
    output_dim=32
)
|
courses/machine_learning/deepdive2/recommendation_systems/labs/featurization.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
We can put the two together into a single layer which takes raw text in and yields embeddings.
|
movie_title_model = tf.keras.Sequential([movie_title_lookup, movie_title_embedding])
|
courses/machine_learning/deepdive2/recommendation_systems/labs/featurization.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Just like that, we can directly get the embeddings for our movie titles:
|
movie_title_model(["Star Wars (1977)"])
|
courses/machine_learning/deepdive2/recommendation_systems/labs/featurization.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
We can do the same with user embeddings:
|
user_id_lookup = tf.keras.layers.experimental.preprocessing.StringLookup()
user_id_lookup.adapt(ratings.map(lambda x: x["user_id"]))
user_id_embedding = tf.keras.layers.Embedding(user_id_lookup.vocab_size(), 32)
user_id_model = tf.keras.Sequential([user_id_lookup, user_id_embedding])
|
courses/machine_learning/deepdive2/recommendation_systems/labs/featurization.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Normalizing continuous features
Continuous features also need normalization. For example, the timestamp feature is far too large to be used directly in a deep model:
|
for x in ratings.take(3).as_numpy_iterator():
print(f"Timestamp: {x['timestamp']}.")
|
courses/machine_learning/deepdive2/recommendation_systems/labs/featurization.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
We need to process it before we can use it. While there are many ways in which we can do this, discretization and standardization are two common ones.
Standardization
Standardization rescales features to normalize their range by subtracting the feature's mean and dividing by its standard deviation. It is a common preprocessing transformation.
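Concretely, each raw value $x$ is mapped to
$$z = \frac{x - \mu}{\sigma},$$
where $\mu$ and $\sigma$ are the feature's mean and standard deviation estimated from the data.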
This can be easily accomplished using the tf.keras.layers.experimental.preprocessing.Normalization layer:
|
# Feature-wise normalization of the data.
timestamp_normalization = tf.keras.layers.experimental.preprocessing.Normalization()
timestamp_normalization.adapt(ratings.map(lambda x: x["timestamp"]).batch(1024))
for x in ratings.take(3).as_numpy_iterator():
print(f"Normalized timestamp: {timestamp_normalization(x['timestamp'])}.")
|
courses/machine_learning/deepdive2/recommendation_systems/labs/featurization.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Discretization
Another common transformation is to turn a continuous feature into a number of categorical features. This makes good sense if we have reasons to suspect that a feature's effect is non-continuous.
To do this, we first need to establish the boundaries of the buckets we will use for discretization. The easiest way is to identify the minimum and maximum value of the feature, and divide the resulting interval equally:
|
max_timestamp = ratings.map(lambda x: x["timestamp"]).reduce(
tf.cast(0, tf.int64), tf.maximum).numpy().max()
min_timestamp = ratings.map(lambda x: x["timestamp"]).reduce(
np.int64(1e9), tf.minimum).numpy().min()
timestamp_buckets = np.linspace(
min_timestamp, max_timestamp, num=1000)
print(f"Buckets: {timestamp_buckets[:3]}")
|
courses/machine_learning/deepdive2/recommendation_systems/labs/featurization.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Given the bucket boundaries we can transform timestamps into embeddings:
|
timestamp_embedding_model = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),
tf.keras.layers.Embedding(len(timestamp_buckets) + 1, 32)
])
for timestamp in ratings.take(1).map(lambda x: x["timestamp"]).batch(1).as_numpy_iterator():
print(f"Timestamp embedding: {timestamp_embedding_model(timestamp)}.")
|
courses/machine_learning/deepdive2/recommendation_systems/labs/featurization.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Processing text features
We may also want to add text features to our model. Usually, things like product descriptions are free form text, and we can hope that our model can learn to use the information they contain to make better recommendations, especially in a cold-start or long tail scenario.
While the MovieLens dataset does not give us rich textual features, we can still use movie titles. This may help us capture the fact that movies with very similar titles are likely to belong to the same series.
The first transformation we need to apply to text is tokenization (splitting into constituent words or word-pieces), followed by vocabulary learning, followed by an embedding.
The Keras tf.keras.layers.experimental.preprocessing.TextVectorization layer can do the first two steps for us:
|
# Text vectorization layer.
title_text = tf.keras.layers.experimental.preprocessing.TextVectorization()
title_text.adapt(ratings.map(lambda x: x["movie_title"]))
|
courses/machine_learning/deepdive2/recommendation_systems/labs/featurization.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Let's try it out:
|
for row in ratings.batch(1).map(lambda x: x["movie_title"]).take(1):
print(title_text(row))
|
courses/machine_learning/deepdive2/recommendation_systems/labs/featurization.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Each title is translated into a sequence of tokens, one for each piece we've tokenized.
We can check the learned vocabulary to verify that the layer is using the correct tokenization:
|
title_text.get_vocabulary()[40:45]
|
courses/machine_learning/deepdive2/recommendation_systems/labs/featurization.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
This looks correct: the layer is tokenizing titles into individual words.
To finish the processing, we now need to embed the text. Because each title contains multiple words, we will get multiple embeddings for each title. For use in a downstream model these are usually compressed into a single embedding. Models like RNNs or Transformers are useful here, but averaging all the words' embeddings together is a good starting point.
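As a rough sketch of that averaging baseline (assuming the title_text layer adapted above and the same 32-dimensional embeddings used earlier):
```python
# Sketch: collapse the per-word embeddings into a single vector per title by averaging.
title_text_embedding = tf.keras.Sequential([
    title_text,  # raw title -> sequence of integer token ids
    # mask_zero=True lets the pooling step ignore padding tokens.
    tf.keras.layers.Embedding(
        input_dim=len(title_text.get_vocabulary()),
        output_dim=32,
        mask_zero=True),
    # Average the word embeddings into one fixed-size embedding per title.
    tf.keras.layers.GlobalAveragePooling1D(),
])
title_text_embedding(["Star Wars (1977)"])
```
The MovieModel defined further below uses exactly this pattern for its text branch.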
Putting it all together
With these components in place, we can build a model that does all the preprocessing together.
User model
The full user model may look like the following:
|
class UserModel(tf.keras.Model):
def __init__(self):
super().__init__()
self.user_embedding = tf.keras.Sequential([
user_id_lookup,
tf.keras.layers.Embedding(user_id_lookup.vocab_size(), 32),
])
self.timestamp_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),
tf.keras.layers.Embedding(len(timestamp_buckets) + 2, 32)
])
self.normalized_timestamp = tf.keras.layers.experimental.preprocessing.Normalization()
def call(self, inputs):
# Take the input dictionary, pass it through each input layer,
# and concatenate the result.
return tf.concat([
self.user_embedding(inputs["user_id"]),
self.timestamp_embedding(inputs["timestamp"]),
self.normalized_timestamp(inputs["timestamp"])
], axis=1)
|
courses/machine_learning/deepdive2/recommendation_systems/labs/featurization.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Let's try it out:
|
user_model = UserModel()
user_model.normalized_timestamp.adapt(
ratings.map(lambda x: x["timestamp"]).batch(128))
for row in ratings.batch(1).take(1):
print(f"Computed representations: {user_model(row)[0, :3]}")
|
courses/machine_learning/deepdive2/recommendation_systems/labs/featurization.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Movie model
We can do the same for the movie model:
|
class MovieModel(tf.keras.Model):
def __init__(self):
super().__init__()
max_tokens = 10_000
self.title_embedding = tf.keras.Sequential([
movie_title_lookup,
tf.keras.layers.Embedding(movie_title_lookup.vocab_size(), 32)
])
self.title_text_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.TextVectorization(max_tokens=max_tokens),
tf.keras.layers.Embedding(max_tokens, 32, mask_zero=True),
# We average the embedding of individual words to get one embedding vector
# per title.
tf.keras.layers.GlobalAveragePooling1D(),
])
def call(self, inputs):
return tf.concat([
self.title_embedding(inputs["movie_title"]),
self.title_text_embedding(inputs["movie_title"]),
], axis=1)
|
courses/machine_learning/deepdive2/recommendation_systems/labs/featurization.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Let's try it out:
|
movie_model = MovieModel()
movie_model.title_text_embedding.layers[0].adapt(
ratings.map(lambda x: x["movie_title"]))
for row in ratings.batch(1).take(1):
print(f"Computed representations: {movie_model(row)[0, :3]}")
|
courses/machine_learning/deepdive2/recommendation_systems/labs/featurization.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
The OpenAI Gym toolkit includes the following environment for the "Cliff-Walking" problem:
|
print('OpenAI Gym environments for Cliff Walking Problem:')
[k for k in gym.envs.registry.env_specs.keys() if k.find('Cliff' , 0) >=0]
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
Load the Cliff-Walking environment:
|
env = gym.make('CliffWalking-v0')
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
This environment concerns the gridworld shown below, where the traveller's initial position (x) and the target to reach (T) are flagged appropriately. Along one edge of this gridworld there is a "Cliff", denoted by C. The reward is $-1$ on all transitions except those into the cliff region; stepping into this region incurs a reward of $-100$ and sends the agent instantly back to the start.
Once the environment is initialized you get the situation below. This is an episodic (undiscounted) task that starts at the traveller's starting point and is completed either when the goal is achieved, i.e. the traveller reaches the target location T, or when she happens to step into the cliff, in which case the environment is reset to its initial state.
|
env.render()
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
The traveller's possible actions are her movements in this grid:
- "UP": denoted by 0
- "RIGHT": denoted by 1
- "DOWN": denoted by 2
- "LEFT": denoted by 3
To get the new state at each step of an episode, pass the current action into the environment's .step() method (see the short example after the note below). The environment will then return a tuple (observation, reward, done, info), whose elements are explained below:
- observation (object): agent's observation of the current environment
- reward (float): amount of reward returned after previous action
- done (bool): whether the episode has ended, in which case further step() calls will return undefined results
- info (dict): contains auxiliary diagnostic information (helpful for debugging, and sometimes learning)
Note: At the termination of each episode, the programmer is responsible for resetting the environment.
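A minimal interaction sketch using this API (a random policy, just to illustrate the loop; rewards and termination follow the description above):
```python
# Sketch: run one episode with a random policy using the (obs, reward, done, info) API.
state = env.reset()
done = False
total_reward = 0
while not done:
    action = env.action_space.sample()            # pick a random action in {0, 1, 2, 3}
    state, reward, done, info = env.step(action)  # advance the environment by one step
    total_reward += reward
print('Episode finished with total reward: {}'.format(total_reward))
env.reset()  # reset the environment at termination, as noted above
```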
For further details concerning the CliffWalking-v0 environment of the OpenAI Gym toolkit, consult the docstring below.
|
help(env)
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
2. RL-Algorithms based on Temporal Difference TD(n): Prediction Problem
2a. Load the "Temporal Difference" Python class
Load the Python class PlotUtils(), which provides various plotting utilities, and create a new instance.
|
%run ../PlotUtils.py
plotutls = PlotUtils()
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
Load the Temporal Difference Python class, TemporalDifferenceUtils():
|
%run ../TDn_Utils.py
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
Instantiate the class for the environment of interest:
|
TD = TemporalDifferenceUtils(env)
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
2b. n-step TD Prediction (estimating $V \approx v_{\pi}$)
We define the functions below to help:
1. compute the optimal state-action values of this problem,
2. provide the optimal policy which is expected to be learned by the agent, and
3. visualize the result to verify that everything has been configured correctly.
|
def cliff_walking_optimal_q_values(grid_height=4, grid_width=12, cliff_index = np.s_[:,:,:]):
# Define the start position
start = [grid_height - 1, 0]
# Define the position of the target destination
goal = [grid_height - 1, grid_width - 1]
# Define a dictionary of possible actions
actions_dict = {}
actions = ['UP', 'RIGHT', 'DOWN', 'LEFT']
for k, v in zip(actions, range(0, len(actions))):
actions_dict[k] = v
# Define a "q_values" array for grid-world of interest
n_states = grid_height * grid_width
n_actions = len(actions_dict)
q_values = np.full((grid_height, grid_width, n_actions), fill_value=-100.)
# Determine "q_values" of optimal policy
n_steps = grid_width
q_values[:cliff_index[0],:,actions_dict['RIGHT']] = np.arange(0, n_steps, 1)
q_values[start[0], start[1], actions_dict['UP']] = 0.5
q_values[:goal[0],goal[1],actions_dict['DOWN']] = n_steps
return q_values
def cliff_walking_optimal_policy(env, state):
active_q = cliff_walking_optimal_q_values(grid_height=4, grid_width=12, cliff_index = np.s_[3,1:12,:])
active_q = active_q.reshape((env.observation_space.n, env.action_space.n))
return TD.epsilon_greedy_policy(env, active_q, state, epsilon=0.)
# print optimal policy
def print_optimal_policy(q_values, grid_height=4, grid_width=12):
# Define a helper dictionary of actions
actions_dict = {}
actions = ['UP', 'RIGHT', 'DOWN', 'LEFT']
for k, v in zip(actions, range(0, len(actions))):
actions_dict[k] = v
# Define the position of the target destination
GOAL = [3, 11]
# Reshape the "q_values" table to follow grid-world dimensionality
q_values = q_values.reshape((grid_height, grid_width, len(actions)))
optimal_policy = []
for i in range(0, grid_height):
optimal_policy.append([])
for j in range(0, grid_width):
if [i, j] == GOAL:
optimal_policy[-1].append('G')
continue
bestAction = np.argmax(q_values[i, j, :])
if bestAction == actions_dict['UP']:
optimal_policy[-1].append('\U00002191')
elif bestAction == actions_dict['RIGHT']:
optimal_policy[-1].append('\U00002192')
elif bestAction == actions_dict['DOWN']:
optimal_policy[-1].append('\U00002193')
elif bestAction == actions_dict['LEFT']:
optimal_policy[-1].append('\U00002190')
for row in optimal_policy:
print(*row)
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
Verify that the optimal policy has been configured correctly.
|
active_q = cliff_walking_optimal_q_values(grid_height=4, grid_width=12, cliff_index = np.s_[3,1:12,:])
print_optimal_policy(active_q, grid_height=4, grid_width=12)
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
Use the temporal_difference_prediction() method to predict the state values when the agent is made to follow a 5-step TD learning path.
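For reference, the textbook n-step TD prediction update that this method presumably implements (following Sutton and Barto) is
$$G_{t:t+n} = R_{t+1} + \gamma R_{t+2} + \dots + \gamma^{n-1} R_{t+n} + \gamma^{n} V(S_{t+n}),$$
$$V(S_t) \leftarrow V(S_t) + \alpha \left[ G_{t:t+n} - V(S_t) \right],$$
with $\alpha$ the step size and $\gamma$ the discount factor passed below.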
|
runs=10; n_episodes = 100
s_values = TD.temporal_difference_prediction(env, cliff_walking_optimal_policy,
runs=runs, n_episodes=n_episodes, decimals=2,
n_step=6, step_size=0.3, discount=1., epsilon=0.1)
title = 'State-value Predictions\n[Cliff-Walking task]'
plotutls.plot_state_values(s_values, grid_height=4, grid_width=12, title=title)
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
3. RL-Algorithms based on Temporal Difference: On-Policy TD(n) Control
3a. SARSA: On-Policy TD(n) Control
|
# Define TD(n) execution parameters
runs = 10 # Number of Independent Runs
n_episodes = 100 # Number of Episodes
# Various n-steps SARSA algorithms to try
print('Determine the n-steps you are interested in exploring...\n')
n_step_min = 2; n_step_max = 6
n_steps = np.arange(n_step_min, n_step_max + 1)
print('n_steps: {}'.format(n_steps), '\n')
# various discount factors to try
discount_fixed = 1.
print('Determine a fixed discount factor: {}\n'.format(discount_fixed))
# various step size parameters to try
step_size_fixed = 0.3
print('Determine a fixed step-size: {}\n'.format(step_size_fixed))
# various epsilon parameters to try
epsilon_fixed = 0.1
print('Determine a fixed epsilon: {}\n'.format(epsilon_fixed))
# Create a mesh-grid of trials
print('Create a dictionary of the RL-models of interest...\n')
n_steps, discounts = np.meshgrid(n_steps, discount_fixed)
n_steps = n_steps.flatten()
discounts = discounts.flatten()
# Create a dictionary of the RL-trials of interest
RL_trials = {"sarsa(0)":
{'epsilon': epsilon_fixed,
'step_size': step_size_fixed, 'discount': discount_fixed, 'n_step': 1}}
for n, trial in enumerate(list(zip(n_steps, discounts))):
key = 'sarsa({})'.format(trial[0]-1)
RL_trials[key] = {'epsilon': epsilon_fixed,
'step_size': step_size_fixed, 'discount': trial[1], 'n_step': trial[0]}
print('Number of RL-models to try: {}\n'.format(len(RL_trials)))
print('Let all RL-models be trained for {0:,} episodes and {1:,} independent runs...\n'.format(int(n_episodes), int(runs)))
rewards_per_trial_On_Policy_SARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
q_values_per_trial_On_Policy_SARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
for trial, params_dict in RL_trials.items():
# Read out parameters from "params_dict"
epsilon = params_dict['epsilon']
step_size = params_dict['step_size']
discount = params_dict['discount']
n_step = params_dict['n_step']
# Apply SARSA [on-policy TD(n) Control]
q_values, tot_rewards = TD.sarsa_on_policy_control(env,
runs=runs, n_episodes=n_episodes, n_step=n_step,
step_size=step_size, discount=discount, epsilon=epsilon)
# Update "rewards_per_trial" and "q_values_per_trial" OrderedDicts
rewards_per_trial_On_Policy_SARSA[trial] = tot_rewards
q_values_per_trial_On_Policy_SARSA[trial] = q_values
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
Examine the learning curves of the RL-models we trained.
|
title = 'Efficiency of the RL Method\n[SARSA On-Policy TD(n) Control]'
plotutls.plot_learning_curve(rewards_per_trial_On_Policy_SARSA, title=title,
cumulative_reward=True, lower_reward_ratio=None)
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
Visualize the agent's moves suggested by the solutions.
|
for trial in list(RL_trials.keys()):
print('\n', trial, ':')
q_vals = q_values_per_trial_On_Policy_SARSA[trial]
print_optimal_policy(q_vals, grid_height=4, grid_width=12)
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
3b. Expected SARSA: On-Policy TD(n) Control
|
# Define TD(n) execution parameters
runs = 10 # Number of Independent Runs
n_episodes = 100 # Number of Episodes
# Various n-steps SARSA algorithms to try
print('Determine the n-steps you are interested in exploring...\n')
n_step_min = 2; n_step_max = 6
n_steps = np.arange(n_step_min, n_step_max + 1)
print('n_steps: {}'.format(n_steps), '\n')
# various discount factors to try
discount_fixed = 1.
print('Determine a fixed discount factor: {}\n'.format(discount_fixed))
# various step size parameters to try
step_size_fixed = 0.3
print('Determine a fixed step-size: {}\n'.format(step_size_fixed))
# various epsilon parameters to try
epsilon_fixed = 0.1
print('Determine a fixed epsilon: {}\n'.format(epsilon_fixed))
# Create a mesh-grid of trials
print('Create a dictionary of the RL-models of interest...\n')
n_steps, discounts = np.meshgrid(n_steps, discount_fixed)
n_steps = n_steps.flatten()
discounts = discounts.flatten()
# Create a dictionary of the RL-trials of interest
RL_trials = {"sarsa(0)":
{'epsilon': epsilon_fixed,
'step_size': step_size_fixed, 'discount': discount_fixed, 'n_step': 1}}
for n, trial in enumerate(list(zip(n_steps, discounts))):
key = 'sarsa({})'.format(trial[0]-1)
RL_trials[key] = {'epsilon': epsilon_fixed,
'step_size': step_size_fixed, 'discount': trial[1], 'n_step': trial[0]}
print('Number of RL-models to try: {}\n'.format(len(RL_trials)))
print('Let all RL-models be trained for {0:,} episodes and {1:,} independent runs...\n'.format(int(n_episodes), int(runs)))
rewards_per_trial_On_Policy_ExpSARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
q_values_per_trial_On_Policy_ExpSARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
for trial, params_dict in RL_trials.items():
# Read out parameters from "params_dict"
epsilon = params_dict['epsilon']
step_size = params_dict['step_size']
discount = params_dict['discount']
n_step = params_dict['n_step']
# Apply Expected SARSA [on-policy TD(n) Control]
q_values, tot_rewards = TD.sarsa_on_policy_control(env,
runs=runs, n_episodes=n_episodes, n_step=n_step,
expected_sarsa = True,
step_size=step_size, discount=discount, epsilon=epsilon)
# Update "rewards_per_trial" and "q_values_per_trial" OrderedDicts
rewards_per_trial_On_Policy_ExpSARSA[trial] = tot_rewards
q_values_per_trial_On_Policy_ExpSARSA[trial] = q_values
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
Examine the learning curves of the RL-models we trained.
|
title = 'Efficiency of the RL Method\n[Expected SARSA On-Policy TD(n) Control]'
plotutls.plot_learning_curve(rewards_per_trial_On_Policy_ExpSARSA,title=title,
cumulative_reward=True, lower_reward_ratio=None)
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
Visualize the agent's moves suggested by the solutions.
|
for trial in list(RL_trials.keys()):
print('\n', trial, ':')
q_vals = q_values_per_trial_On_Policy_ExpSARSA[trial]
print_optimal_policy(q_vals, grid_height=4, grid_width=12)
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
4. RL-Algorithms based on Temporal Difference: Off-Policy TD(n) Control
4a. SARSA: Off-Policy TD(n) Control
|
# Define TD(n) execution parameters
runs = 10 # Number of Independent Runs
n_episodes = 500 # Number of Episodes
# Various n-steps SARSA algorithms to try
print('Determine the n-steps you are interested in exploring...\n')
n_step_min = 2; n_step_max = 6
n_steps = np.arange(n_step_min, n_step_max + 1)
print('n_steps: {}'.format(n_steps), '\n')
# various discount factors to try
discount_fixed = 1.
print('Determine a fixed discount factor: {}\n'.format(discount_fixed))
# various step size parameters to try
step_size_fixed = 0.8
print('Determine a fixed step-size: {}\n'.format(step_size_fixed))
# various epsilon parameters to try
epsilon_fixed = 0.1
print('Determine a fixed epsilon: {}\n'.format(epsilon_fixed))
# Create a mesh-grid of trials
print('Create a dictionary of the RL-models of interest...\n')
n_steps, discounts = np.meshgrid(n_steps, discount_fixed)
n_steps = n_steps.flatten()
discounts = discounts.flatten()
# Create a dictionary of the RL-trials of interest
RL_trials = {"sarsa(0)":
{'epsilon': epsilon_fixed,
'step_size': step_size_fixed, 'discount': discount_fixed, 'n_step': 1}}
for n, trial in enumerate(list(zip(n_steps, discounts))):
key = 'sarsa({})'.format(trial[0]-1)
RL_trials[key] = {'epsilon': epsilon_fixed,
'step_size': step_size_fixed, 'discount': trial[1], 'n_step': trial[0]}
print('Number of RL-models to try: {}\n'.format(len(RL_trials)))
print('Let all RL-models be trained for {0:,} episodes and {1:,} independent runs...\n'.format(int(n_episodes), int(runs)))
rewards_per_trial_Off_Policy_SARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
q_values_per_trial_Off_Policy_SARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
for trial, params_dict in RL_trials.items():
# Read out parameters from "params_dict"
epsilon = params_dict['epsilon']
step_size = params_dict['step_size']
discount = params_dict['discount']
n_step = params_dict['n_step']
# Apply SARSA [off-policy TD(n) Control]
q_values, tot_rewards = TD.sarsa_off_policy_control(env,
runs=runs, n_episodes=n_episodes, n_step=n_step,
step_size=step_size, discount=discount, epsilon=epsilon)
# Update "rewards_per_trial" and "q_values_per_trial" OrderedDicts
rewards_per_trial_Off_Policy_SARSA[trial] = tot_rewards
q_values_per_trial_Off_Policy_SARSA[trial] = q_values
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
Examine the learning curves of the RL-models we trained.
|
title = 'Efficiency of the RL Method\n[SARSA Off-Policy TD(n) Control]'
plotutls.plot_learning_curve(rewards_per_trial_Off_Policy_SARSA,title=title,
cumulative_reward=True, lower_reward_ratio=None)
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
Visualize the agent's moves suggested by the solutions.
|
for trial in list(RL_trials.keys()):
print('\n', trial, ':')
q_vals = q_values_per_trial_Off_Policy_SARSA[trial]
print_optimal_policy(q_vals, grid_height=4, grid_width=12)
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
4b. Expected SARSA: Off-Policy TD(n) Control
|
# Define TD(n) execution parameters
runs = 10 # Number of Independent Runs
n_episodes = 500 # Number of Episodes
# Various n-steps SARSA algorithms to try
print('Determine the n-steps you are interested in exploring...\n')
n_step_min = 2; n_step_max = 6
n_steps = np.arange(n_step_min, n_step_max + 1)
print('n_steps: {}'.format(n_steps), '\n')
# various discount factors to try
discount_fixed = 1.
print('Determine a fixed discount factor: {}\n'.format(discount_fixed))
# various step size parameters to try
step_size_fixed = 0.8
print('Determine a fixed step-size: {}\n'.format(step_size_fixed))
# various epsilon parameters to try
epsilon_fixed = 0.1
print('Determine a fixed epsilon: {}\n'.format(epsilon_fixed))
# Create a mesh-grid of trials
print('Create a dictionary of the RL-models of interest...\n')
n_steps, discounts = np.meshgrid(n_steps, discount_fixed)
n_steps = n_steps.flatten()
discounts = discounts.flatten()
# Create a dictionary of the RL-trials of interest
RL_trials = {"sarsa(0)":
{'epsilon': epsilon_fixed,
'step_size': step_size_fixed, 'discount': discount_fixed, 'n_step': 1}}
for n, trial in enumerate(list(zip(n_steps, discounts))):
key = 'sarsa({})'.format(trial[0]-1)
RL_trials[key] = {'epsilon': epsilon_fixed,
'step_size': step_size_fixed, 'discount': trial[1], 'n_step': trial[0]}
print('Number of RL-models to try: {}\n'.format(len(RL_trials)))
print('Let all RL-models be trained for {0:,} episodes and {1:,} independent runs...\n'.format(int(n_episodes), int(runs)))
rewards_per_trial_Off_Policy_ExpSARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
q_values_per_trial_Off_Policy_ExpSARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
for trial, params_dict in RL_trials.items():
# Read out parameters from "params_dict"
epsilon = params_dict['epsilon']
step_size = params_dict['step_size']
discount = params_dict['discount']
n_step = params_dict['n_step']
# Apply Expected SARSA [off-policy TD(n) Control]
q_values, tot_rewards = TD.sarsa_off_policy_control(env,
runs=runs, n_episodes=n_episodes, n_step=n_step,
expected_sarsa=True,
step_size=step_size, discount=discount, epsilon=epsilon)
# Update "rewards_per_trial" and "q_values_per_trial" OrderedDicts
rewards_per_trial_Off_Policy_ExpSARSA[trial] = tot_rewards
q_values_per_trial_Off_Policy_ExpSARSA[trial] = q_values
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
Examine the learning curves of the RL-models we trained.
|
title = 'Efficiency of the RL Method\n[Expected SARSA Off-Policy TD(n) Control]'
plotutls.plot_learning_curve(rewards_per_trial_Off_Policy_ExpSARSA,title=title,
cumulative_reward=True, lower_reward_ratio=None)
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
Visualize the agent's moves suggested by the solutions.
|
for trial in list(RL_trials.keys()):
print('\n', trial, ':')
q_vals = q_values_per_trial_Off_Policy_ExpSARSA[trial]
print_optimal_policy(q_vals, grid_height=4, grid_width=12)
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
5. A Unifying Algorithm: Off-Policy n-step $Q(\sigma)$, dynamic $\sigma$
Below we train a $Q(\sigma)$ algorithm using 0- to 12-step temporal difference updates.
The discount factor of the MDP, the step size of the updates, and the epsilon of the epsilon-greedy policy that this agent learns have been set as follows:
discount factor of MDP: $\gamma=1$
step size between the updates: $\alpha = 0.3$
epsilon of epsilon-greedy policy: $\varepsilon = 0.1$
The $\sigma$ parameter controls the degree of sampling used in each update:
$\sigma = 0$ results in a pure expectation without sampling, whereas
$\sigma = 1$ results in the other extreme, full sampling.
It has been set initially to $\sigma = 0.5$, but with the option sigma='dynamic' we ask the .off_policy_q_sigma() method of the TDn_Utils class to change $\sigma$ dynamically, towards larger or smaller values depending on the improvement achieved in terms of "Cumulative Mean Reward / Number of Episodes". The corrections to the parameter $\sigma$ are made every 10 episodes.
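For orientation, the two extremes correspond to the familiar one-step targets (a sketch; the general n-step $Q(\sigma)$ return interleaves these two choices at every step of the backup):
$$\sigma = 1 \ (\text{full sampling, Sarsa}): \quad G_t = R_{t+1} + \gamma\, Q(S_{t+1}, A_{t+1}),$$
$$\sigma = 0 \ (\text{pure expectation, tree backup}): \quad G_t = R_{t+1} + \gamma \sum_{a} \pi(a \mid S_{t+1})\, Q(S_{t+1}, a).$$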
|
# Define TD(n) execution parameters
runs = 10 # Number of Independent Runs
n_episodes = 300 # Number of Episodes
# Various n-steps SARSA algorithms to try
print('Determine the n-steps you are interested in exploring...\n')
n_step_min = 2; n_step_max = 13
n_steps = np.arange(n_step_min, n_step_max + 1)
print('n_steps: {}'.format(n_steps), '\n')
# various discount factors to try
discount_fixed = 1.
print('Determine a fixed discount factor: {}\n'.format(discount_fixed))
# various step size parameters to try
step_size_fixed = 0.3
print('Determine a fixed step-size: {}\n'.format(step_size_fixed))
# various epsilon parameters to try
epsilon_fixed = 0.1
print('Determine a fixed epsilon: {}\n'.format(epsilon_fixed))
# Determine a sigma parameter, controlling the degree of sampling at each step of the TD(n) algorithm
sigma = None
if not sigma:
sigma = 'Random Variable in [0,1] range'
print('Determine sigma: {}'.format(sigma))
print('[Note: Controls the degree of sampling at each step of the TD(n) algorithm]\n')
# Create a mesh-grid of trials
print('Create a dictionary of the RL-models of interest...\n')
n_steps, discounts = np.meshgrid(n_steps, discount_fixed)
n_steps = n_steps.flatten()
discounts = discounts.flatten()
# Create a dictionary of the RL-trials of interest
RL_trials = {"0-step Q(ฯ)":
{'epsilon': epsilon_fixed,
'step_size': step_size_fixed, 'discount': discount_fixed, 'n_step': 1}}
for n, trial in enumerate(list(zip(n_steps, discounts))):
key = '{}-step Q(σ)'.format(trial[0]-1)
RL_trials[key] = {'epsilon': epsilon_fixed,
'step_size': step_size_fixed, 'discount': trial[1], 'n_step': trial[0]}
print('Number of RL-models to try: {}\n'.format(len(RL_trials)))
print('Let all RL-models be trained for {0:,} episodes and {1:,} independent runs...\n'.format(int(n_episodes), int(runs)))
rewards_per_trial_Off_Policy_QSigma = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
q_values_per_trial_Off_Policy_QSigma = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
for trial, params_dict in RL_trials.items():
# Read out parameters from "params_dict"
epsilon = params_dict['epsilon']
step_size = params_dict['step_size']
discount = params_dict['discount']
n_step = params_dict['n_step']
# Apply Q(σ) [off-policy TD(n) Control]
q_values, tot_rewards = TD.off_policy_q_sigma(env,
runs=runs, n_episodes=n_episodes, n_step=n_step, sigma='dynamic',
step_size=step_size, discount=discount, epsilon=epsilon)
# Update "rewards_per_trial" and "q_values_per_trial" OrderedDicts
rewards_per_trial_Off_Policy_QSigma[trial] = tot_rewards
q_values_per_trial_Off_Policy_QSigma[trial] = q_values
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
Reward achieved versus the number of episodes: first six trials, 0- to 5-step $Q(\sigma)$
|
first6_RL_trials = list(RL_trials.keys())[:6]
rewards_per_trial = OrderedDict((label, rewards_per_trial_Off_Policy_QSigma[label]) for label in first6_RL_trials)
title = 'Efficiency of the RL Method\n[n-step $\mathbf{Q(\sigma)}$ (Off-Policy TD(n) Control, first 6 trials)]'
plotutls.plot_learning_curve(rewards_per_trial, title=title,
cumulative_reward=True, lower_reward_ratio=None)
for trial in first6_RL_trials:
print('\n', trial, ':')
q_vals = q_values_per_trial_Off_Policy_QSigma[trial]
print_optimal_policy(q_vals, grid_height=4, grid_width=12)
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
Reward achieved versus the number of episodes: remaining six trials, 6- to 12-step $Q(\sigma)$
|
rest_RL_trials = list(RL_trials.keys())[5:] #+ [first6_RL_trials[0]]
rewards_per_trial = OrderedDict((label, rewards_per_trial_Off_Policy_QSigma[label]) for label in rest_RL_trials)
title = 'Efficiency of the RL Method\n[n-step $\mathbf{Q(\sigma)}$ (Off-Policy TD(n) Control, rest 6 trials)]'
plotutls.plot_learning_curve(rewards_per_trial, title=title,
cumulative_reward=True, lower_reward_ratio=None)
for trial in rest_RL_trials:
print('\n', trial, ':')
q_vals = q_values_per_trial_Off_Policy_QSigma[trial]
print_optimal_policy(q_vals, grid_height=4, grid_width=12)
|
Reinforcement-Learning/TDn-models/01.CliffWalking_with_TD(n)-models.ipynb
|
tgrammat/ML-Data_Challenges
|
apache-2.0
|
Sticky keys causing issues? Need password feedback?
Had an issue with my keyboard where a few worn keys were sticking, so presses weren't detected or showed up twice. This caused constant password auth failures, so a quick Google search returned the following results:
1) Change password entry to show * (asterisk) instead of no feedback - less secure!
```bash
# run command
sudo visudo
```
```bash
# change
Defaults env_reset
# to
Defaults env_reset,pwfeedback
```
2) Change from VI to Nano or Emacs etc.
```bash
export VISUAL=nano; visudo
```
Note: use spaces, not tabs.
Changing Git author info
source
Check out clean repo:
```bash
git clone --bare https://github.com/[user]/[repo].git
cd [repo].git
```
create git-author-rewrite.sh file:
```bash
#!/bin/sh
git filter-branch --env-filter '
OLD_EMAIL="your-old-email@example.com"
CORRECT_NAME="Your Correct Name"
CORRECT_EMAIL="your-correct-email@example.com"
if [ "$GIT_COMMITTER_EMAIL" = "$OLD_EMAIL" ]
then
export GIT_COMMITTER_NAME="$CORRECT_NAME"
export GIT_COMMITTER_EMAIL="$CORRECT_EMAIL"
fi
if [ "$GIT_AUTHOR_EMAIL" = "$OLD_EMAIL" ]
then
export GIT_AUTHOR_NAME="$CORRECT_NAME"
export GIT_AUTHOR_EMAIL="$CORRECT_EMAIL"
fi
' --tag-name-filter cat -- --branches --tags
```
make executable:
```bash
chmod +x git-author-rewrite.sh
```
review changes:
```bash
git log
```
push changes:
```bash
git push --force --tags origin 'refs/heads/*'
```
cleanup:
```bash
cd ..
rm -rf [repo].git
```
Managing Remotes
[Managing Remotes Documentation](https://git-scm.com/book/ch2-5.html)
[Multiple push remotes](http://stackoverflow.com/questions/14290113/git-pushing-code-to-two-remotes)
Show current remotes:
```bash
git remote -v
```
Add an "all" remote:
```bash
git remote add all git://original/repo.git
git remote -v
```
Add another repo to the remote
```bash
git remote set-url --add --push all git://another/repo.git
```
This will replace your original push, so simply add it back in:
```bash
git remote set-url --add --push all git://original/repo.git
```
Now you should see both pushes:
```bash
git remote -v
```
Git general
Quick Reference
What's my name?
Linux Kernel Version
```bash
uname -r
```
Ubuntu version
```bash
lsb_release -sc
```
|
print("this is a test of the emergency broadcast system")
%%html
<style>
html {
font-size: 62.5% !important; }
body {
font-size: 1.5em !important; /* currently ems cause chrome bug misinterpreting rems on body element */
line-height: 1.6 !important;
font-weight: 400 !important;
font-family: "Raleway", "HelveticaNeue", "Helvetica Neue", Helvetica, Arial, sans-serif !important;
color: #222 !important; }
div{ border-radius: 0px !important; }
div.CodeMirror-sizer{ background: rgb(244, 244, 248) !important; }
div.input_area{ background: rgb(244, 244, 248) !important; }
div.out_prompt_overlay:hover{ background: rgb(244, 244, 248) !important; }
div.input_prompt:hover{ background: rgb(244, 244, 248) !important; }
h1, h2, h3, h4, h5, h6 {
color: #333 !important;
margin-top: 0 !important;
margin-bottom: 2rem !important;
font-weight: 300 !important; }
h1 { font-size: 4.0rem !important; line-height: 1.2 !important; letter-spacing: -.1rem !important;}
h2 { font-size: 3.6rem !important; line-height: 1.25 !important; letter-spacing: -.1rem !important; }
h3 { font-size: 3.0rem !important; line-height: 1.3 !important; letter-spacing: -.1rem !important; }
h4 { font-size: 2.4rem !important; line-height: 1.35 !important; letter-spacing: -.08rem !important; }
h5 { font-size: 1.8rem !important; line-height: 1.5 !important; letter-spacing: -.05rem !important; }
h6 { font-size: 1.5rem !important; line-height: 1.6 !important; letter-spacing: 0 !important; }
@media (min-width: 550px) {
h1 { font-size: 5.0rem !important; }
h2 { font-size: 4.2rem !important; }
h3 { font-size: 3.6rem !important; }
h4 { font-size: 3.0rem !important; }
h5 { font-size: 2.4rem !important; }
h6 { font-size: 1.5rem !important; }
}
p {
margin-top: 0 !important; }
a {
color: #1EAEDB !important; }
a:hover {
color: #0FA0CE !important; }
code {
padding: .2rem .5rem !important;
margin: 0 .2rem !important;
font-size: 90% !important;
white-space: nowrap !important;
background: #F1F1F1 !important;
border: 1px solid #E1E1E1 !important;
border-radius: 4px !important; }
pre > code {
display: block !important;
padding: 1rem 1.5rem !important;
white-space: pre !important; }
button{ border-radius: 0px !important; }
.navbar-inner{ background-image: none !important; }
select, textarea{ border-radius: 0px !important; }
</style>
|
Linux Tools & Tricks.ipynb
|
JENkt4k/pynotes-general
|
gpl-3.0
|
Get the Active Window on Linux
Get active window title in X
- Original Code had the following error: TypeError: can't use a string pattern on a bytes-like object
<br/>
Obtain Active window using Python
Corrected code is now here.
"import wnck" only works with Python 2.x; for Python 3.x, pypie and wx were the only options I found so far.
|
import sys
import os
from subprocess import PIPE, Popen
import re
def get_active_window_title():
root = Popen(['xprop', '-root', '_NET_ACTIVE_WINDOW'], stdout=PIPE)
for line in root.stdout:
m = re.search(b'^_NET_ACTIVE_WINDOW.* ([\w]+)$', line)
if m != None:
id_ = m.group(1)
id_w = Popen(['xprop', '-id', id_, 'WM_NAME'], stdout=PIPE)
break
if id_w != None:
for line in id_w.stdout:
match = re.match(b"WM_NAME\(\w+\) = (?P<name>.+)$", line)
if match != None:
return match.group("name")
return "Active window not found"
get_active_window_title()
import time
time.sleep(2)
get_active_window_title()
|
Linux Tools & Tricks.ipynb
|
JENkt4k/pynotes-general
|
gpl-3.0
|
Import some data to play with
|
bc = datasets.load_breast_cancer()
X = bc.data
y = bc.target
random_state = np.random.RandomState(0)
# shuffle and split training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5,
random_state=random_state)
|
examples/jkeung/testing.ipynb
|
pdamodaran/yellowbrick
|
apache-2.0
|
Split the data and prepare data for ROC Curve
|
# Learn to predict each class against the other
classifier = svm.SVC(kernel='linear', probability=True, random_state=random_state)
y_score = classifier.fit(X_train, y_train).decision_function(X_test)
# Compute ROC curve and ROC area for each class
fpr, tpr, _ = roc_curve(y_test, y_score)
roc_auc = auc(fpr, tpr)
|
examples/jkeung/testing.ipynb
|
pdamodaran/yellowbrick
|
apache-2.0
|
Plot ROC Curve using Matplotlib
|
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
|
examples/jkeung/testing.ipynb
|
pdamodaran/yellowbrick
|
apache-2.0
|
Create ROCAUC using YellowBrick
|
import yellowbrick as yb
from yellowbrick.classifier import ROCAUC
visualizer = ROCAUC(classifier)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
|
examples/jkeung/testing.ipynb
|
pdamodaran/yellowbrick
|
apache-2.0
|
Loading GBT spectra from GBTIDL ASCII output
|
input_filename = 'data/HS0033+4300_GBT.dat'
x = GBTspec.from_ascii(input_filename)
x.plotspectrum()
x.velocity[0:5]
x.Tb[0:5]
|
pyND/gbt/docs/GBTspec.usage.ipynb
|
jchowk/pyND
|
gpl-3.0
|
Metadata...
|
x.meta.keys()
x.meta['object'],x.meta['RA'],x.meta['DEC']
|
pyND/gbt/docs/GBTspec.usage.ipynb
|
jchowk/pyND
|
gpl-3.0
|
Loading GBT spectra from GBTIDL FITS format
Loading from the list of objects
|
input_filename = 'data/GBTdata.fits'
y = GBTspec.from_GBTIDLindex(input_filename)
y.plotspectrum()
|
pyND/gbt/docs/GBTspec.usage.ipynb
|
jchowk/pyND
|
gpl-3.0
|
Loading with an object name:
|
input_filename = 'data/GBTdata.fits'
object_name = 'HS0033+4300'
z = GBTspec.from_GBTIDL(input_filename,object_name)
z.plotspectrum()
|
pyND/gbt/docs/GBTspec.usage.ipynb
|
jchowk/pyND
|
gpl-3.0
|
Resample the results to a coarser velocity grid
This is a flux-conserving process.
|
z_new = z.copy()
new_velocity = np.arange(-400,100,10.)
z_new.resample(new_velocity,masked=True)
plt.figure(figsize=(8,5))
plt.plot(z.velocity,z.Tb,drawstyle='steps-mid',label='Original')
plt.plot(z_new.velocity,z_new.Tb,drawstyle='steps-mid',label='Resampled',lw=4)
plt.xlim(-100,50)
plt.legend(loc='upper left')
plt.xlim(-200,50)
plt.xlabel('Velocity')
plt.ylabel('Tb [K]');
|
pyND/gbt/docs/GBTspec.usage.ipynb
|
jchowk/pyND
|
gpl-3.0
|
Compare the two results
In this example, the results are slightly different, as the ASCII data are saved in the OPTICAL-LSR frame, while the GBTIDL data are saved using the RADI-LSR, the radio astronomical definition of the LSR.
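For reference, the two velocity conventions relative to the rest frequency $\nu_0$ of the 21 cm line are
$$v_{\mathrm{opt}} = c\,\frac{\nu_0 - \nu}{\nu}, \qquad v_{\mathrm{rad}} = c\,\frac{\nu_0 - \nu}{\nu_0},$$
which is what the conversion cells below implement.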
|
plt.plot(z.velocity,z.Tb,drawstyle='steps-mid',label='RADIO')
plt.plot(x.velocity,x.Tb,drawstyle='steps-mid',label='OPTICAL')
plt.xlim(-100,50)
plt.legend(loc='upper left')
|
pyND/gbt/docs/GBTspec.usage.ipynb
|
jchowk/pyND
|
gpl-3.0
|
The following is not yet working...
Change OPTICAL to RADIO
N.B. This approach doesn't seem to shift the spectra by enough to be the source of the difference...
|
light_speed = np.float64(c.c.to('m/s').value)
nu0 = np.float64(1420405800.0000000000000000000)
# First calculate frequency from optical:
nu = (nu0/(1+(x.velocity)*1000./light_speed))
# Calculate radio definition
vrad =light_speed*((nu0-nu)/nu0)/1000.
plt.figure(figsize=(8,5))
plt.plot(y.velocity,y.Tb,drawstyle='steps-mid',label='RADIO')
plt.plot(vrad,x.Tb,drawstyle='steps-mid',label='OPTICAL-->RADIO',zorder=0,linewidth=3)
plt.xlim(-100,50)
plt.legend(loc='upper left')
|
pyND/gbt/docs/GBTspec.usage.ipynb
|
jchowk/pyND
|
gpl-3.0
|
Change RADIO to OPTICAL
|
light_speed = (c.c.to('m/s').value)
nu0 = (1420405800.0000000000000000000)
# Frequency from radio:
nu = nu0*(1-(x.velocity)*1000./light_speed)
# Calculate radio definition
vopt = (light_speed/1000.)*((nu0-nu)/nu)
plt.plot(vopt,y.Tb,drawstyle='steps-mid',label='RADIO-->OPTICAL')
plt.plot(x.velocity,x.Tb,drawstyle='steps-mid',label='OPTICAL',zorder=0,linewidth=3)
plt.xlim(-100,50)
plt.legend(loc='upper left')
|
pyND/gbt/docs/GBTspec.usage.ipynb
|
jchowk/pyND
|
gpl-3.0
|
Change RADIO to OPTICAL with built-in function
|
input_filename = 'data/AMIGA-GBT.fits'
object_name = 'RBS2055'
y = GBTspec.from_GBTIDL(input_filename,object_name)
input_filename = 'data/RBS2055_GBT.dat'
x = GBTspec.from_ascii(input_filename)
x.change_veldef()
plt.plot(y.velocity,y.Tb,drawstyle='steps-mid',label='RADIO-->OPTICAL')
plt.plot(x.velocity,x.Tb,drawstyle='steps-mid',label='OPTICAL',zorder=0,linewidth=3)
plt.xlim(-100,50)
plt.legend(loc='upper left')
|
pyND/gbt/docs/GBTspec.usage.ipynb
|
jchowk/pyND
|
gpl-3.0
|
Plotting the numbers
Import numpy
|
import numpy as np
np.random.seed(0)
|
day-1/examples/Random Numbers.ipynb
|
KDD-OpenSource/geox-young-academy
|
mit
|
Generate random numbers.
|
# μ and σ (mean and standard deviation) are assumed to be defined earlier in the notebook.
normal_numbers = np.random.normal(μ, σ, size=100)
print("normal_numbers = {}".format(normal_numbers))
|
day-1/examples/Random Numbers.ipynb
|
KDD-OpenSource/geox-young-academy
|
mit
|
Install plotly from the command line:
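The install command itself is not shown here; in a notebook it would typically be something like the following (assuming pip is available in the notebook's environment):
```python
# Assumed install command; note that the example below actually uses matplotlib.
!pip install plotly
```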
We generate a plot
|
import numpy as np
import matplotlib.pyplot as plt
fig, (ax0, ax1) = plt.subplots(ncols=2, figsize=(8, 4))
ax0.hist(normal_numbers, 20, normed=1, histtype='stepfilled', facecolor='g', alpha=0.75)
ax0.set_title('stepfilled')
# Create a histogram by providing the bin edges (unequally spaced).
bins = [100, 150, 180, 195, 205, 220, 250, 300]
ax1.hist(normal_numbers, bins, normed=1, histtype='bar', rwidth=0.8)
ax1.set_title('unequal bins')
fig.tight_layout()
plt.show()
|
day-1/examples/Random Numbers.ipynb
|
KDD-OpenSource/geox-young-academy
|
mit
|
Exercise 01: Compute the average distance, the diameter, and the average clustering coefficient of the networks below.
|
G1 = nx.erdos_renyi_graph(10, 0.4)
nx.draw_shell(G1)
print("Avg. distance: ", nx.average_shortest_path_length(G1))
print("Diameter: ", nx.diameter(G1))
print("Avg. clustering coef.: ", nx.average_clustering(G1))
G2 = nx.barabasi_albert_graph(10, 3)
nx.draw_shell(G2)
print("Avg. distance: ", nx.average_shortest_path_length(G2))
print("Diameter: ", nx.diameter(G2))
print("Avg. clustering coef.: ", nx.average_clustering(G2))
G3 = nx.barabasi_albert_graph(10, 4)
nx.draw_shell(G3)
print("Avg. distance: ", nx.average_shortest_path_length(G3))
print("Diameter: ", nx.diameter(G3))
print("Avg. clustering coef.: ", nx.average_clustering(G3))
|
Gabarito.ipynb
|
folivetti/CRUFABC
|
mit
|
Exercise 02: Compute the degree, betweenness, and PageRank centralities of the nodes of the networks below:
|
G4 = nx.barabasi_albert_graph(10, 3)
plt.pyplot.figure(figsize=(10, 10))
pos = nx.shell_layout(G4)
nx.draw_networkx_nodes(G4, pos);
nx.draw_networkx_edges(G4, pos);
nx.draw_networkx_labels(G4, pos);
plt.pyplot.axis('off')
print("Degree centralities:")
for ni, dc in nx.degree_centrality(G4).items():
    print(ni, dc)
print("PageRank centralities:")
for ni, dc in nx.pagerank(G4).items():
    print(ni, dc)
print("Betweenness centralities:")
for ni, dc in nx.betweenness_centrality(G4).items():
    print(ni, dc)
|
Gabarito.ipynb
|
folivetti/CRUFABC
|
mit
|
Introducing Gaussian Mixture Models
We previously saw an example of K-Means, a clustering algorithm that is most often fit using an expectation-maximization approach.
Here we'll consider an extension to this which is suitable for both clustering and density estimation.
For example, imagine we have some one-dimensional data in a particular distribution:
|
np.random.seed(2)
x = np.concatenate([np.random.normal(0, 2, 2000),
np.random.normal(5, 5, 2000),
np.random.normal(3, 0.5, 600)])
plt.hist(x, 80, density=True)
plt.xlim(-10, 20);
|
notebooks/04.3-Density-GMM.ipynb
|
jakevdp/sklearn_tutorial
|
bsd-3-clause
|
Gaussian mixture models will allow us to approximate this density:
|
from sklearn.mixture import GaussianMixture as GMM
X = x[:, np.newaxis]
clf = GMM(4, max_iter=500, random_state=3).fit(X)
xpdf = np.linspace(-10, 20, 1000)
density = np.array([np.exp(clf.score([[xp]])) for xp in xpdf])
plt.hist(x, 80, density=True, alpha=0.5)
plt.plot(xpdf, density, '-r')
plt.xlim(-10, 20);
|
notebooks/04.3-Density-GMM.ipynb
|
jakevdp/sklearn_tutorial
|
bsd-3-clause
|
Note that this density is fit using a mixture of Gaussians, which we can examine by looking at the means_, covariances_, and weights_ attributes:
|
clf.means_
clf.covariances_
clf.weights_
plt.hist(x, 80, density=True, alpha=0.3)
plt.plot(xpdf, density, '-r')
for i in range(clf.n_components):
pdf = clf.weights_[i] * stats.norm(clf.means_[i, 0],
np.sqrt(clf.covariances_[i, 0])).pdf(xpdf)
plt.fill(xpdf, pdf, facecolor='gray',
edgecolor='none', alpha=0.3)
plt.xlim(-10, 20);
|
notebooks/04.3-Density-GMM.ipynb
|
jakevdp/sklearn_tutorial
|
bsd-3-clause
|
These individual Gaussian distributions are fit using an expectation-maximization method, much as in K means, except that rather than explicit cluster assignment, the posterior probability is used to compute the weighted mean and covariance.
Somewhat surprisingly, this algorithm provably converges to the optimum (though the optimum is not necessarily global).
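In other words, the fitted density is a weighted sum of Gaussian components,
$$p(x) = \sum_{k=1}^{K} w_k\, \mathcal{N}\!\left(x \mid \mu_k, \Sigma_k\right), \qquad \sum_{k} w_k = 1,$$
with the weights, means, and covariances exposed through the attributes inspected above.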
How many Gaussians?
Given a model, we can use one of several means to evaluate how well it fits the data.
For example, there is the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC).
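Both criteria trade off goodness of fit against model complexity (lower is better):
$$\mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat{L},$$
where $k$ is the number of free parameters, $n$ the number of samples, and $\hat{L}$ the maximized likelihood. Scikit-learn exposes both via the aic() and bic() methods used below.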
|
print(clf.bic(X))
print(clf.aic(X))
|
notebooks/04.3-Density-GMM.ipynb
|
jakevdp/sklearn_tutorial
|
bsd-3-clause
|
Let's take a look at these as a function of the number of gaussians:
|
n_estimators = np.arange(1, 10)
clfs = [GMM(n, max_iter=1000).fit(X) for n in n_estimators]
bics = [clf.bic(X) for clf in clfs]
aics = [clf.aic(X) for clf in clfs]
plt.plot(n_estimators, bics, label='BIC')
plt.plot(n_estimators, aics, label='AIC')
plt.legend();
|
notebooks/04.3-Density-GMM.ipynb
|
jakevdp/sklearn_tutorial
|
bsd-3-clause
|
It appears that for both the AIC and BIC, 4 components is preferred.
Example: GMM For Outlier Detection
GMM is what's known as a Generative Model: it's a probabilistic model from which a dataset can be generated.
One thing that generative models can be useful for is outlier detection: we can simply evaluate the likelihood of each point under the generative model; the points with a suitably low likelihood (where "suitable" is up to your own bias/variance preference) can be labeled outliers.
Let's take a look at this by defining a new dataset with some outliers:
|
np.random.seed(0)
# Add 20 outliers
true_outliers = np.sort(np.random.randint(0, len(x), 20))
y = x.copy()
y[true_outliers] += 50 * np.random.randn(20)
clf = GMM(4, max_iter=500, random_state=0).fit(y[:, np.newaxis])
xpdf = np.linspace(-10, 20, 1000)
density_noise = np.array([np.exp(clf.score([[xp]])) for xp in xpdf])
plt.hist(y, 80, density=True, alpha=0.5)
plt.plot(xpdf, density_noise, '-r')
plt.xlim(-15, 30);
|
notebooks/04.3-Density-GMM.ipynb
|
jakevdp/sklearn_tutorial
|
bsd-3-clause
|
Now let's evaluate the log-likelihood of each point under the model, and plot these as a function of y:
|
log_likelihood = np.array([clf.score_samples([[yy]]) for yy in y])
# log_likelihood = clf.score_samples(y[:, np.newaxis])[0]
plt.plot(y, log_likelihood, '.k');
detected_outliers = np.where(log_likelihood < -9)[0]
print("true outliers:")
print(true_outliers)
print("\ndetected outliers:")
print(detected_outliers)
|
notebooks/04.3-Density-GMM.ipynb
|
jakevdp/sklearn_tutorial
|
bsd-3-clause
|
The algorithm misses a few of these points, which is to be expected (some of the "outliers" actually land in the middle of the distribution!)
Here are the outliers that were missed:
|
set(true_outliers) - set(detected_outliers)
|
notebooks/04.3-Density-GMM.ipynb
|
jakevdp/sklearn_tutorial
|
bsd-3-clause
|
And here are the non-outliers which were spuriously labeled outliers:
|
set(detected_outliers) - set(true_outliers)
|
notebooks/04.3-Density-GMM.ipynb
|
jakevdp/sklearn_tutorial
|
bsd-3-clause
|
Finally, we should note that although all of the above is done in one dimension, GMM does generalize to multiple dimensions, as we'll see in the breakout session.
Other Density Estimators
The other main density estimator that you might find useful is Kernel Density Estimation, which is available via sklearn.neighbors.KernelDensity. In some ways, this can be thought of as a generalization of GMM where there is a gaussian placed at the location of every training point!
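Formally, KDE places a kernel of bandwidth $h$ on each of the $n$ training points and averages them:
$$\hat{p}(x) = \frac{1}{n}\sum_{i=1}^{n} K_h\!\left(x - x_i\right), \qquad K_h(u) \propto \exp\!\left(-\frac{u^2}{2h^2}\right) \text{ for a Gaussian kernel,}$$
with $h$ corresponding to the 0.15 passed to KernelDensity in the cell below.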
|
from sklearn.neighbors import KernelDensity
kde = KernelDensity(0.15).fit(x[:, None])
density_kde = np.exp(kde.score_samples(xpdf[:, None]))
plt.hist(x, 80, density=True, alpha=0.5)
plt.plot(xpdf, density, '-b', label='GMM')
plt.plot(xpdf, density_kde, '-r', label='KDE')
plt.xlim(-10, 20)
plt.legend();
|
notebooks/04.3-Density-GMM.ipynb
|
jakevdp/sklearn_tutorial
|
bsd-3-clause
|
View the whole file:
|
from inspect import getfile
gig_file = getfile(Gig)
gig_file
%pycat $gig_file
|
2 - Working with Django.ipynb
|
bhrutledge/jupyter-django
|
mit
|
View the contents of the app directory:
|
from os import path
!ls -l {path.dirname(gig_file)}
|
2 - Working with Django.ipynb
|
bhrutledge/jupyter-django
|
mit
|
View the output of the graph_models command from Django Extensions:
|
from graphviz import Source
from IPython.display import Image
!manage.py graph_models music news shows -o models.png 2>/dev/null
Image('models.png')
|
2 - Working with Django.ipynb
|
bhrutledge/jupyter-django
|
mit
|
Alternatively, capture the output, and render it as SVG:
|
dot = !manage.py graph_models shows 2>/dev/null
Source(dot.n)
|
2 - Working with Django.ipynb
|
bhrutledge/jupyter-django
|
mit
|
Learn more about IPython's magic functions:
|
%quickref
|
2 - Working with Django.ipynb
|
bhrutledge/jupyter-django
|
mit
|
Answering questions
How often do we play gigs?
|
gigs = Gig.objects.published().past()
gigs.count()
|
2 - Working with Django.ipynb
|
bhrutledge/jupyter-django
|
mit
|
Where did we play last year?
|
[gig for gig in gigs.filter(date__year='2016')]
|
2 - Working with Django.ipynb
|
bhrutledge/jupyter-django
|
mit
|
How many gigs have we played each year?
|
for date in gigs.dates('date', 'year'):
gig_count = gigs.filter(date__year=date.year).count()
print('{}: {}'.format(date.year, gig_count))
|
2 - Working with Django.ipynb
|
bhrutledge/jupyter-django
|
mit
|
What venues have we played?
|
gigs.values('venue').distinct().aggregate(count=Count('*'))
|
2 - Working with Django.ipynb
|
bhrutledge/jupyter-django
|
mit
|
Render a Django template in the notebook:
|
from django.template import Context, Template
from IPython.display import HTML
top_venues = (
gigs.values('venue__name', 'venue__city')
.annotate(gig__count=Count('*'))
.order_by('-gig__count')
[:10]
)
template = Template("""
<table>
<tr>
<th>Venue</th>
<th>City</th>
<th>Gigs</th>
</tr>
{% for v in venues %}
<tr>
<td>{{v.venue__name}}</td>
<td>{{v.venue__city}}</td>
<td>{{v.gig__count}}</td>
</tr>
{% endfor %}
</table>
""")
context = Context(
{'venues': top_venues}
)
HTML(template.render(context))
|
2 - Working with Django.ipynb
|
bhrutledge/jupyter-django
|
mit
|
Test
|
commands = display('inputs/input7.test.txt')
def test():
assert(evaluate('d') == 72)
assert(evaluate('e') == 507)
assert(evaluate('f') == 492)
assert(evaluate('g') == 114)
assert(evaluate('h') == 65412)
assert(evaluate('i') == 65079)
assert(evaluate('x') == 123)
assert(evaluate('y') == 456)
test()
|
2015/ferran/day7.ipynb
|
bbglab/adventofcode
|
mit
|
This approach seems correct, but it creates huge expressions along the way that become harder and harder to parse, so it takes a very long time to reach a final expression that wraps up all the computations. Two ideas to carry on: i) concurrent evaluation of expressions; ii) define lazy variables/functions that collect all the dependencies of the circuit and only start firing upon request.
Approach 2: Concurrent evaluation from known variables.
The solution provided here owes credit to this source: https://www.reddit.com/r/adventofcode/comments/5id6w0/2015_day_7_part_1_python_wrong_answer/
|
import numpy as np
def RSHIFT(a, b):
result = np.uint16(a) >> int(b)
return int(result)
def LSHIFT(a, b):
result = np.uint16(a) << int(b)
return int(result)
def OR(a, b):
result = np.uint16(a) | np.uint16(b)
return int(result)
def AND(a, b):
result = np.uint16(a) & np.uint16(b)
return int(result)
def NOT(a):
result = ~ np.uint16(a)
return int(result)
import csv
def display(input_file):
"""produce a dict mapping variables to expressions"""
commands = []
with open(input_file, 'rt') as f_input:
csv_reader = csv.reader(f_input, delimiter=' ')
for line in csv_reader:
commands.append((line[-1], line[:-2]))
return dict(commands)
def evaluate(wire):
known = {}
while wire not in known:
if wire in known:
break
for k, v in commands.items():
if (len(v) == 1) and (v[0].isnumeric()) and (k not in known):
known[k] = int(v[0])
elif (len(v) == 1) and (v[0] in known) and (k not in known):
known[k] = known[v[0]]
elif ('AND' in v) and (v[0] in known) and (v[2] in known):
known[k] = AND(known[v[0]], known[v[2]])
elif ('AND' in v) and (v[0].isnumeric()) and (v[2] in known):
known[k] = AND(int(v[0]), known[v[2]])
elif ('AND' in v) and (v[0] in known) and (v[2].isnumeric()):
known[k] = AND(known[v[0]], int(v[2]))
elif ('OR' in v) and (v[0] in known) and (v[2] in known):
known[k] = OR(known[v[0]], known[v[2]])
elif ('OR' in v) and (v[0].isnumeric()) and (v[2] in known):
known[k] = OR(int(v[0]), known[v[2]])
elif ('OR' in v) and (v[0] in known) and (v[2].isnumeric()):
known[k] = OR(known[v[0]], int(v[2]))
elif ('LSHIFT' in v) and (v[0] in known):
known[k] = LSHIFT(known[v[0]], v[2])
elif ('RSHIFT' in v) and (v[0] in known):
known[k] = RSHIFT(known[v[0]], v[2])
elif ('NOT' in v) and (v[1] in known):
known[k] = NOT(known[v[1]])
return known[wire]
|
2015/ferran/day7.ipynb
|
bbglab/adventofcode
|
mit
|
Test 0
|
commands = display('inputs/input7.test1.txt')
commands
evaluate('a')
|
2015/ferran/day7.ipynb
|
bbglab/adventofcode
|
mit
|
Test 1
|
commands = display('inputs/input7.test2.txt')
commands
test()
|
2015/ferran/day7.ipynb
|
bbglab/adventofcode
|
mit
|
Solution
|
commands = display('inputs/input7.txt')
evaluate('a')
|
2015/ferran/day7.ipynb
|
bbglab/adventofcode
|
mit
|
Approach 3: With Lazy Variable Wrapper (Python)
|
import csv
import numpy as np
def display(input_file):
"""produce a dict mapping variables to expressions"""
commands = []
with open(input_file, 'rt') as f_input:
csv_reader = csv.reader(f_input, delimiter=' ')
for line in csv_reader:
commands.append((line[-1], line[:-2]))
return dict(commands)
class LazyVar(object):
def __init__(self, func):
self.func = func
self.value = None
def __call__(self):
if self.value is None:
self.value = self.func()
return self.value
binary_command = {'NOT': '~', 'AND': '&', 'OR': '|', 'LSHIFT': '<<', 'RSHIFT': '>>'}
def translate(l):
translated = []
for a in l:
if a in binary_command:
b = binary_command[a]
elif a.isnumeric():
b = 'np.uint16({})'.format(a)
else:
b = '{}.func()'.format('var_' + a)
translated.append(b)
return translated
|
2015/ferran/day7.ipynb
|
bbglab/adventofcode
|
mit
|
Test
|
commands = display('inputs/input7.test2.txt')
for k, v in commands.items():
command_str = '{0} = LazyVar(lambda: {1})'.format('var_' + k, ''.join(translate(v)))
print(command_str)
exec(command_str)
def test():
assert(var_d.func() == 72)
assert(var_e.func() == 507)
assert(var_f.func() == 492)
assert(var_g.func() == 114)
assert(var_h.func() == 65412)
assert(var_i.func() == 65079)
assert(var_x.func() == 123)
assert(var_y.func() == 456)
test()
|
2015/ferran/day7.ipynb
|
bbglab/adventofcode
|
mit
|
Although the approach passes the test, it does not end in reasonable time for the full input.
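One plausible cause of the slowdown, offered as a guess rather than a fact from the notebook: translate() emits var_x.func() calls, which bypass the caching in LazyVar.__call__, so shared wires are recomputed once per use. A minimal sketch of the tweak, with everything else unchanged:

def translate(l):
    translated = []
    for a in l:
        if a in binary_command:
            b = binary_command[a]
        elif a.isnumeric():
            b = 'np.uint16({})'.format(a)
        else:
            # hypothetical tweak: emit the memoized call var_x() instead of var_x.func()
            b = '{}()'.format('var_' + a)
        translated.append(b)
    return translated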
Approach 4: With Lazy Evaluation in R
The approach now is to exploit the lazy evaluation capabilities in R. So we leverage Python to create an R script that does the job.
|
def rscript_command(var, l):
vocab = {'AND' : 'bitwAnd',
'OR' : 'bitwOr',
'LSHIFT' : 'bitwShiftL',
'RSHIFT' : 'bitwShiftR'}
if len(l) == 3:
func = vocab[l[1]]
arg1 = l[0] if l[0].isdigit() else 'var_' + l[0] + '()'
arg2 = l[2] if l[2].isdigit() else 'var_' + l[2] + '()'
return 'var_{0} <- function(a={1}, b={2})'.format(var, arg1, arg2) + ' {' + '{0}(a,b)'.format(func) + '}'
elif len(l) == 2:
func = 'bitwNot'
arg1 = l[1] if l[1].isdigit() else 'var_' + l[1] + '()'
return 'var_{0} <- function(a={1})'.format(var, arg1) + ' {' + '{0}(a)'.format(func) + '}'
else:
arg1 = l[0] if l[0].isdigit() else 'var_' + l[0] + '()'
return 'var_{0} <- function(a={1})'.format(var, arg1) + ' {' + 'a' + '}'
def generate_rscript(commands, target):
with open('day7_commands.R', 'wt') as f:
for k, v in commands.items():
f.write(rscript_command(k, v)+'\n')
f.write('var_' + target + '()')
|
2015/ferran/day7.ipynb
|
bbglab/adventofcode
|
mit
|
Test
|
commands = display('inputs/input7.test2.txt')
generate_rscript(commands, 'd')
! cat day7_commands.R
!Rscript day7_commands.R
|
2015/ferran/day7.ipynb
|
bbglab/adventofcode
|
mit
|
Solution
|
commands = display('inputs/input7.txt')
generate_rscript(commands, 'a')
! cat day7_commands.R
!Rscript day7_commands.R
|
2015/ferran/day7.ipynb
|
bbglab/adventofcode
|
mit
|
Although this approach is more natural than defining a LazyWrapper in Python, it takes quite a lot of time to execute, so this is not a very cool solution after all.
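For reference, a recursive evaluator with memoization sidesteps both problems. This is a minimal sketch, not part of the original notebook, assuming the same commands dict produced by display():

from functools import lru_cache

@lru_cache(maxsize=None)
def resolve(wire):
    if wire.isnumeric():
        return int(wire)
    expr = commands[wire]
    if len(expr) == 1:                    # plain assignment: "x -> y"
        return resolve(expr[0])
    if expr[0] == 'NOT':                  # unary: "NOT x -> y"
        return ~resolve(expr[1]) & 0xFFFF
    a, op, b = expr                       # binary: "x OP y -> z"
    if op == 'AND':
        return resolve(a) & resolve(b)
    if op == 'OR':
        return resolve(a) | resolve(b)
    if op == 'LSHIFT':
        return (resolve(a) << int(b)) & 0xFFFF
    return resolve(a) >> int(b)           # RSHIFT

resolve('a') should finish almost instantly on the full input; call resolve.cache_clear() after modifying commands (for example, for part 2).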
Day 7.2
|
commands = display('inputs/input7.txt')
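# Part 2: override wire b with the signal obtained for wire a in part 1 (presumably the 16076 below), then re-evaluate a.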
commands['b'] = ['16076']
evaluate('a')
|
2015/ferran/day7.ipynb
|
bbglab/adventofcode
|
mit
|
We'll re-use some of our code from before to visualize the data and remind us what
we're looking at:
|
%matplotlib inline
from matplotlib import pyplot as plt
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the digits: each image is 8x8 pixels
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
# label the image with the target value
ax.text(0, 7, str(digits.target[i]))
|
_doc/notebooks/sklearn_ensae_course/03_supervised_classification.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
Visualizing the Data
A good first-step for many problems is to visualize the data using a
Dimensionality Reduction technique. We'll start with the
most straightforward one, Principal Component Analysis (PCA).
PCA seeks orthogonal linear combinations of the features which show the greatest
variance, and as such, can help give you a good idea of the structure of the
data set. Here we'll use PCA with the randomized SVD solver (the successor to the old RandomizedPCA), because it's faster for large N.
|
from sklearn.decomposition import PCA
pca = PCA(n_components=2, svd_solver="randomized")
proj = pca.fit_transform(digits.data)
plt.scatter(proj[:, 0], proj[:, 1], c=digits.target)
plt.colorbar();
|
_doc/notebooks/sklearn_ensae_course/03_supervised_classification.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
Question: Given these projections of the data, which numbers do you think
a classifier might have trouble distinguishing?
Gaussian Naive Bayes Classification
For most classification problems, it's nice to have a simple, fast, go-to
method to provide a quick baseline classification. If the simple and fast
method is sufficient, then we don't have to waste CPU cycles on more complex
models. If not, we can use the results of the simple method to give us
clues about our data.
One good method to keep in mind is Gaussian Naive Bayes. It fits a Gaussian distribution to each feature of each class independently, and uses this to quickly give a rough classification. It is generally not sufficiently accurate for real-world data, but can perform surprisingly well, for instance on text data.
|
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
# split the data into training and validation sets
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)
# train the model
clf = GaussianNB()
clf.fit(X_train, y_train)
# use the model to predict the labels of the test data
predicted = clf.predict(X_test)
expected = y_test
|
_doc/notebooks/sklearn_ensae_course/03_supervised_classification.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
Question: why did we split the data into training and validation sets?
Let's plot the digits again with the predicted labels to get an idea of
how well the classification is working:
|
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the digits: each image is 8x8 pixels
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(X_test.reshape(-1, 8, 8)[i], cmap=plt.cm.binary,
interpolation='nearest')
# label the image with the target value
if predicted[i] == expected[i]:
ax.text(0, 7, str(predicted[i]), color='green')
else:
ax.text(0, 7, str(predicted[i]), color='red')
|
_doc/notebooks/sklearn_ensae_course/03_supervised_classification.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
Quantitative Measurement of Performance
We'd like to measure the performance of our estimator without having to resort
to plotting examples. A simple method might be to simply compare the number of
matches:
|
matches = (predicted == expected)
print(matches.sum())
print(len(matches))
matches.sum() / float(len(matches))
|
_doc/notebooks/sklearn_ensae_course/03_supervised_classification.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
We see that the large majority of the test predictions match the expected labels. But there are other
more sophisticated metrics that can be used to judge the performance of a classifier:
several are available in the sklearn.metrics submodule.
One of the most useful metrics is the classification_report, which combines several
measures and prints a table with the results:
|
from sklearn import metrics
from pandas import DataFrame
DataFrame(metrics.classification_report(expected, predicted, output_dict=True)).T
|
_doc/notebooks/sklearn_ensae_course/03_supervised_classification.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
Another enlightening metric for this sort of multi-class classification
is a confusion matrix: it helps us visualize which labels are
being interchanged in the classification errors:
|
DataFrame(metrics.confusion_matrix(expected, predicted))
|
_doc/notebooks/sklearn_ensae_course/03_supervised_classification.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
<p>
Excellent! This hardly looks like a regular function. From now on we will call what we have created a "<dfn>generator function</dfn>".<br>
But what is that <code>yield</code>, and what is going on in there?
</p>
<p>
Before we dig any deeper, let's try calling the function and see what it returns:
</p>
|
print(silly_generator())
|
week05/3_Generators.ipynb
|
PythonFreeCourse/Notebooks
|
mit
|