itchat.run()
wechat_tool_py3_local/terminal-script-py/lesson_6_terminal_py3.ipynb
telescopeuser/workshop_blog
mit
# Interrupt the kernel, then log out
# itchat.logout()  # safe logout
Congratulations! You have completed Lesson 6: Interactive Conversational Virtual Assistant Applications / Intelligent Process Automations

- Virtual Worker: When Chat-bot meets RPA-bot for end-to-end mortgage loan application automation
- Virtual Worker: Conversational automation using text/message commands
- Virtual Worker: Conversational automation using speech/voice commands
- Virtual Worker: Conversational automation with multiple languages

<img src='../reference/WeChat_SamGu_QR.png' width=80% style="float: left;">

<span style="color:blue">Exercise / Workshop Enhancement</span> Use Cloud AI APIs

<span style="color:blue">Install the client library</span> for Virtual Worker: Conversational automation using speech/voice commands [ Hints ]
# !pip install --upgrade google-cloud-speech

# Imports the Google Cloud client library
# from google.cloud import speech
# from google.cloud.speech import enums
# from google.cloud.speech import types

# !pip install --upgrade google-cloud-texttospeech

# Imports the Google Cloud client library
# from google.cloud import texttospeech
<span style="color:blue">Exercise / Workshop Enhancement</span> Use Cloud AI APIs

<span style="color:blue">Install the client library</span> for Virtual Worker: Conversational automation with multiple languages [ Hints ]
# !pip install --upgrade google-cloud-translate

# Imports the Google Cloud client library
# from google.cloud import translate
You have to change this variable each time the EC2 server stops or restarts. Please email/text me to get the new IP address.
ip = '54.236.23.221'
.ipynb_checkpoints/Mongo_Connect-checkpoint.ipynb
georgetown-analytics/yelp-classification
mit
Create the connection to the MongoDB server. The first argument is the IP we've supplied above and the second is the port (TCP) through which we'll be talking to the EC2 server and the MongoDB instance running inside it.
conn = MongoClient(ip, 27017)
Take a look at the databases available in our MongoDB instance
conn.database_names()
db = conn.get_database('cleaned_data')
Print the collection names
db.collection_names()
Let's grab a subset of reviews from the academic reviews collection. Suppose we want a random set of 5000, all from after 2010, from each city in our dataset.
collection = db.get_collection('academic_reviews')

# I cheated and just had a list of all the states.
# You should try to find a unique list of all the states from mongoDB as an exercise.
states = [u'OH', u'NC', u'WI', u'IL', u'AZ', u'NV']
First, I'm going to take a look at what one of the reviews looks like. I could easily have done something wrong earlier, leaving the output as pure garbage, so this is a good sanity check to make.
collection.find()[0]
Sweet, this is pretty much what we were expecting. Let's pull out the date field from this entry. We're going to filter on this in a second. Depending on its type, we're going to need to develop different strategies in constructing the logical statements that filter for the date.
print collection.find()[0]['date']
print type(collection.find()[0]['date'])
Dang, it's unicode. Unicode strings are a pain to deal with when what we really want is a date, so let's try converting it to a more usable Python format (datetime). We care about the relative difference between dates, and comparing raw strings doesn't give the computer a meaningful notion of time, so we have to transform the value into a quantitative measure of time.
from datetime import datetime

string_year = collection.find()[0]['date'][0:4]
year = datetime.strptime(string_year, '%Y')
year
Note that the datetime above is given as January 1st, 2014. We only gave it a year, so it just defaults to the first day of that year. That's all good though: we just want stuff after 2010, so we define the beginning of 2010 to be January 1st, 2010.
threshold_year = datetime.strptime('2010', '%Y')
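To confirm the defaulting behaviour described above, a quick standalone check:

```python
from datetime import datetime

# Parsing a bare year defaults month and day to January 1st,
# so "after 2010" becomes a simple datetime comparison.
year_2014 = datetime.strptime('2014', '%Y')
threshold = datetime.strptime('2010', '%Y')

print(year_2014)             # 2014-01-01 00:00:00
print(year_2014 >= threshold)  # True
```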
Running the code below is going to take a little while. It's essentially doing the following:

For each review in the reviews collection:
- If the review comes from one of our states:
    - Check whether the review was made after 2010:
        - If it was, append it to the overall reviews dictionary.
        - If it wasn't, proceed to the next review.
reviews_dict = {}
num_reviews = 50000

for obj in collection.find():
    if obj['state'] in states:
        # Skip states for which we already have enough reviews.
        try:
            if len(reviews_dict[obj['state']]) > num_reviews:
                continue
        except KeyError:
            pass
        if datetime.strptime(obj['date'][0:4], '%Y') >= threshold_year:
            del obj['_id']
            try:
                reviews_dict[obj['state']].append(obj)
            except KeyError:
                reviews_dict[obj['state']] = [obj]
So the new dictionary we created is structured with each state being a key and each entry being a list of reviews. Let's take a look at what Ohio looks like:
reviews_dict['OH'][0:50]
It's good practice to save whatever data you're using in a more permanent location if you plan on using it again. That way, we don't have to load up the EC2 server and wait for our local machines to run the above filtering process.
import json

with open('cleaned_reviews_states_2010.json', 'w+') as outfile:
    json.dump(reviews_dict, outfile)
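Reloading later is just the reverse: json.load gives back the same dictionary. A minimal round-trip sketch with a throwaway file (the filename and sample data here are illustrative):

```python
import json
import os
import tempfile

reviews = {'OH': [{'stars': 5, 'text': 'Great pierogi'}]}

path = os.path.join(tempfile.mkdtemp(), 'cleaned_reviews.json')
with open(path, 'w+') as outfile:
    json.dump(reviews, outfile)

with open(path) as infile:
    reloaded = json.load(infile)

print(reloaded == reviews)  # True
```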
Conductor Loss
from pylab import *

for wg in wg_list:
    wg.frequency.plot(rf.np_2_db(wg.alpha), label=wg.name)
legend()
xlabel('Frequency(GHz)')
ylabel('Loss (dB/m)')
title('Loss in Rectangular Waveguide (Au)')
xlim(100, 1300)

resistivity_list = linspace(1, 10, 5) * 1e-8  # ohm meter
for rho in resistivity_list:
    wg = RectangularWaveguide(f_wr1.copy(), a=10*mil, b=5*mil, rho=rho)
    wg.frequency.plot(rf.np_2_db(wg.alpha), label=r'$ \rho $=%.e$ \Omega m$' % rho)
legend()
#ylim(.0, 20)
xlabel('Frequency(GHz)')
ylabel('Loss (dB/m)')
title('Loss vs. Resistivity in\nWR-1.0 Rectangular Waveguide');
doc/source/examples/networktheory/Properties of Rectangular Waveguides.ipynb
Ttl/scikit-rf
bsd-3-clause
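rf.np_2_db used above converts the attenuation constant wg.alpha from nepers/m to dB/m. Assuming scikit-rf follows the standard neper-to-decibel definition (1 Np = 20/ln 10 ≈ 8.686 dB), a standalone equivalent is:

```python
import math

def np_2_db(x):
    # Nepers to decibels: dB = Np * 20 / ln(10) ≈ Np * 8.686
    return x * 20 / math.log(10)

print(round(np_2_db(1.0), 3))  # 8.686
```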
Phase Velocity
for wg in wg_list:
    wg.frequency.plot(100*wg.v_p.real/c, label=wg.name)
legend()
ylim(50, 200)
xlabel('Frequency(GHz)')
ylabel('Phase Velocity (\%c)')
title('Phase Velocity in Rectangular Waveguide');

for wg in wg_list:
    plt.plot(wg.frequency.f_scaled[1:], 100/c*diff(wg.frequency.w)/diff(wg.beta), label=wg.name)
legend()
ylim(50, 100)
xlabel('Frequency(GHz)')
ylabel('Group Velocity (\%c)')
title('Group Velocity in Rectangular Waveguide');
Propagation Constant
for wg in wg_list + [freespace]:
    wg.frequency.plot(wg.beta, label=wg.name)
legend()
xlabel('Frequency(GHz)')
ylabel('Propagation Constant (rad/m)')
title('Propagation Constant \nin Rectangular Waveguide')
semilogy();
3. Enter CM360 Report Emailed To BigQuery Recipe Parameters

- The person executing this recipe must be the recipient of the email.
- Schedule a CM report to be sent to ****, or set up a redirect rule to forward a report you already receive.
- The report must be sent as an attachment.
- Ensure this recipe runs after the report is emailed each day.
- Give a regular expression to match the email subject.
- Configure the destination in BigQuery to write the data.

Modify the values below for your use case (can be done multiple times), then click play.
FIELDS = {
    'auth_read':'user',  # Credentials used for reading data.
    'email':'',  # Email address report was sent to.
    'subject':'.*',  # Regular expression to match subject. Double escape backslashes.
    'dataset':'',  # Existing dataset in BigQuery.
    'table':'',  # Name of table to be written to.
    'is_incremental_load':False,  # Append report data to table based on date column, de-duplicates.
}

print("Parameters Set To: %s" % FIELDS)
colabs/email_cm_to_bigquery.ipynb
google/starthinker
apache-2.0
4. Execute CM360 Report Emailed To BigQuery This does NOT need to be modified unless you are changing the recipe, click play.
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields

TASKS = [
  {
    'email': {
      'auth': {'field': {'name': 'auth_read', 'kind': 'authentication', 'order': 1, 'default': 'user', 'description': 'Credentials used for reading data.'}},
      'read': {
        'from': 'noreply-cm@google.com',
        'to': {'field': {'name': 'email', 'kind': 'string', 'order': 1, 'default': '', 'description': 'Email address report was sent to.'}},
        'subject': {'field': {'name': 'subject', 'kind': 'string', 'order': 2, 'default': '.*', 'description': 'Regular expression to match subject. Double escape backslashes.'}},
        'attachment': '.*'
      },
      'write': {
        'bigquery': {
          'dataset': {'field': {'name': 'dataset', 'kind': 'string', 'order': 3, 'default': '', 'description': 'Existing dataset in BigQuery.'}},
          'table': {'field': {'name': 'table', 'kind': 'string', 'order': 4, 'default': '', 'description': 'Name of table to be written to.'}},
          'header': True,
          'is_incremental_load': {'field': {'name': 'is_incremental_load', 'kind': 'boolean', 'order': 6, 'default': False, 'description': 'Append report data to table based on date column, de-duplicates.'}}
        }
      }
    }
  }
]

json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
<font color='mediumblue'> What are classes?

A way of organising your code: data is inherently linked to the things you can do with it.

Pros
* Can do everything you can do without classes, but the idea is to make it easier.
* Classes encourage code reuse through a concept called "inheritance" - we will discuss this later.

Cons
* Can make your code more complicated and, without careful thinking, harder to maintain.
* More work for the developer.

<font color='mediumblue'> Start by defining some terminology - Classes vs Objects vs Instances

Often used interchangeably, but they are different concepts.
* A Class is like a template - you could consider the class "Car".
* An object is a particular occurrence of a class - so, for example, you could have "Ford Mondeo", "Vauxhall Astra", "Lamborghini Gallardo" be objects of type "Car".
* An instance is a unique single object.

<font color='mediumblue'> Where are classes used in Python?

Everywhere! You've been using classes all of the time, without even knowing it. Everything in Python is an object. You have some data (number, text, etc.) with some methods (or functions) which are internal to the object, and which you can use on that data. Let's look at a few examples...
a = 10.1 type(a)
basics_B/ObjectOriented/solutions/Classes for Basics B.ipynb
ngcm/summer-academy-2017-basics
mit
How can I see what methods an object of type float has?
print(dir(a))  # Show all of the methods of a
a.is_integer()
<font color='midnightblue'> Aside - What do all those underscores mean?

They're hidden methods - we'll talk more about these later.

<font color='mediumblue'> Creating a class

Define some key things:
* self - 'self' is a special type of variable which can be used inside the class to refer to itself.
* Methods - functions which are part of a class, and which have access to data held by the class.
* A constructor - a special method which is called when you create an instance of a class. In Python this method must be called "__init__".
* A destructor - a special method which is called when you destroy an instance of a class.

Aside: If you're a C++/Java programmer, 'self' is exactly equivalent to 'this', except that methods must declare self explicitly, as it is passed in implicitly as the first argument of any method call in Python.
# Create a class by using the class keyword followed by a name.
class MyClass:
    # The 'self' variable ALWAYS needs to be the first variable given to any class method.
    def __init__(self, message):
        # Here we create a new variable inside "self" called "mess" and save the argument "message"
        # passed from the constructor to it.
        self.mess = message

    def say(self):
        print(self.mess)

    # Don't normally need to write a destructor - one is created by Python automatically. However we do it here
    # just to show you that it can be done:
    def __del__(self):
        print("Deleting object of type MyClass")
<font color='mediumblue'> Using the class

Use the same syntax as we use to call a function, BUT the arguments get passed in to the "__init__" method. Note that you ignore the self argument, as Python sorts this out.
a = MyClass("Hello") print(a.mess)
How do I access data stored in the class? With the ".", followed by the name.
# But, we also defined a method called "say" which does the same thing: a.say()
What happens though if we reuse the variable name 'a'?

Aside:
* Your computer has Random Access Memory (RAM) which is used to store information.
* Whenever, in a programming language, you tell the language to store something, you effectively create a 'box' of memory to put those values in.
* The location of the specific 'box' is known as a 'memory address'.
* You can see the memory address of a Python object quite easily:
print(a)
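In CPython, id() reports that memory address directly. A small illustrative sketch (using a list rather than our class): mutating an object in place keeps it in the same 'box' of memory.

```python
nums = [1, 2]
address = id(nums)   # CPython: the object's memory address

nums.append(3)       # mutating in place keeps the same 'box' of memory
print(id(nums) == address)  # True
```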
So, what happens if we either choose to store something else under the name 'a', or tell Python to delete it?
del a
a = MyClass('Hello')
a = 2
Why bother? This can be achieved without classes very easily:
mess = "Hello" def say(mess): print(mess) say(mess)
Need a better example! How about a Simulation class?
* Write once, but can take different parameters.
* Can include data analysis methods as well.

<font color='mediumblue'> Consider a 1-D box of some length:

What information does it need to know about itself?
* How big is the box?
class Box:
    def __init__(self, length):
        self.length = length
What we're going to try and do is add particles to the box, which have some properties: * An initial position. * An initial velocity $r(t + \delta t) \approx r(t) + v(t)\delta t$
class Particle:
    def __init__(self, r0, v0):
        """
        r0 = initial position
        v0 = initial speed
        """
        self.r = r0
        self.v = v0

    def step(self, dt, L):
        """
        Move the particle
        dt = timestep
        L = length of the containing box
        """
        self.r = self.r + self.v * dt
        if self.r >= L:
            self.r -= L
        elif self.r < 0:
            self.r += L
Let's just check this: if a Particle in a box of length 10 has r0 = 0 and v0 = 5, then after one step of length 3 it has moved to 15, which wraps around to position 5:
p = Particle(0, 5)
p.step(3, 10)
print(p.r)
Let's add a place to store the particles to the box class, and add a method to add particles to the box:
class Box:
    def __init__(self, length):
        self.length = length
        self.particles = []

    def add_particle(self, particle):
        self.particles.append(particle)
<font color='mediumblue'> Now let's get you to do something...

Tasks (30-40 minutes):
1) Add a method that calculates the average position of Particles in the box (Hint: you might have to think about what to do when there are no particles!)
2) Add a method that makes all of the particles step forwards, and keep track of how much time has passed in the box class.
3) Add a method which plots the current position of the particles in the box.
4) Write a method that writes the current positions and velocities to a CSV file.
5) Write a method that can load a CSV file of positions and velocities, create particles with these and then add them to the Box list of particles. (Hint: Look up the documentation for the module 'csv')
import csv
import matplotlib.pyplot as plt

class Box:
    def __init__(self, length):
        self.length = length
        self.particles = []
        self.t = 0

    def add_particle(self, particle):
        self.particles.append(particle)

    def step(self, dt):
        for particle in self.particles:
            particle.step(dt, self.length)
        self.t += dt  # task 2: keep track of how much time has passed

    def write(self, filename):
        f = open(filename, 'w')
        for particle in self.particles:
            f.write('{},{}\n'.format(particle.r, particle.v))
        f.close()

    def plot(self):
        for particle in self.particles:
            plt.scatter(particle.r, 0)

    def load(self, filename):
        f = open(filename, 'r')
        csvfile = csv.reader(f)
        for position, velocity in csvfile:
            p = Particle(position, velocity)
            self.add_particle(p)

b = Box(10)
for i in range(10):
    p = Particle(i/2, i/3)
    b.add_particle(p)

b.write('test.csv')
!cat test.csv
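Task 1 (the average position) isn't implemented in the solution cell above; a standalone sketch that handles the empty-box case (the stand-in particle class here is just for illustration):

```python
def mean_position(particles):
    """Average position of the particles; None for an empty box (task 1)."""
    if not particles:
        return None
    return sum(p.r for p in particles) / len(particles)

class FakeParticle:
    # Stand-in with just the attribute mean_position needs.
    def __init__(self, r):
        self.r = r

print(mean_position([]))                                  # None
print(mean_position([FakeParticle(2), FakeParticle(4)]))  # 3.0
```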
<font color='mediumblue'> Class Properties

Properties can be used to do interesting things. They are special functions that are part of a class, which we mark with a 'decorator' - '@property'. Let's adjust the Particle class we used to make its data members properties of the class. We also need to write 'setter' methods to set the data members.
class Particle:
    def __init__(self, r0, v0):
        """
        r0 = initial position
        v0 = initial speed
        """
        self._r = r0
        self._v = v0

    def step(self, dt, L):
        """
        Move the particle
        dt = timestep
        L = length of the containing box
        """
        self._r = self._r + self._v * dt
        if self._r >= L:
            self._r -= L
        elif self._r < 0:
            self._r += L

    @property
    def r(self):
        return self._r

    @r.setter
    def r(self, value):
        self._r = value

    @property
    def v(self):
        return self._v

    @v.setter
    def v(self, value):
        self._v = value
<font color='midnightblue'> Why bother? It looks the same when we use it!

It is well known in programming that 'an interface is a contract'. You might want to at some point rewrite a large portion of the underlying data - how it is stored, for example. If you do this without using properties to access the data, you then need to go through all code that uses this class and change it to use the new variable names.

<font color='mediumblue'> Inheritance

This is the last part of the course on Classes, but also one of the main reasons for using classes! Inheritance allows you to reuse parts of the code, but change some of the methods. Let's see how it might be useful...
class SlowParticle(Particle):
    def __init__(self, r0, v0, slowing_factor):
        Particle.__init__(self, r0, v0)
        self.factor = slowing_factor

    def step(self, dt, L):
        """
        Move the particle, but change it so that if the particle bounces off
        of a wall, it slows down by the given factor
        dt = timestep
        L = length of the containing box
        """
        self._r = self._r + self._v * dt
        if self._r >= L:
            self._r -= L
            self._v /= self.factor
        elif self._r < 0:
            self._r += L
            self._v /= self.factor
Here we have inherited most of the class Particle, and just changed the method 'step' to do something different. Because we kept the properties the same, we can use this class everywhere that we could use Particle - our Box class can take a mixture of Particles and SlowParticles.

<font color='mediumblue'> Magic Methods

Remember earlier, when we did:
a = 1.0
print(dir(a))
Notice that there is a method "__add__" - we can define these special methods to allow our class to do things that you can ordinarily do with built in types.
import numpy as np

class Box:
    def __init__(self, length):
        self.length = length
        self.particles = []
        self.t = 0

    def __add__(self, other):
        if self.length == other.length:
            b = Box(self.length)
            for p in self.particles:
                b.add_particle(p)
            for p in other.particles:
                b.add_particle(p)
            return b
        else:
            raise ValueError('To add two boxes they must be of the same length')

    def mean_position(self):
        l = np.sum([p.r for p in self.particles])/len(self.particles)
        return l

    def add_particle(self, particle):
        self.particles.append(particle)

    def step(self, dt):
        for particle in self.particles:
            particle.step(dt, self.length)

    def write(self, filename):
        f = open(filename, 'w')
        for particle in self.particles:
            f.write('{},{}\n'.format(particle.r, particle.v))
        f.close()

    def plot(self):
        for particle in self.particles:
            plt.scatter(particle.r, 0)

    def load(self, filename):
        f = open(filename, 'r')
        csvfile = csv.reader(f)
        for position, velocity in csvfile:
            p = Particle(position, velocity)
            self.add_particle(p)

    def __repr__(self):
        if len(self.particles) == 1:
            return 'Box containing 1 particle'
        else:
            return 'Box containing {} particles'.format(len(self.particles))
Now we've created an '__add__' method, we can create two boxes and add them together!
a = Box(10)
a.add_particle(Particle(10, 10))

b = Box(10)
b.add_particle(Particle(15, 10))

c = a + b
print(a)
print(b)
print(c)
Looks good! But hang on...
a.mean_position(), b.mean_position(), c.mean_position()

a.step(0.5)
a.mean_position(), b.mean_position(), c.mean_position()
Why has the mean position of particles in Box C changed? Look at the memory address of the particles:
a.particles, c.particles
The boxes are pointing to the SAME particles! If we don't want this to happen, we need to write a 'copy' constructor for the class - a function which knows how to create an identical copy of the particle! We can do this by using the 'deepcopy' function in the 'copy' module, and redefining the Particle and SlowParticle classes:
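Before applying it to particles, the difference between sharing references and deep-copying can be seen with plain lists:

```python
import copy

inner = [1, 2]
shared = [inner]               # 'shared' holds a reference to the SAME list
deep = copy.deepcopy([inner])  # 'deep' holds an independent copy

inner.append(3)
print(shared[0])  # [1, 2, 3] - the mutation is visible through the shared reference
print(deep[0])    # [1, 2]    - the deep copy is unaffected
```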
import copy

class Particle:
    def __init__(self, r0, v0):
        """
        r0 = initial position
        v0 = initial speed
        """
        self.r = r0
        self.v = v0

    def step(self, dt, L):
        """
        Move the particle
        dt = timestep
        L = length of the containing box
        """
        self.r = self.r + self.v * dt
        if self.r >= L:
            self.r -= L
        elif self.r < 0:
            self.r += L

    def copy(self):
        return copy.deepcopy(self)
Then, we should change the Box class's 'add_particle' method to use this copy operation rather than just appending the child particles of the existing boxes:
class Box:
    def __init__(self, length):
        self.length = length
        self.particles = []
        self.t = 0

    def __add__(self, other):
        if self.length == other.length:
            b = Box(self.length)
            for p in self.particles:
                b.add_particle(p)
            for p in other.particles:
                b.add_particle(p)
            return b
        else:
            raise ValueError('To add two boxes they must be of the same length')

    def mean_position(self):
        l = np.sum([p.r for p in self.particles])/len(self.particles)
        return l

    def add_particle(self, particle):
        self.particles.append(particle.copy())

    def step(self, dt):
        for particle in self.particles:
            particle.step(dt, self.length)

    def write(self, filename):
        f = open(filename, 'w')
        for particle in self.particles:
            f.write('{},{}\n'.format(particle.r, particle.v))
        f.close()

    def plot(self):
        for particle in self.particles:
            plt.scatter(particle.r, 0)

    def load(self, filename):
        f = open(filename, 'r')
        csvfile = csv.reader(f)
        for position, velocity in csvfile:
            p = Particle(position, velocity)
            self.add_particle(p)

    def __repr__(self):
        if len(self.particles) == 1:
            return 'Box containing 1 particle'
        else:
            return 'Box containing {} particles'.format(len(self.particles))

a = Box(10)
a.add_particle(Particle(10, 10))

b = Box(10)
b.add_particle(Particle(15, 10))

c = a + b
print(a)
print(b)
print(c)
Predict with Model (CLI)
%%bash pio predict \ --model-test-request-path ./data/test_request.json
jupyterhub.ml/notebooks/train_deploy/python3/python3_zscore/04_PredictModel.ipynb
fluxcapacitor/source.ml
apache-2.0
Predict with Model under Mini-Load (CLI) This is a mini load test to provide instant feedback on relative performance.
%%bash pio predict_many \ --model-test-request-path ./data/test_request.json \ --num-iterations 5
Predict with Model (REST) Setup Prediction Inputs
import requests

model_type = 'python3'
model_namespace = 'default'
model_name = 'python3_zscore'
model_version = 'v1'

deploy_url = 'http://prediction-%s.community.pipeline.io/api/v1/model/predict/%s/%s/%s/%s' % (model_type, model_type, model_namespace, model_name, model_version)
print(deploy_url)

with open('./data/test_request.json', 'rb') as fh:
    model_input_binary = fh.read()

response = requests.post(url=deploy_url, data=model_input_binary, timeout=30)
print("Success!\n\n%s" % response.text)
Imports
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
import os

# Use PrettyTensor to simplify Neural Network construction.
import prettytensor as pt
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
tf.__version__
Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
print("Size of:") print("- Training-set:\t\t{}".format(len(data.train.labels))) print("- Test-set:\t\t{}".format(len(data.test.labels))) print("- Validation-set:\t{}".format(len(data.validation.labels)))
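For reference, the conventional split shipped with the TensorFlow tutorial loader is 55,000 training / 5,000 validation / 10,000 test images (these exact counts are an assumption here; verify them against the printed output above):

```python
# Conventional MNIST split in the TF tutorials (assumed; verify against the printout above)
split = {'train': 55000, 'validation': 5000, 'test': 10000}
print(sum(split.values()))  # 70000 images in total
```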
The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now.
data.test.cls = np.argmax(data.test.labels, axis=1)
data.validation.cls = np.argmax(data.validation.labels, axis=1)
Data Dimensions The data dimensions are used in several places in the source-code below. They are defined once so we can use these named variables instead of magic numbers throughout.
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28

# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size

# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)

# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1

# Number of classes, one class for each of 10 digits.
num_classes = 10
Helper-function for plotting images Function used to plot 9 images in a 3x3 grid, writing the true and predicted classes below each image.
def plot_images(images, cls_true, cls_pred=None):
    assert len(images) == len(cls_true) == 9

    # Create figure with 3x3 sub-plots.
    fig, axes = plt.subplots(3, 3)
    fig.subplots_adjust(hspace=0.3, wspace=0.3)

    for i, ax in enumerate(axes.flat):
        # Plot image.
        ax.imshow(images[i].reshape(img_shape), cmap='binary')

        # Show true and predicted classes.
        if cls_pred is None:
            xlabel = "True: {0}".format(cls_true[i])
        else:
            xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])

        # Show the classes as the label on the x-axis.
        ax.set_xlabel(xlabel)

        # Remove ticks from the plot.
        ax.set_xticks([])
        ax.set_yticks([])

    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()
Plot a few images to see if the data is correct
# Get the first images from the test-set.
images = data.test.images[0:9]

# Get the true classes for those images.
cls_true = data.test.cls[0:9]

# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
TensorFlow Graph

The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computational graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.

TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions, so the gradient of the entire graph can be calculated using the chain-rule for derivatives.

TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow, called TPUs (Tensor Processing Units), which are even faster than GPUs.

A TensorFlow graph consists of the following parts, which will be detailed below:

* Placeholder variables used for inputting data to the graph.
* Variables that are going to be optimized so as to make the convolutional network perform better.
* The mathematical formulas for the convolutional network.
* A loss measure that can be used to guide the optimization of the variables.
* An optimization method which updates the variables.

In addition, the TensorFlow graph may also contain various debugging statements, e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial.

Placeholder variables

Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.

First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images, with each image being a vector of length img_size_flat.
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
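The `-1` inference can be checked with plain NumPy, whose `reshape` follows the same rules (the sizes below mirror the MNIST setup assumed by the tutorial):

```python
import numpy as np

img_size = 28
num_channels = 1
img_size_flat = img_size * img_size

# A fake batch of two flattened images, as held by the placeholder x.
x_batch = np.zeros((2, img_size_flat), dtype=np.float32)

# Reshape with -1 so the number of images is inferred automatically,
# just like tf.reshape(x, [-1, img_size, img_size, num_channels]).
x_image_np = x_batch.reshape(-1, img_size, img_size, num_channels)

print(x_image_np.shape)  # (2, 28, 28, 1)
```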
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
y_true_cls = tf.argmax(y_true, axis=1)
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
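The same argmax reduction can be illustrated with NumPy on a few one-hot labels (made-up data, just to show how the class-number is extracted):

```python
import numpy as np

# One-hot encoded labels for three images of classes 3, 0 and 9.
y_true_batch = np.zeros((3, 10))
y_true_batch[0, 3] = 1.0
y_true_batch[1, 0] = 1.0
y_true_batch[2, 9] = 1.0

# Equivalent of the argmax in the graph, but executed immediately.
y_true_cls_np = np.argmax(y_true_batch, axis=1)

print(y_true_cls_np)  # [3 0 9]
```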
Neural Network This section implements the Convolutional Neural Network using Pretty Tensor, which is much simpler than a direct implementation in TensorFlow, see Tutorial #03. The basic idea is to wrap the input tensor x_image in a Pretty Tensor object which has helper-functions for adding new computational layers so as to create an entire neural network. Pretty Tensor takes care of the variable allocation, etc.
x_pretty = pt.wrap(x_image)
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code. Note that pt.defaults_scope(activation_fn=tf.nn.relu) makes activation_fn=tf.nn.relu an argument for each of the layers constructed inside the with-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The defaults_scope makes it easy to change arguments for all of the layers.
with pt.defaults_scope(activation_fn=tf.nn.relu):
    y_pred, loss = x_pretty.\
        conv2d(kernel=5, depth=16, name='layer_conv1').\
        max_pool(kernel=2, stride=2).\
        conv2d(kernel=5, depth=36, name='layer_conv2').\
        max_pool(kernel=2, stride=2).\
        flatten().\
        fully_connected(size=128, name='layer_fc1').\
        softmax_classifier(class_count=10, labels=y_true)
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
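As a sanity check on the layer chain above: assuming the 'SAME'-padded convolutions that Pretty Tensor uses by default, the 28x28 input is only shrunk by the two 2x2 max-pools (28 -> 14 -> 7), so the flatten() step feeds 7\*7\*36 values into the fully-connected layer. A quick sketch of that arithmetic:

```python
def after_conv_and_pool(size, pool_stride=2):
    # A 'SAME'-padded convolution with stride 1 keeps the spatial size;
    # the 2x2 max-pool with stride 2 then halves it.
    return size // pool_stride

size = 28
size = after_conv_and_pool(size)   # after layer_conv1 + max-pool -> 14
size = after_conv_and_pool(size)   # after layer_conv2 + max-pool -> 7

num_filters_conv2 = 36
flattened_features = size * size * num_filters_conv2

print(flattened_features)  # 1764
```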
Getting the Weights Further below, we want to plot the weights of the neural network. When the network is constructed using Pretty Tensor, all the variables of the layers are created indirectly by Pretty Tensor. We therefore have to retrieve the variables from TensorFlow. We used the names layer_conv1 and layer_conv2 for the two convolutional layers. These are also called variable scopes (not to be confused with defaults_scope as described above). Pretty Tensor automatically gives names to the variables it creates for each layer, so we can retrieve the weights for a layer using the layer's scope-name and the variable-name. The implementation is somewhat awkward because we have to use the TensorFlow function get_variable() which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
def get_weights_variable(layer_name):
    # Retrieve an existing variable named 'weights' in the scope
    # with the given layer_name.
    # This is awkward because the TensorFlow function was
    # really intended for another purpose.

    with tf.variable_scope(layer_name, reuse=True):
        variable = tf.get_variable('weights')

    return variable
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: contents = session.run(weights_conv1) as demonstrated further below.
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Optimization Method Pretty Tensor gave us the predicted class-label (y_pred) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images. It is unclear from the documentation for Pretty Tensor whether the loss-measure is cross-entropy or something else. But we now use the AdamOptimizer to minimize the loss. Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Performance Measures We need a few more performance measures to display the progress to the user. First we calculate the predicted class number from the output of the neural network y_pred, which is a vector with 10 elements. The class number is the index of the largest element.
y_pred_cls = tf.argmax(y_pred, axis=1)
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
correct_prediction = tf.equal(y_pred_cls, y_true_cls)

# The classification accuracy is the mean of the booleans above,
# cast to floats so False becomes 0 and True becomes 1.
# This tensor is used by the optimize() function further below.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
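In NumPy terms, the classification accuracy is just the mean of this boolean vector once it is cast to floats (made-up classes below, purely for illustration):

```python
import numpy as np

# Hypothetical predicted and true classes for four images.
y_pred_cls_np = np.array([3, 0, 9, 1])
y_true_cls_np = np.array([3, 0, 8, 1])

correct = (y_pred_cls_np == y_true_cls_np)

# Same reduction that the accuracy tensor performs in the graph.
acc = correct.astype(np.float32).mean()

print(acc)  # 0.75
```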
Saver In order to save the variables of the neural network, we now create a so-called Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done further below in the optimize()-function.
saver = tf.train.Saver()
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
The saved files are often called checkpoints because they may be written at regular intervals during optimization. This is the directory used for saving and retrieving the data.
save_dir = 'checkpoints/'
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Create the directory if it does not exist.
if not os.path.exists(save_dir):
    os.makedirs(save_dir)
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
This is the path for the checkpoint-file.
save_path = save_dir + 'best_validation'
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
TensorFlow Run Create TensorFlow session Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
session = tf.Session()
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Initialize variables The variables for weights and biases must be initialized before we start optimizing them. We make a simple wrapper-function for this, because we will call it again below.
def init_variables():
    session.run(tf.global_variables_initializer())
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Execute the function now to initialize the variables.
init_variables()
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Helper-function to perform optimization iterations There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer. If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
train_batch_size = 64
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
The classification accuracy for the validation-set will be calculated for every 100 iterations of the optimization function below. The optimization will be stopped if the validation accuracy has not been improved in 1000 iterations. We need a few variables to keep track of this.
# Best validation accuracy seen so far.
best_validation_accuracy = 0.0

# Iteration-number for last improvement to validation accuracy.
last_improvement = 0

# Stop optimization if no improvement found in this many iterations.
require_improvement = 1000
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
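The stopping rule above can be sketched in isolation; the validation accuracies here are made up, just to show when the loop gives up:

```python
best_validation_accuracy = 0.0
last_improvement = 0
require_improvement = 1000

# Hypothetical validation accuracies observed at these iterations.
history = {100: 0.60, 200: 0.72, 300: 0.71, 400: 0.73,
           500: 0.73, 1500: 0.73}

stopped_at = None
for iteration in sorted(history):
    acc_validation = history[iteration]
    if acc_validation > best_validation_accuracy:
        best_validation_accuracy = acc_validation
        last_improvement = iteration  # a real run would also save a checkpoint here
    if iteration - last_improvement > require_improvement:
        stopped_at = iteration
        break

print(stopped_at)  # 1500: no improvement since iteration 400
```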
Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations where the validation accuracy is also calculated and saved to a file if it is an improvement.
# Counter for total number of iterations performed so far.
total_iterations = 0

def optimize(num_iterations):
    # Ensure we update the global variables rather than local copies.
    global total_iterations
    global best_validation_accuracy
    global last_improvement

    # Start-time used for printing time-usage below.
    start_time = time.time()

    for i in range(num_iterations):

        # Increase the total number of iterations performed.
        # It is easier to update it in each iteration because
        # we need this number several times in the following.
        total_iterations += 1

        # Get a batch of training examples.
        # x_batch now holds a batch of images and
        # y_true_batch are the true labels for those images.
        x_batch, y_true_batch = data.train.next_batch(train_batch_size)

        # Put the batch into a dict with the proper names
        # for placeholder variables in the TensorFlow graph.
        feed_dict_train = {x: x_batch,
                           y_true: y_true_batch}

        # Run the optimizer using this batch of training data.
        # TensorFlow assigns the variables in feed_dict_train
        # to the placeholder variables and then runs the optimizer.
        session.run(optimizer, feed_dict=feed_dict_train)

        # Print status every 100 iterations and after last iteration.
        if (total_iterations % 100 == 0) or (i == (num_iterations - 1)):

            # Calculate the accuracy on the training-batch.
            acc_train = session.run(accuracy, feed_dict=feed_dict_train)

            # Calculate the accuracy on the validation-set.
            # The function returns 2 values but we only need the first.
            acc_validation, _ = validation_accuracy()

            # If validation accuracy is an improvement over best-known.
            if acc_validation > best_validation_accuracy:
                # Update the best-known validation accuracy.
                best_validation_accuracy = acc_validation

                # Set the iteration for the last improvement to current.
                last_improvement = total_iterations

                # Save all variables of the TensorFlow graph to file.
                saver.save(sess=session, save_path=save_path)

                # A string to be printed below, shows improvement found.
                improved_str = '*'
            else:
                # An empty string to be printed below.
                # Shows that no improvement was found.
                improved_str = ''

            # Status-message for printing.
            msg = "Iter: {0:>6}, Train-Batch Accuracy: {1:>6.1%}, Validation Acc: {2:>6.1%} {3}"

            # Print it.
            print(msg.format(i + 1, acc_train, acc_validation, improved_str))

        # If no improvement found in the required number of iterations.
        if total_iterations - last_improvement > require_improvement:
            print("No improvement found in a while, stopping optimization.")

            # Break out from the for-loop.
            break

    # Ending time.
    end_time = time.time()

    # Difference between start and end-times.
    time_dif = end_time - start_time

    # Print the time-usage.
    print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Helper-function to plot example errors Function for plotting examples of images from the test-set that have been mis-classified.
def plot_example_errors(cls_pred, correct):
    # This function is called from print_test_accuracy() below.

    # cls_pred is an array of the predicted class-number for
    # all images in the test-set.

    # correct is a boolean array whether the predicted class
    # is equal to the true class for each image in the test-set.

    # Negate the boolean array.
    incorrect = (correct == False)

    # Get the images from the test-set that have been
    # incorrectly classified.
    images = data.test.images[incorrect]

    # Get the predicted classes for those images.
    cls_pred = cls_pred[incorrect]

    # Get the true classes for those images.
    cls_true = data.test.cls[incorrect]

    # Plot the first 9 images.
    plot_images(images=images[0:9],
                cls_true=cls_true[0:9],
                cls_pred=cls_pred[0:9])
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Helper-function to plot confusion matrix
def plot_confusion_matrix(cls_pred):
    # This is called from print_test_accuracy() below.

    # cls_pred is an array of the predicted class-number for
    # all images in the test-set.

    # Get the true classifications for the test-set.
    cls_true = data.test.cls

    # Get the confusion matrix using sklearn.
    cm = confusion_matrix(y_true=cls_true,
                          y_pred=cls_pred)

    # Print the confusion matrix as text.
    print(cm)

    # Plot the confusion matrix as an image.
    plt.matshow(cm)

    # Make various adjustments to the plot.
    plt.colorbar()
    tick_marks = np.arange(num_classes)
    plt.xticks(tick_marks, range(num_classes))
    plt.yticks(tick_marks, range(num_classes))
    plt.xlabel('Predicted')
    plt.ylabel('True')

    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Helper-functions for calculating classifications This function calculates the predicted classes of images and also returns a boolean array whether the classification of each image is correct. The calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256

def predict_cls(images, labels, cls_true):
    # Number of images.
    num_images = len(images)

    # Allocate an array for the predicted classes which
    # will be calculated in batches and filled into this array.
    cls_pred = np.zeros(shape=num_images, dtype=int)

    # Now calculate the predicted classes for the batches.
    # We will just iterate through all the batches.
    # There might be a more clever and Pythonic way of doing this.

    # The starting index for the next batch is denoted i.
    i = 0

    while i < num_images:
        # The ending index for the next batch is denoted j.
        j = min(i + batch_size, num_images)

        # Create a feed-dict with the images and labels
        # between index i and j.
        feed_dict = {x: images[i:j, :],
                     y_true: labels[i:j, :]}

        # Calculate the predicted class using TensorFlow.
        cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)

        # Set the start-index for the next batch to the
        # end-index of the current batch.
        i = j

    # Create a boolean array whether each image is correctly classified.
    correct = (cls_true == cls_pred)

    return correct, cls_pred
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
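The batching loop in predict_cls() walks through the data in [i, j) slices; the index arithmetic can be checked on its own (600 is an arbitrary example size):

```python
batch_size = 256
num_images = 600   # hypothetical data-set size

batches = []
i = 0
while i < num_images:
    # The last batch is truncated by min(), just as in predict_cls().
    j = min(i + batch_size, num_images)
    batches.append((i, j))
    i = j

print(batches)  # [(0, 256), (256, 512), (512, 600)]
```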
Calculate the predicted class for the test-set.
def predict_cls_test():
    return predict_cls(images=data.test.images,
                       labels=data.test.labels,
                       cls_true=data.test.cls)
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Calculate the predicted class for the validation-set.
def predict_cls_validation():
    return predict_cls(images=data.validation.images,
                       labels=data.validation.labels,
                       cls_true=data.validation.cls)
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Helper-functions for the classification accuracy This function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. cls_accuracy([True, True, False, False, False]) = 2/5 = 0.4
def cls_accuracy(correct):
    # Calculate the number of correctly classified images.
    # When summing a boolean array, False means 0 and True means 1.
    correct_sum = correct.sum()

    # Classification accuracy is the number of correctly classified
    # images divided by the total number of images in the test-set.
    acc = float(correct_sum) / len(correct)

    return acc, correct_sum
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
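The worked example from the text can be checked directly; summing a list of booleans counts the True entries:

```python
correct = [True, True, False, False, False]

correct_sum = sum(correct)            # True counts as 1, False as 0
acc = correct_sum / len(correct)

print(acc, correct_sum)  # 0.4 2
```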
Calculate the classification accuracy on the validation-set.
def validation_accuracy():
    # Get the array of booleans whether the classifications are correct
    # for the validation-set.
    # The function returns two values but we only need the first.
    correct, _ = predict_cls_validation()

    # Calculate the classification accuracy and return it.
    return cls_accuracy(correct)
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Helper-function for showing the performance Function for printing the classification accuracy on the test-set. It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.
def print_test_accuracy(show_example_errors=False,
                        show_confusion_matrix=False):

    # For all the images in the test-set,
    # calculate the predicted classes and whether they are correct.
    correct, cls_pred = predict_cls_test()

    # Classification accuracy and the number of correct classifications.
    acc, num_correct = cls_accuracy(correct)

    # Number of images being classified.
    num_images = len(correct)

    # Print the accuracy.
    msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
    print(msg.format(acc, num_correct, num_images))

    # Plot some examples of mis-classifications, if desired.
    if show_example_errors:
        print("Example errors:")
        plot_example_errors(cls_pred=cls_pred, correct=correct)

    # Plot the confusion matrix, if desired.
    if show_confusion_matrix:
        print("Confusion Matrix:")
        plot_confusion_matrix(cls_pred=cls_pred)
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Helper-function for plotting convolutional weights
def plot_conv_weights(weights, input_channel=0):
    # Assume weights are TensorFlow ops for 4-dim variables
    # e.g. weights_conv1 or weights_conv2.

    # Retrieve the values of the weight-variables from TensorFlow.
    # A feed-dict is not necessary because nothing is calculated.
    w = session.run(weights)

    # Print mean and standard deviation.
    print("Mean: {0:.5f}, Stdev: {1:.5f}".format(w.mean(), w.std()))

    # Get the lowest and highest values for the weights.
    # This is used to correct the colour intensity across
    # the images so they can be compared with each other.
    w_min = np.min(w)
    w_max = np.max(w)

    # Number of filters used in the conv. layer.
    num_filters = w.shape[3]

    # Number of grids to plot.
    # Rounded-up, square-root of the number of filters.
    num_grids = math.ceil(math.sqrt(num_filters))

    # Create figure with a grid of sub-plots.
    fig, axes = plt.subplots(num_grids, num_grids)

    # Plot all the filter-weights.
    for i, ax in enumerate(axes.flat):
        # Only plot the valid filter-weights.
        if i < num_filters:
            # Get the weights for the i'th filter of the input channel.
            # The format of this 4-dim tensor is determined by the
            # TensorFlow API. See Tutorial #02 for more details.
            img = w[:, :, input_channel, i]

            # Plot image.
            ax.imshow(img, vmin=w_min, vmax=w_max,
                      interpolation='nearest', cmap='seismic')

        # Remove ticks from the plot.
        ax.set_xticks([])
        ax.set_yticks([])

    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Performance before any optimization The accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.
print_test_accuracy()
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
The convolutional weights are random, but it can be difficult to see any difference from the optimized weights that are shown below. The mean and standard deviation are shown so we can see whether there is a difference.
plot_conv_weights(weights=weights_conv1)
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Perform 10,000 optimization iterations We now perform 10,000 optimization iterations and abort the optimization if no improvement is found on the validation-set in 1000 iterations. An asterisk * is shown if the classification accuracy on the validation-set is an improvement.
optimize(num_iterations=10000)

print_test_accuracy(show_example_errors=True,
                    show_confusion_matrix=True)
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
The convolutional weights have now been optimized. Compare these to the random weights shown above. They appear to be almost identical. In fact, I first thought there was a bug in the program because the weights look identical before and after optimization. But try and save the images and compare them side-by-side (you can just right-click the image to save it). You will notice very small differences before and after optimization. The mean and standard deviation have also changed slightly, so the optimized weights must be different.
plot_conv_weights(weights=weights_conv1)
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Initialize Variables Again Re-initialize all the variables of the neural network with random values.
init_variables()
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
This means the neural network classifies the images completely randomly again, so the classification accuracy is very poor because it is like random guesses.
print_test_accuracy()
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
The convolutional weights should now be different from the weights shown above.
plot_conv_weights(weights=weights_conv1)
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Restore Best Variables Re-load all the variables that were saved to file during optimization.
saver.restore(sess=session, save_path=save_path)
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
The classification accuracy is high again when using the variables that were previously saved. Note that the classification accuracy may be slightly higher or lower than that reported above, because the variables in the file were chosen to maximize the classification accuracy on the validation-set, but the optimization actually continued for another 1000 iterations after saving those variables, so we are reporting the results for two slightly different sets of variables. Sometimes this leads to slightly better or worse performance on the test-set.
print_test_accuracy(show_example_errors=True, show_confusion_matrix=True)
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
The convolutional weights should be nearly identical to those shown above, although not completely identical because the weights shown above had 1000 optimization iterations more.
plot_conv_weights(weights=weights_conv1)
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Close TensorFlow Session We are now done using TensorFlow, so we close the session to release its resources.
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
thomasantony/CarND-Projects
mit
Generating some discontinuity examples

Let's start by degrading some audio files with discontinuities. Discontinuities are generally caused by hardware issues in the process of recording or copying. Let's simulate this by removing a random number of samples from the input audio file.
# A unit-test version of this degradation (from Essentia's test suite),
# kept here for reference:
def testRegression(self, frameSize=512, hopSize=256):
    fs = 44100

    audio = MonoLoader(filename=join(testdata.audio_dir,
                                     'recorded/cat_purrrr.wav'),
                       sampleRate=fs)()

    originalLen = len(audio)
    startJump = originalLen // 4
    groundTruth = [startJump / float(fs)]

    # make sure that the artificial jump produces a prominent discontinuity
    if audio[startJump] > 0:
        end = next(idx for idx, i in enumerate(audio[startJump:]) if i < -.3)
    else:
        end = next(idx for idx, i in enumerate(audio[startJump:]) if i > .3)

    endJump = startJump + end
    audio = esarr(np.hstack([audio[:startJump], audio[endJump:]]))

    frameList = []
    discontinuityDetector = self.InitDiscontinuityDetector(
        frameSize=frameSize, hopSize=hopSize,
        detectionThreshold=10)

    for idx, frame in enumerate(FrameGenerator(
            audio, frameSize=frameSize,
            hopSize=hopSize, startFromZero=True)):
        locs, _ = discontinuityDetector(frame)
        if not len(locs) == 0:
            for loc in locs:
                frameList.append((idx * hopSize + loc) / float(fs))

    self.assertAlmostEqualVector(frameList, groundTruth, 1e-7)


# The same degradation applied to the tutorial's audio file:
fs = 44100.

audio_dir = '../../audio/'

audio = es.MonoLoader(filename='{}/{}'.format(audio_dir,
                                              'recorded/vignesh.wav'),
                      sampleRate=fs)()

originalLen = len(audio)
# Integer division so the jump positions can be used as indices.
startJumps = np.array([originalLen // 4, originalLen // 2])
groundTruth = startJumps / float(fs)

for startJump in startJumps:
    # make sure that the artificial jump produces a prominent discontinuity
    if audio[startJump] > 0:
        end = next(idx for idx, i in enumerate(audio[startJump:]) if i < -.3)
    else:
        end = next(idx for idx, i in enumerate(audio[startJump:]) if i > .3)

    endJump = startJump + end
    audio = esarr(np.hstack([audio[:startJump], audio[endJump:]]))

for point in groundTruth:
    l1 = plt.axvline(point, color='g', alpha=.5)

times = np.linspace(0, len(audio) / fs, len(audio))
plt.plot(times, audio)
plt.title('Signal with artificial clicks of different amplitudes')
l1.set_label('Click locations')
plt.legend()
src/examples/tutorial/example_discontinuitydetector.ipynb
carthach/essentia
agpl-3.0
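The degradation idea can also be reproduced on a purely synthetic signal without Essentia; the tone frequency and number of removed samples below are made up, just to show that splicing out samples leaves an abrupt sample-to-sample jump:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs                   # one second of samples
audio = np.sin(2 * np.pi * 440 * t)      # synthetic 440 Hz tone

# Remove 137 samples starting at 1/4 of the signal, in the same
# spirit as the degradation above; the splice leaves a discontinuity.
start = len(audio) // 4
degraded = np.hstack([audio[:start], audio[start + 137:]])

step_before = abs(audio[start] - audio[start - 1])        # smooth waveform
jump_after = abs(degraded[start] - degraded[start - 1])   # abrupt jump

print(step_before, jump_after)
```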
Let's listen to the clip to get an idea of how audible the discontinuities are.
Audio(audio, rate=fs)
src/examples/tutorial/example_discontinuitydetector.ipynb
carthach/essentia
agpl-3.0
The algorithm

This algorithm outputs the start and end timestamps of the clicks. The following plots show how the algorithm performs on the previous examples.
# compute() wraps the DiscontinuityDetector; it is defined earlier
# in the full tutorial and is not shown in this excerpt.
locs, amps = compute(audio)

fig, ax = plt.subplots(len(groundTruth))
plt.subplots_adjust(hspace=.4)
for idx, point in enumerate(groundTruth):
    l1 = ax[idx].axvline(locs[idx], color='r', alpha=.5)
    l2 = ax[idx].axvline(point, color='g', alpha=.5)
    ax[idx].plot(times, audio)
    ax[idx].set_xlim([point - .001, point + .001])
    ax[idx].set_title('Click located at {:.2f}s'.format(point))

fig.legend((l1, l2), ('Detected discontinuity', 'Ground truth'),
           'upper right')
src/examples/tutorial/example_discontinuitydetector.ipynb
carthach/essentia
agpl-3.0
Data Preparation and Model Selection Now we are ready to test the XGB approach, along the way confusion matrix and f1_score are imported as metric for classification, as well as GridSearchCV, which is an excellent tool for parameter optimization.
import xgboost as xgb
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score
from classification_utilities import display_cm, display_adj_cm
from sklearn.model_selection import GridSearchCV

X_train = training_data.drop(['Facies', 'Well Name', 'Formation', 'Depth'], axis=1)
Y_train = training_data['Facies'] - 1

dtrain = xgb.DMatrix(X_train, Y_train)
HouMath/Face_classification_HouMath_XGB_01.ipynb
esa-as/2016-ml-contest
apache-2.0
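Subtracting 1 from the facies column converts the labels 1-9 into the zero-based classes 0-8 that XGBoost expects; a tiny check with made-up labels:

```python
# Facies labels as they appear in the data (1..9).
facies = [1, 3, 9, 2]

# Zero-based classes, as passed to xgb.DMatrix above.
classes = [f - 1 for f in facies]

print(classes)  # [0, 2, 8, 1]
```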
The accuracy and accuracy_adjacent functions are defined in the following to quantify the prediction correctness.
def accuracy(conf):
    total_correct = 0.
    nb_classes = conf.shape[0]
    for i in np.arange(0, nb_classes):
        total_correct += conf[i][i]
    acc = total_correct / sum(sum(conf))
    return acc

adjacent_facies = np.array([[1], [0, 2], [1], [4], [3, 5],
                            [4, 6, 7], [5, 7], [5, 6, 8], [6, 7]])

def accuracy_adjacent(conf, adjacent_facies):
    nb_classes = conf.shape[0]
    total_correct = 0.
    for i in np.arange(0, nb_classes):
        total_correct += conf[i][i]
        for j in adjacent_facies[i]:
            total_correct += conf[i][j]
    return total_correct / sum(sum(conf))
HouMath/Face_classification_HouMath_XGB_01.ipynb
esa-as/2016-ml-contest
apache-2.0
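On a toy 3-class confusion matrix (made up here, with class 1 adjacent to 0 and 2) the two measures behave as expected - adjacent accuracy also credits predictions that land in a neighbouring facies:

```python
import numpy as np

# Hypothetical confusion matrix for 3 facies (rows: true, cols: predicted).
conf = np.array([[5, 2, 0],
                 [1, 6, 1],
                 [0, 2, 7]])
adjacent = [[1], [0, 2], [1]]   # made-up adjacency for the toy example

total = conf.sum()
diag_correct = np.trace(conf)
acc = diag_correct / total               # plain accuracy

adj_correct = diag_correct
for i in range(conf.shape[0]):
    for j in adjacent[i]:
        adj_correct += conf[i][j]        # also credit adjacent facies
acc_adjacent = adj_correct / total

print(acc, acc_adjacent)  # 0.75 1.0
```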