With while loops we need to make sure that something actually changes from iteration to iteration so that the loop actually terminates. In this case, we use the shorthand i -= 1 (short for i = i - 1) so that the value of i gets smaller with each iteration. Eventually i will be reduced to 0, rendering the condition False and exiting the loop. A for loop iterates a set number of times, determined when you state the entry into the loop. In this case we are iterating over the list returned from range(). The for loop selects a value from the list, in order, and temporarily assigns each value to i so that operations can be performed with it. (A minimal while-loop sketch follows the for example below.)
for i in range(5):
    print 'I am looping! I have looped {0} times!'.format(i + 1)
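For completeness, here is a minimal while-loop version of the countdown described above (a sketch of my own, not from the original notebook, matching the notebook's Python 2 print style):

```python
i = 5
while i > 0:
    print 'i is {0}'.format(i)
    i -= 1  # without this line the condition would never become False
```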
Note that in this for loop we use the in keyword. Use of the in keyword is not limited to checking for membership as in the if-statement example. You can iterate over any collection with a for loop by using the in keyword. In this next example, we will iterate over a set because we want to check for containment and add to a new set.
my_list = {'cats', 'dogs', 'lizards', 'cows', 'bats', 'sponges', 'humans'} # A set of animals (despite the variable name)
mammal_list = {'cats', 'dogs', 'cows', 'bats', 'humans'} # A set of mammals
my_new_list = set()
for animal in my_list:
    if animal in mammal_list:
        # This adds any animal that is in both my_list and mammal_list to my_new_list
        my_new_list.add(animal)
print my_new_list
There are two statements that are very helpful in dealing with both for and while loops. These are break and continue. If break is encountered at any point while a loop is executing, the loop will immediately end.
i = 10
while True:
    if i == 14:
        break
    i += 1  # This is shorthand for i = i + 1. It increments i with each iteration.
print i

for i in range(5):
    if i == 2:
        break
    print i
The continue statement will tell the loop to immediately end this iteration and continue onto the next iteration of the loop.
i = 0
while i < 5:
    i += 1
    if i == 3:
        continue
    print i
This loop skips printing the number $3$ because of the continue statement that executes when we enter the if-statement. The code never sees the command to print the number $3$ because it has already moved to the next iteration. The break and continue statements are further tools to help you control the flow of your loops and, as a result, your code. The variable that we use to iterate over a loop will retain its value when the loop exits. Similarly, any variables defined within the context of the loop will continue to exist outside of it.
for i in range(5):
    loop_string = 'I transcend the loop!'
    print 'I am eternal! I am {0} and I exist everywhere!'.format(i)

print 'I persist! My value is {0}'.format(i)
print loop_string
We can also iterate over a dictionary!
my_dict = {'firstname' : 'Inigo', 'lastname' : 'Montoya', 'nemesis' : 'Rugen'}
for key in my_dict:
    print key
If we just iterate over a dictionary without doing anything else, we will only get the keys. We can either use the keys to get the values, like so:
for key in my_dict:
    print my_dict[key]
Or we can use the iteritems() function to get both key and value at the same time.
for key, value in my_dict.iteritems():
    print key, ':', value
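A side note, since this matters if you run the notebook on a newer interpreter: dict.iteritems() exists only in Python 2; it was removed in Python 3, where items() plays the same role. A minimal Python 3 equivalent:

```python
for key, value in my_dict.items():
    print(key, ':', value)
```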
The iteritems() function creates a tuple of each key-value pair, and the for loop unpacks that tuple into key, value on each separate execution of the loop!
Functions
A function is a reusable block of code that you can call repeatedly to make calculations, output data, or really do anything that you want. This is one of the key aspects of using a programming language. To add to the built-in functions in Python, you can define your own!
def hello_world():
    """ Prints Hello, world! """
    print 'Hello, world!'

hello_world()

for i in range(5):
    hello_world()
Functions are defined with def, a function name, a list of parameters, and a colon. Everything indented below the colon will be included in the definition of the function. We can have our functions do anything that you can do with a normal block of code. For example, our hello_world() function prints a string every time it is called. If we want to keep a value that a function calculates, we can define the function so that it will return the value we want. This is a very important feature of functions, as any variable defined purely within a function will not exist outside of it.
def see_the_scope():
    in_function_string = "I'm stuck in here!"

see_the_scope()
print in_function_string  # raises a NameError: the variable only exists inside the function
The scope of a variable is the part of a block of code where that variable is tied to a particular value. Functions in Python have an enclosed scope, making it so that variables defined within them can only be accessed directly within them. If we pass those values to a return statement, we can get them out of the function. This makes the function call return values that you can store in variables with a greater scope. In this case specifically, including a return statement allows us to keep the string value that we define in the function.
def free_the_scope():
    in_function_string = "Anything you can do I can do better!"
    return in_function_string

my_string = free_the_scope()
print my_string
Just as we can get values out of a function, we can also put values into a function. We do this by defining our function with parameters.
def multiply_by_five(x):
    """ Multiplies an input number by 5 """
    return x * 5

n = 4
print n
print multiply_by_five(n)
In this example we only had one parameter for our function, x. We can easily add more parameters, separating everything with a comma.
def calculate_area(length, width):
    """ Calculates the area of a rectangle """
    return length * width

l = 5
w = 10
print 'Area: ', calculate_area(l, w)
print 'Length: ', l
print 'Width: ', w

def calculate_volume(length, width, depth):
    """ Calculates the volume of a rectangular prism """
    return length * width * depth
If we want to, we can define a function so that it takes an arbitrary number of parameters. We tell Python that we want this by using an asterisk (*).
def sum_values(*args):
    sum_val = 0
    for i in args:
        sum_val += i
    return sum_val

print sum_values(1, 2, 3)
print sum_values(10, 20, 30, 40, 50)
print sum_values(4, 2, 5, 1, 10, 249, 25, 24, 13, 6, 4)
The time to use *args as a parameter for your function is when you do not know how many values may be passed to it, as in the case of our sum function. The asterisk in this case is the syntax that tells Python that you are going to pass an arbitrary number of parameters into your function. These parameters are stored in the form of a tuple.
def test_args(*args):
    print type(args)

test_args(1, 2, 3, 4, 5, 6)
We can put as many elements into the args tuple as we want when we call the function. However, because args is a tuple, we cannot modify it after it has been created. The name args is purely a convention; you could just as easily name your parameter *vars or *things. You can treat the args tuple like you would any other tuple, easily accessing its values and iterating over it, as in the above sum_values(*args) function. Our functions can return any data type. This makes it easy for us to create functions that check for conditions that we might want to monitor. Here we define a function that returns a boolean value. We can easily use this in conjunction with if-statements and other situations that require a boolean.
def has_a_vowel(word):
    """
    Checks to see whether a word contains a vowel.
    If it doesn't contain a conventional vowel, it will check for the presence
    of 'y' or 'w'. Does not check to see whether those are in the word in a
    vowel context.
    """
    vowel_list = ['a', 'e', 'i', 'o', 'u']
    for vowel in vowel_list:
        if vowel in word:
            # If there is a vowel in the word, the function returns,
            # preventing anything after this loop from running
            return True
    return False

my_word = 'catnapping'
if has_a_vowel(my_word):
    print 'How surprising, an English word contains a vowel.'
else:
    print 'This is actually surprising.'

def point_maker(x, y):
    """ Groups x and y values into a point, technically a tuple """
    return x, y
This above function returns an ordered pair of the input parameters, stored as a tuple.
a = point_maker(0, 10)
b = point_maker(5, 3)

def calculate_slope(point_a, point_b):
    """ Calculates the linear slope between two points """
    # float() avoids Python 2 integer division, which would floor the result
    return float(point_b[1] - point_a[1]) / (point_b[0] - point_a[0])

print "The slope between a and b is {0}".format(calculate_slope(a, b))
And that one calculates the slope between two points!
print "The slope-intercept form of the line between a and b, using point a, is: y - {0} = {2}(x - {1})".format(a[1], a[0], calculate_slope(a, b))
Data types
There are five basic numerical types: booleans (bool), integers (int), unsigned integers (uint), floating point (float), and complex. Those with numbers in their name indicate the bit size of the type (i.e. how many bits are needed to represent a single value in memory).
import numpy as np

print("np.float32(1.0) :", np.float32(1.0))
print("np.arange(3, dtype=np.uint8) :", np.arange(3, dtype=np.uint8))

z = np.array([1, 2, 3], dtype='f')
print(z)

z = np.arange(3, dtype=np.uint8)
print(z)
print(z.astype(float))
print(z.dtype)
Array creation
# extrinsic
x = np.array([2, 3, 1, 0])
print(x)
print()
x = np.array([[1., 2.], [0., 0.], [1., 3.]])
print(x)

# intrinsic
b = np.arange(1, 9, 2)
print(b)
c = np.linspace(0, 1, 6)
print(c)
print(np.arange(35).reshape(5, 7))
x = np.random.rand(35).reshape(5, 7)
print(x)

%pylab inline
import matplotlib.pyplot as plt
image = np.random.rand(30, 30)
plt.imshow(image, cmap=plt.cm.hot)
plt.colorbar()
Indexing, slicing and selection
a = np.arange(10)
print(a)
print(a[0], a[2], a[-1], a[-3])
print(a[2:5], a[2:], a[:-2], a[::2], a[2::2])

a = np.diag(np.arange(3))
a[2, 1] = 10  # third row, second column
print(a)
print(a[1, 1])
#print(a[1])
#print(a[:,1], a[1,:])
#print(a[1:,1:])

# array indexes
x = np.arange(10, 1, -1)
print(x)
print()
print(x[np.array([3, 3, -3, 8])])
print()
print(x[np.array([[1, 1], [2, 3]])])

# 10 random numbers 0 - 20
a = np.random.randint(0, 20, 10)
print(a)
print(a % 3 == 0)
print(a[a % 3 == 0])
a[a % 3 == 0] = -1
print(a)
Task:
- What does the following code do?
# How does it work?
# Print the primes!
def get_primes():
    primes = np.ones((100,), dtype=bool)
    primes[:2] = 0
    N_max = int(np.sqrt(len(primes)))
    for j in range(2, N_max):
        primes[2*j::j] = 0
    return primes

print(get_primes())
Broadcasting, assignment, structured arrays
a = np.arange(10)
b = a
print(np.may_share_memory(a, b))

a = np.arange(10)
c = a.copy()  # force a copy
print(np.may_share_memory(a, c))

# Array operations
a = np.array([1, 2, 3, 4])
print("a: ", a)
print("a + 1, 2**a: ", a + 1, 2**a)
print("2**(a + 1) - a: ", 2**(a + 1) - a)

a = np.array([1, 2, 3, 4])
b = np.ones(4) + 1
print("a: ", a)
print("b: ", b)
print("a - b, a * b: ", a - b, a * b)

c = np.ones((3, 3))
print(c)
print(2*c + 1)

# matrix multiplication
a = np.ones((3, 2)) + 1
b = np.ones((2, 3)) + 1
c = a.dot(b)
print(a, b, c, sep="\n\n")

a = np.arange(5)
print(np.sin(a))
print(np.log(a))
print(np.exp(a))

# shape manipulation
x = np.array([1, 2, 3])
y = x[:, np.newaxis]
z = x[np.newaxis, :]
print(x, y, z, sep='\n\n')

# flatten
a = np.array([[1, 2, 3], [4, 5, 6]])
print(a)
print(a.ravel())

# sorting matrices
a = np.array([[4, 3, 5], [1, 2, 1]])
b = np.sort(a, axis=1)  # sorting per row
print(a)
print(b)

# sorting arguments
a = np.array([4, 3, 1, 2])
j = np.argsort(a)
print(j, a[j])
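The heading above mentions broadcasting, which the cell demonstrates only through scalar operations. A minimal sketch of shape-based broadcasting (my own example, not from the original notebook):

```python
import numpy as np

row = np.arange(3)                # shape (3,)
col = np.arange(3).reshape(3, 1)  # shape (3, 1)
print(row + col)                  # shapes broadcast to (3, 3)
```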
Reductions
# unidimensional
x = np.array([1, 2, 3, 4])
print(np.sum(x), x.sum(), x.sum(axis=0))

# multidimensional
x = np.array([[1, 1], [2, 2]])
print(x)
print(x.sum(axis=0))  # sum over rows (first dimension): column totals
print(x.sum(axis=1))  # sum over columns (second dimension): row totals

x = np.array([1, 3, 2])
print(x)
print(x.min(), x.max(), x.argmin(), x.argmax())
print(np.all([True, True, False]), np.any([True, True, False]))

x = np.array([1, 2, 3, 1])
y = np.array([[1, 2, 3], [5, 6, 1]])
print(x.mean(), x.std(), np.median(x), np.median(y, axis=-1))

a = np.zeros((100, 100))
print(np.any(a != 0))

a = np.array([1, 2, 3, 2])
b = np.array([2, 2, 3, 2])
c = np.array([6, 4, 4, 5])
print(((a <= b) & (b <= c)).all())
Tricksy task:
- Replace all values greater than 25 with 9 and all values smaller than 10 with 29. (One possible solution follows the code below.)
a = np.random.randint(0, 50, 10)
print(a)
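One possible solution, as a sketch. The "tricksy" part is that both masks must be computed before either assignment; otherwise the values just set to 9 would satisfy the < 10 condition and be overwritten with 29:

```python
big = a > 25     # evaluate both masks first
small = a < 10
a[big] = 9
a[small] = 29
print(a)
```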
Sympy
Symbolic math is sometimes important, especially if we are weak at calculus or if we need to perform automated calculus on long formulas. We will briefly go through a few test cases to get a feel for it. Symbolic math is most developed in Mathematica and in Sage, an open-source equivalent.
import sympy
print sympy.sqrt(8)

import math
print math.sqrt(8)

from sympy import symbols
x, y, z, t = symbols('x y z t')
expr = x + 2*y
print expr
print x * expr

from sympy import expand, factor, simplify
expanded_expr = expand(x*expr)
print expanded_expr
print factor(expanded_expr)
exp = expanded_expr.subs(x, z**t)
print exp
print simplify(exp)
In the scipy.optimize section we needed the Hessian matrix of a function f. Here is how you can obtain it in sympy:
import sympy

x, y = sympy.symbols('x y')
f = .5*(1 - x)**2 + (y - x**2)**2
h = sympy.hessian(f, [x, y])
print(h)

from IPython.display import Latex
Latex(sympy.latex(h))

from IPython.display import HTML
HTML('<iframe src=http://en.wikipedia.org/wiki/Hessian_matrix width=700 height=350></iframe>')
Due to the structure of the report, this number cannot be used directly as a reference: a large occurrence count may simply reflect short integrations observed many times, and it may apply to only one object in one band (like the first project here). The year of the Cycle is probably more important, because it determines the number of antennas.
# first 15
for i in sorted_from_large[0:15]:
    print(i)
Sorted based on year
sorted_project_year = sorted(list_of_project, key=lambda data: data[0])
sorted_from_new = list(reversed(sorted_project_year))

# first 15
for i in sorted_from_new[0:15]:
    print(i)
Data Format
The data is stored in a vector format, although the original data was a 2-dimensional matrix with values representing how much pigment was at a certain location. Let's explore this:
type(mnist)
type(mnist.train.images)
#mnist.train.images[0]
mnist.train.images[2].shape

sample = mnist.train.images[2].reshape(28, 28)

import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(sample)
Parameters
We'll need to define three parameters. It is really (really) hard to know what good parameter values are on a data set you have no experience with; however, since MNIST is pretty famous, we have some reasonable values for our data below. The parameters here are:
Learning Rate - how large a step the optimizer takes when adjusting weights to minimize the cost function.
Training Epochs - how many training cycles to go through.
Batch Size - size of the 'batches' of training data.
# Parameters
learning_rate = 0.001
training_epochs = 150
batch_size = 100
Network Parameters
Here we have parameters which will directly define our neural network; these would be adjusted depending on what your data looks like and what kind of net you want to build. Basically just some numbers we will eventually use to define some variables later on in our model:
# Network Parameters
n_hidden_1 = 256  # 1st layer number of features
n_hidden_2 = 256  # 2nd layer number of features
n_input = 784     # MNIST data input (img shape: 28*28)
n_classes = 10    # MNIST total classes (0-9 digits)
n_samples = mnist.train.num_examples
TensorFlow Graph Input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
MultiLayer Model
It is time to create our model. Let's review what we want to create here. First we receive the input data array and send it to the first hidden layer. The data then has a weight attached to it between layers (remember this is initially a random value) and is sent to a node to undergo an activation function (along with a bias, as mentioned in the lecture). Then it continues on to the next hidden layer, and so on until the final output layer. In our case, we will just use two hidden layers; the more you use, the longer the model will take to run (but it has more of an opportunity to be more accurate on the training data). Once the transformed "data" has reached the output layer, we need to evaluate it. Here we will use a loss function (also called a cost function) to evaluate how far off we are from the desired result: in this case, how many of the classes we got correct. Then we will apply an optimization function to minimize the cost (lower the error). This is done by adjusting weight values accordingly across the network. In our example, we will use the Adam Optimizer, which, relative to other mathematical concepts, is an extremely recent development. We can adjust how quickly to apply this optimization by changing our earlier learning rate parameter. The lower the rate, the higher the possibility for accurate training results, but that comes at the cost of having to wait (physical time wise) for the results. Of course, after a certain point there is no benefit to lowering the learning rate further. Now we will create our model. We'll start with 2 hidden layers, which use the [ReLU](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) activation function, a very simple rectifier that returns x if x is positive and zero otherwise. For our final output layer we will use a linear activation with matrix multiplication:
def multilayer_perceptron(x, weights, biases):
    '''
    x      : placeholder for data input
    weights: dictionary of weights
    biases : dictionary of biases
    '''
    # First hidden layer with ReLU activation
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)

    # Second hidden layer with ReLU activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)

    # Last output layer with linear activation
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer
Weights and Bias
In order for our TensorFlow model to work we need to create two dictionaries containing our weight and bias objects for the model. We can use the tf.Variable object type. This is different from a constant because TensorFlow's Graph Object becomes aware of the states of all the variables. A Variable is a modifiable tensor that lives in TensorFlow's graph of interacting operations. It can be used and even modified by the computation. We will generally have the model parameters be Variables. From the documentation string: A variable maintains state in the graph across calls to `run()`. You add a variable to the graph by constructing an instance of the class `Variable`. The `Variable()` constructor requires an initial value for the variable, which can be a `Tensor` of any type and shape. The initial value defines the type and shape of the variable. After construction, the type and shape of the variable are fixed. The value can be changed using one of the assign methods. We'll use tf's built-in random_normal method to create the random values for our weights and biases (you could also just pass ones as the initial biases).
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}

biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

# Construct model
pred = multilayer_perceptron(x, weights, biases)
Cost and Optimization Functions
We'll use Tensorflow's built-in functions for this part (check out the documentation for a lot more options and discussion on this):
# Define loss and optimizer
# Note: the logits must be the network output (pred), not the input placeholder x
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
Initialization of Variables
Now initialize all those tf.Variable objects we created earlier. This will be the first thing we run when training our model:
# Initializing the variables
init = tf.initialize_all_variables()
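Note that tf.initialize_all_variables() was deprecated in TensorFlow 0.12; on later 1.x versions the equivalent call is:

```python
init = tf.global_variables_initializer()
```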
Training the Model
next_batch()
Before we get started, I want to cover one more convenience function in our mnist data object called next_batch. This returns a tuple in the form (X, y), with X an array of the data and y an array indicating the class in the form of a one-hot binary array. For example:
Xsamp, ysamp = mnist.train.next_batch(1)
plt.imshow(Xsamp.reshape(28, 28))
# Remember indexing starts at zero!
print(ysamp)
Running the Session
Now it is time to run our session! Pay attention to how we have two loops: the outer loop runs the epochs, and the inner loop runs the batches for each epoch of training. Let's break down each step!
# Launch the session
sess = tf.InteractiveSession()

# Initialize all the variables
sess.run(init)

# Training Epochs
# Essentially the max amount of loops possible before we stop
# May stop earlier if a cost/loss limit was set
for epoch in range(training_epochs):

    # Start with cost = 0.0
    avg_cost = 0.0

    # Convert total number of batches to integer
    total_batch = int(n_samples / batch_size)

    # Loop over all batches
    for i in range(total_batch):

        # Grab the next batch of training data and labels
        batch_x, batch_y = mnist.train.next_batch(batch_size)

        # Feed dictionary for optimization and loss value
        # Returns a tuple, but we only need 'c' the cost
        # So we set an underscore as a "throwaway"
        _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})

        # Compute average loss
        avg_cost += c / total_batch

    print("Epoch: {} cost={:.4f}".format(epoch + 1, avg_cost))

print("Model has completed {} Epochs of Training".format(training_epochs))
Model Evaluations
Tensorflow comes with some built-in functions to help evaluate our model, including tf.equal and tf.cast with tf.reduce_mean.
tf.equal()
This is essentially just a check of predictions == y_test. In our case, since we know the format of the labels is a 1 in an array of zeroes, we can compare the argmax() locations of that 1. Remember that y here is still that placeholder we created at the very beginning; we will perform a series of operations to get a Tensor that we can eventually feed the test data into via an evaluation method. What we are currently running will still be empty of test data:
# Test model
correct_predictions = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
print(correct_predictions[0])
In order to get a numerical value for our predictions we will need to use tf.cast to cast the Tensor of booleans back into a Tensor of Floating point values in order to take the mean of it.
correct_predictions = tf.cast(correct_predictions, "float")
print(correct_predictions[0])
Now we use the tf.reduce_mean function in order to grab the mean of the elements across the tensor.
accuracy = tf.reduce_mean(correct_predictions)
type(accuracy)
This may seem a little strange, but this accuracy is still a Tensor object. Remember that we still need to pass in our actual test data! Now we can call the MNIST test labels and images and evaluate our accuracy!
mnist.test.labels
mnist.test.images
The eval() method allows you to directly evaluate this tensor in a Session without needing to call sess.run():
print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
Download the example data files if we don't already have them.
import os
import urllib  # Python 2: urlretrieve lives directly in urllib

targdir = 'a1835_xmm'
if not os.path.isdir(targdir):
    os.mkdir(targdir)

filenames = ('P0098010101M2U009IMAGE_3000.FTZ',
             'P0098010101M2U009EXPMAP3000.FTZ',
             'P0098010101M2X000BKGMAP3000.FTZ')

remotedir = 'http://heasarc.gsfc.nasa.gov/FTP/xmm/data/rev0/0098010101/PPS/'

for filename in filenames:
    path = os.path.join(targdir, filename)
    url = os.path.join(remotedir, filename)
    if not os.path.isfile(path):
        urllib.urlretrieve(url, path)

imagefile, expmapfile, bkgmapfile = [os.path.join(targdir, filename) for filename in filenames]

for filename in os.listdir(targdir):
    print('{0:>10.2f} KB {1}'.format(os.path.getsize(os.path.join(targdir, filename)) / 1024.0, filename))
The XMM MOS2 image
Let's find the "science" image taken with the MOS2 camera, and display it.
imfits = pyfits.open(imagefile)
imfits.info()
imfits is a FITS object containing multiple data structures. The image itself is an array of integer type, of size 648x648 pixels, stored in the primary "header data unit" or HDU. If we need it to be floating point for some reason, we need to cast it: im = imfits[0].data.astype(np.float32). Note that this (probably?) prevents us from using the pyfits "writeto" method to save any changes. Assuming the integer type is OK, just get a pointer to the image data. Accessing the .data member of the FITS object returns the image data as a numpy ndarray.
im = imfits[0].data
Let's look at this with ds9.
!ds9 -log "$imagefile"
If you don't have the image viewing tool ds9, you should install it - it's very useful astronomical software. You can download it (later!) from this webpage. We can also display the image in the notebook:
plt.imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower');
Exercise
What is going on in this image? Make a list of everything that is interesting about this image with your neighbor, and we'll discuss the features you identify in about 5 minutes' time.
index = np.unravel_index(im.argmax(), im.shape)
print("image dimensions:", im.shape)
print("location of maximum pixel value:", index)
print("maximum pixel value: ", im[index])
Perplexity on Each Dataset
print('Train Perplexity: ', report['train_perplexity'])
print('Valid Perplexity: ', report['valid_perplexity'])
print('Test Perplexity: ', report['test_perplexity'])
Loss vs. Epoch
%matplotlib inline
for k in logs.keys():
    plt.plot(logs[k][0], logs[k][1], label=str(k) + ' (train)')
    plt.plot(logs[k][0], logs[k][2], label=str(k) + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
Perplexity vs. Epoch
%matplotlib inline
for k in logs.keys():
    plt.plot(logs[k][0], logs[k][3], label=str(k) + ' (train)')
    plt.plot(logs[k][0], logs[k][4], label=str(k) + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show()
Generations
def print_sample(sample):
    enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
    gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
    print('Input: ' + enc_input + '\n')
    print('Gend: ' + sample['generated'] + '\n')
    print('True: ' + gold + '\n')
    print('\n')

for sample in report['train_samples']:
    print_sample(sample)

for sample in report['valid_samples']:
    print_sample(sample)

for sample in report['test_samples']:
    print_sample(sample)
BLEU Analysis
print 'Overall Score: ', report['bleu']['score'], '\n'
print '1-gram Score: ', report['bleu']['components']['1']
print '2-gram Score: ', report['bleu']['components']['2']
print '3-gram Score: ', report['bleu']['components']['3']
print '4-gram Score: ', report['bleu']['components']['4']
N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We can expect very low scores for the ground truth; high scores can expose hyper-common generations.
npairs_generated = report['n_pairs_bleu_generated']
npairs_gold = report['n_pairs_bleu_gold']

print 'Overall Score (Generated): ', npairs_generated['score'], '\n'
print '1-gram Score: ', npairs_generated['components']['1']
print '2-gram Score: ', npairs_generated['components']['2']
print '3-gram Score: ', npairs_generated['components']['3']
print '4-gram Score: ', npairs_generated['components']['4']
print '\n'
print 'Overall Score (Gold): ', npairs_gold['score'], '\n'
print '1-gram Score: ', npairs_gold['components']['1']
print '2-gram Score: ', npairs_gold['components']['2']
print '3-gram Score: ', npairs_gold['components']['3']
print '4-gram Score: ', npairs_gold['components']['4']
Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU: we expect low scores for the ground truth, while hyper-common generations raise the scores.
print 'Average Generated Score: ', report['average_alignment_generated']
print 'Average Gold Score: ', report['average_alignment_gold']
The exploratory modeling workbench comes with a separate analysis package. This analysis package contains prim, so let's import it. The workbench also has its own logging functionality; we can turn this on to get more insight into prim while it is running.
from ema_workbench.analysis import prim
from ema_workbench.util import ema_logging

ema_logging.log_to_stderr(ema_logging.INFO);
Next, we need to instantiate the prim algorithm. To mimic the original work of Ben Bryant and Rob Lempert, we set the peeling alpha to 0.1. The peeling alpha determines how much data is peeled off in each iteration of the algorithm; the lower the value, the less data is removed in each iteration. The minimum coverage threshold that a box should meet is set to 0.8. Next, we can use the instantiated algorithm to find a first box.
prim_alg = prim.Prim(x, y, threshold=0.8, peel_alpha=0.1)
box1 = prim_alg.find_box()
Let's investigate this first box in some detail. A first thing to look at is the trade-off between coverage and density. The box has a convenience function for this called show_tradeoff.
box1.show_tradeoff()
plt.show()
Since we are doing this analysis in a notebook, we can take advantage of the interactivity that the browser offers. A relatively recent addition to the python ecosystem is the library altair. Altair can be used to create interactive plots for use in a browser. Altair is an optional dependency for the workbench. If available, we can create the following visual.
box1.inspect_tradeoff()
Here we can interactively explore the boxes associated with each point in the density-coverage trade-off. It also offers mouse-overs for the various points on the trade-off curve. Given the id of each point, we can also use the workbench to manually inspect the peeling trajectory. Following Bryant & Lempert, we inspect box 21.
box1.resample(21)
box1.inspect(21)
box1.inspect(21, style="graph")
plt.show()
If one were to do a detailed comparison with the results reported in the original article, one would see small numerical differences. These differences arise out of subtle differences in implementation. The most important difference is that the exploratory modeling workbench uses a custom objective function inside prim which is different from the one used in the scenario discovery toolkit. Other differences have to do with details of the hill-climbing optimization used in prim, in particular how ties are handled in selecting the next step. The differences between the two implementations are only numerical and don't affect the overarching conclusions drawn from the analysis. Let's select box 21 and get a more detailed view of what it looks like. Following Bryant et al., we can use scatter plots for this.
box1.select(21)
fig = box1.show_pairs_scatter(21)
plt.show()
Because the last restriction is not significant, we can choose to drop this restriction from the box.
box1.drop_restriction("Cellulosic cost")
box1.inspect(style="graph")
plt.show()
We have now found a first box that explains over 75% of the cases of interest. Let's see if we can find a second box that explains the remainder of the cases.
box2 = prim_alg.find_box()
As we can see, we are unable to find a second box. The best coverage we can achieve is 0.35, which is well below the specified 0.8 threshold. Let's look at the final overall results from interactively fitting PRIM to the data. For this, we can use two convenience functions that transform the stats and boxes to pandas data frames.
prim_alg.stats_to_dataframe()
prim_alg.boxes_to_dataframe()
CART
The way of interacting with CART is quite similar to how we set up the prim analysis. We import cart from the analysis package, instantiate the algorithm, and fit CART to the data via the build_tree method.
from ema_workbench.analysis import cart

cart_alg = cart.CART(x, y, 0.05)
cart_alg.build_tree()
Now that we have trained CART on the data, we can investigate its results. Just like PRIM, we can use stats_to_dataframe and boxes_to_dataframe to get an overview.
cart_alg.stats_to_dataframe()
cart_alg.boxes_to_dataframe()
Alternatively, we might want to look at the classification tree directly. For this, we can use the show_tree method.
fig = cart_alg.show_tree()
fig.set_size_inches((18, 12))
plt.show()
Ordinary Differential Equations
The previously generated ordinary differential equations that describe chemical kinetic reactions are loaded below. These expressions describe the right hand side of this mathematical equation:
$$\frac{d\mathbf{y}}{dt} = \mathbf{f}(\mathbf{y}(t))$$
where the state vector $\mathbf{y}(t)$ is made up of 14 states, i.e. $\mathbf{y}(t) \in \mathbb{R}^{14}$. Below, the variable rhs_of_odes represents $\mathbf{f}(\mathbf{y}(t))$ and states represents $\mathbf{y}(t)$. From now on we will simply use $\mathbf{y}$ instead of $\mathbf{y}(t)$ and assume an implicit function of $t$.
from scipy2017codegen.chem import load_large_ode

rhs_of_odes, states = load_large_ode()
Exercise [2 min]
Display the expressions (rhs_of_odes and states), inspect them, and find out their types and dimensions. What are some of the characteristics of the equations (type of mathematical expressions, linear or non-linear, etc.)?
Double Click For Solution
<!--
rhs_of_odes
type(rhs_of_odes)
rhs_of_odes.shape
# rhs_of_odes is a 14 x 1 SymPy matrix of expressions. The expressions are
# long multivariate polynomials.
states
type(states)
states.shape
# states is a 14 x 1 SymPy matrix of symbols

The equations are nonlinear equations of the states. There are 14 equations and 14 states. The coefficients in the equations are various floating point numbers.
-->
# write your solution here
Compute the Jacobian
As has been shown in the previous lesson, the Jacobian of the right hand side of the differential equations is often very useful for computations such as integration and optimization. With:
$$\frac{d\mathbf{y}}{dt} = \mathbf{f}(\mathbf{y})$$
the Jacobian is defined as:
$$\mathbf{J}(\mathbf{y}) = \frac{\partial\mathbf{f}(\mathbf{y})}{\partial\mathbf{y}}$$
SymPy can compute the Jacobian of matrix objects with the Matrix.jacobian() method.
Exercise [3 min]
Look up the Jacobian in the SymPy documentation, then compute the Jacobian and store the result in the variable jac_of_odes. Inspect the resulting Jacobian for dimensionality, type, and symbolic form.
Double Click For Solution
<!--
jac_of_odes = rhs_of_odes.jacobian(states)
type(jac_of_odes)
jac_of_odes.shape
jac_of_odes

The Jacobian is a 14 x 14 SymPy matrix and contains 196 expressions which are linear functions of the state variables.
-->
# write your answer here
C Code Printing
The two expressions are large and will likely have to be executed many thousands of times to compute the desired numerical values, so we want them to execute as fast as possible. We can use SymPy to print these expressions as C code. We will design a double precision C function that evaluates both $\mathbf{f}(\mathbf{y})$ and $\mathbf{J}(\mathbf{y})$ simultaneously given the values of the states $\mathbf{y}$. Below is a basic template for a C program that includes such a function, evaluate_odes(). Our job is to populate the function with the C version of the SymPy expressions.

```C
#include <math.h>
#include <stdio.h>

void evaluate_odes(const double state_vals[14], double rhs_result[14], double jac_result[196])
{
    // We need to fill in the code here using SymPy.
}

int main() {

    // initialize the state vector with some values
    double state_vals[14] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14};

    // create "empty" 1D arrays to hold the results of the computation
    double rhs_result[14];
    double jac_result[196];

    // call the function
    evaluate_odes(state_vals, rhs_result, jac_result);

    // print the computed values to the terminal
    int i;
    printf("The right hand side of the equations evaluates to:\n");
    for (i = 0; i < 14; i++) {
        printf("%lf\n", rhs_result[i]);
    }

    printf("\nThe Jacobian evaluates to:\n");
    for (i = 0; i < 196; i++) {
        printf("%lf\n", jac_result[i]);
    }

    return 0;
}
```

Instead of using the ccode convenience function you learned earlier, let's use the underlying code printer class to do the printing. This will allow us to modify the class for custom printing further down.
from sympy.printing.ccode import C99CodePrinter
All printing classes have to be instantiated and then the .doprint() method can be used to print SymPy expressions. Let's try to print the right hand side of the differential equations.
printer = C99CodePrinter()
print(printer.doprint(rhs_of_odes))
In this case, the C code printer does not do what we desire. It does not support printing a SymPy Matrix (see the first line of the output). In C, one possible representation of a matrix is an array type. The array type in C stores contiguous values, e.g. doubles, in a chunk of memory. You can declare an array of doubles in C like:

```C
double my_array[10];
```

The word double is the data type of the individual values in the array, which must all be the same. The word my_array is the variable name we choose to name the array, and the [10] is the syntax to declare that this array will have 10 values. The array is "empty" when first declared and can be filled with values like so:

```C
my_array[0] = 5;
my_array[1] = 6.78;
my_array[2] = my_array[0] * 12;
```

or initialized at declaration like:

```C
double my_array[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
```

It is possible to declare multidimensional arrays in C that could map more directly to the indices of our two dimensional matrix, but in this case we will map our two dimensional matrix to a one dimensional array using C's contiguous row ordering. The code printers are capable of dealing with this need through the assign_to keyword argument in the .doprint() method, but we must define a SymPy object that is appropriate to be assigned to. In our case, since we want to assign a Matrix, we need to use an appropriately sized MatrixSymbol.
import sympy as sym

rhs_result = sym.MatrixSymbol('rhs_result', 14, 1)
print(rhs_result)
print(rhs_result[0])
print(printer.doprint(rhs_of_odes, assign_to=rhs_result))
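To make the row-major 2D-to-1D mapping concrete: entry (i, j) of the 14x14 Jacobian lands at flat index 14*i + j. A small illustration (the helper name is mine, not part of the tutorial):

```python
def flat_index(i, j, ncols=14):
    # row-major (C) ordering: walk a full row before moving down
    return i * ncols + j

print(flat_index(0, 0), flat_index(0, 13), flat_index(13, 13))  # 0 13 195
```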
Notice that we have proper array value assignment and valid lines of C code that can be used in our function.
Exercise [5 min]
Print out valid C code for the Jacobian matrix.
Double Click For Solution
<!--
jac_result = sym.MatrixSymbol('jac_result', 14, 14)
print(jac_result)
print(printer.doprint(jac_of_odes, assign_to=jac_result))
-->
# write your answer here
Changing the Behavior of the Printer
The SymPy code printers are relatively easy to extend. They are designed such that if you want to change how a particular SymPy object prints, for example a Symbol, then you only need to modify the _print_Symbol method of the printer. In general, the code printers have a method for every SymPy object and also many builtin types. Use tab completion with C99CodePrinter._print_ to see all of the options. Once you find the method you want to modify, it is often useful to look at the existing implementation of the print method to see how the code is written.
C99CodePrinter._print_Symbol??
Below is a simple example of overriding the Symbol printer method. Note that you should use the self._print() method instead of simply returning the string so that the proper printer, self._print_str(), is dispatched. This is most important if you are printing non-singletons, i.e. expressions that are made up of multiple singletons.
C99CodePrinter._print_str??

class MyCodePrinter(C99CodePrinter):
    def _print_Symbol(self, expr):
        return self._print("No matter what symbol you pass in I will always print:\n\nNi!")

my_printer = MyCodePrinter()

theta = sym.symbols('theta')
theta
print(my_printer.doprint(theta))
Exercise [10 min]
One issue with our current code printer is that the expressions use the symbols y0, y1, ..., y13 instead of accessing the values directly from the arrays with state_vals[0], state_vals[1], ..., state_vals[13]. We could go back and rename our SymPy symbols to use brackets, but another way would be to override the _print_Symbol() method to print these symbols as we desire. Modify the code printer so that it prints with the proper array access in the expression.
Double Click For Solution: Subclassing
<!--
The following solution examines the symbol and, if it is a state variable, overrides the printer; otherwise it uses the parent class to print the symbol as a fallback.

class MyCodePrinter(C99CodePrinter):
    def _print_Symbol(self, symbol):
        if symbol in states:
            idx = list(states).index(symbol)
            return self._print('state_vals[{}]'.format(idx))
        else:
            return super()._print_Symbol(symbol)

my_printer = MyCodePrinter()
print(my_printer.doprint(rhs_of_odes, assign_to=rhs_result))
-->
Double Click For Solution: Exact replacement
<!--
Another option is to replace the symbols with `MatrixSymbol` elements. Notice that the C printer assumes that a 2D matrix will get mapped to a 1D C array.

state_vals = sym.MatrixSymbol('state_vals', 14, 1)
state_array_map = dict(zip(states, state_vals))
print(state_array_map)
print(printer.doprint(rhs_of_odes.xreplace(state_array_map), assign_to=rhs_result))
-->
# write your answer here
Bonus Exercise
Do this exercise if you finish the previous one quickly. It turns out that calling pow() for low-value integer exponents executes more slowly than simply expanding the multiplication. For example, pow(x, 2) could be printed as x*x. Modify the C99CodePrinter._print_Pow method to expand the multiplication if the exponent is less than or equal to 4. You may want to have a look at the source code with printer._print_Pow??
Note that a Pow expression has an .exp for the exponent and a .base for the item being raised. For example, $x^2$ would have:

```python
expr = x**2
expr.base == x
expr.exp == 2
```

Double Click for Solution
<!--
printer._print_Pow??

class MyCodePrinter(C99CodePrinter):
    def _print_Pow(self, expr):
        if expr.exp.is_integer and expr.exp > 0 and expr.exp <= 4:
            return '*'.join([self._print(expr.base) for i in range(expr.exp)])
        else:
            return super()._print_Pow(expr)

my_printer = MyCodePrinter()
x = sym.Symbol('x')
my_printer.doprint(x)
my_printer.doprint(x**2)
my_printer.doprint(x**4)
my_printer.doprint(x**5)
my_printer.doprint(x**1.5)
-->
# write your answer here
Common Subexpression Elimination
If you look carefully at the expressions in the two matrices, you'll see repeated expressions. These are not ideal in the sense that the computer has to repeat the exact same calculation multiple times. For large expressions this can be a major issue. Compilers, such as gcc, can often eliminate common subexpressions on their own when different optimization flags are invoked, but for complex expressions the algorithms in some compilers do not do a thorough job, or compilation can take an extremely long time. SymPy has tools to perform common subexpression elimination which are both thorough and reasonably efficient. In particular, if gcc is run with the lowest optimization setting -O0, cse can give large speedups. For example, if you have two expressions:

```python
a = x*y + 5
b = x*y + 6
```

you can convert this to these three expressions:

```python
z = x*y
a = z + 5
b = z + 6
```

and x*y only has to be computed once. The cse() function in SymPy returns the subexpression, z = x*y, and the simplified expressions: a = z + 5, b = z + 6. Here is how it works:
sym.cse?

sub_exprs, simplified_rhs = sym.cse(rhs_of_odes)
for var, expr in sub_exprs:
    sym.pprint(sym.Eq(var, expr))
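As a minimal, self-contained illustration of the a/b example above (my own snippet, assuming SymPy is imported as sym):

```python
import sympy as sym

x, y = sym.symbols('x y')
sub_exprs, reduced = sym.cse([x*y + 5, x*y + 6])
print(sub_exprs)  # [(x0, x*y)]
print(reduced)    # [x0 + 5, x0 + 6]
```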
cse() can return a number of simplified expressions and to do this it returns a list. In our case we have 1 simplified expression that can be accessed as the first item of the list.
type(simplified_rhs)
len(simplified_rhs)
simplified_rhs[0]
You can find common subexpressions among multiple objects also:
jac_of_odes = rhs_of_odes.jacobian(states)

sub_exprs, simplified_exprs = sym.cse((rhs_of_odes, jac_of_odes))

for var, expr in sub_exprs:
    sym.pprint(sym.Eq(var, expr))

simplified_exprs[0]
simplified_exprs[1]
Exercise [15 min]
Use common subexpression elimination to print out C code for your two arrays such that:

```C
double x0 = first_sub_expression;
...
double xN = last_sub_expression;
rhs_result[0] = expressions_containing_the_subexpressions;
...
rhs_result[13] = ...;
jac_result[0] = ...;
...
jac_result[195] = ...;
```

The code you create can be copied and pasted into the provided template above to make a C program. Refer back to the introduction to C code printing above. To give you a bit of help, we will first introduce the Assignment class. The printers know how to print variable assignments that are defined by an Assignment instance.
from sympy.printing.codeprinter import Assignment

print(printer.doprint(Assignment(theta, 5)))
The following code demonstrates a way to use cse() to simplify single matrix objects. Note that we use ImmutableDenseMatrix because all dense matrices are internally converted to this type in the printers. Check the type of your matrices to see.
class CMatrixPrinter(C99CodePrinter):
    def _print_ImmutableDenseMatrix(self, expr):
        sub_exprs, simplified = sym.cse(expr)
        lines = []
        for var, sub_expr in sub_exprs:
            lines.append('double ' + self._print(Assignment(var, sub_expr)))
        M = sym.MatrixSymbol('M', *expr.shape)
        return '\n'.join(lines) + '\n' + self._print(Assignment(M, expr))

p = CMatrixPrinter()
print(p.doprint(jac_of_odes))
Now create a custom printer that uses cse() on the two matrices simultaneously so that subexpressions are not repeated. Hint: think about how the list printer method, _print_list(self, list_of_exprs), might help here.
Double Click For Solution
<!--
class CMatrixPrinter(C99CodePrinter):

    def _print_list(self, list_of_exprs):
        # NOTE : The MutableDenseMatrix is turned into an ImmutableMatrix inside here.
        if all(isinstance(x, sym.ImmutableMatrix) for x in list_of_exprs):
            sub_exprs, simplified_exprs = sym.cse(list_of_exprs)
            lines = []
            for var, sub_expr in sub_exprs:
                ass = Assignment(var, sub_expr.xreplace(state_array_map))
                lines.append('double ' + self._print(ass))
            for mat in simplified_exprs:
                lines.append(self._print(mat.xreplace(state_array_map)))
            return '\n'.join(lines)
        else:
            return super()._print_list(list_of_exprs)

    def _print_ImmutableDenseMatrix(self, expr):
        if expr.shape[1] > 1:
            M = sym.MatrixSymbol('jac_result', *expr.shape)
        else:
            M = sym.MatrixSymbol('rhs_result', *expr.shape)
        return self._print(Assignment(M, expr))

p = CMatrixPrinter()
print(p.doprint([rhs_of_odes, jac_of_odes]))
-->
# write your answer here
Bonus Exercise: Compile and Run the C Program
Below we provide you with a template for the C program described above. You can use it by passing in a string like:

```python
c_template.format(code='the holy grail')
```

Use this template and your code printer to create a file called run.c in the working directory. To compile the code there are several options. The first is gcc (the GNU C Compiler). If you have Linux, Mac, or Windows (with mingw installed), you can use the Jupyter notebook ! command to send your command to the terminal. For example:

```ipython
!gcc run.c -lm -o run
```

This will compile run.c, link against the C math library with -lm, and output, -o, to a file run (Mac/Linux) or run.exe (Windows). On Mac and Linux the program can be executed with:

```ipython
!./run
```

and on Windows:

```ipython
!run.exe
```

Other options are using the clang compiler or the Windows cl compiler command:

```ipython
!clang run.c -lm -o run
!cl run.c -lm
```

Double Click For Solution
<!--
c_program = c_template.format(code=p.doprint([rhs_of_odes, jac_of_odes]))
print(c_program)

with open('run.c', 'w') as f:
    f.write(c_program)
-->
c_template = """\
#include <math.h>
#include <stdio.h>

void evaluate_odes(const double state_vals[14], double rhs_result[14], double jac_result[196])
{{
    // We need to fill in the code here using SymPy.
{code}
}}

int main() {{

    // initialize the state vector with some values
    double state_vals[14] = {{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14}};

    // create "empty" 1D arrays to hold the results of the computation
    double rhs_result[14];
    double jac_result[196];

    // call the function
    evaluate_odes(state_vals, rhs_result, jac_result);

    // print the computed values to the terminal
    int i;
    printf("The right hand side of the equations evaluates to:\\n");
    for (i = 0; i < 14; i++) {{
        printf("%lf\\n", rhs_result[i]);
    }}

    printf("\\nThe Jacobian evaluates to:\\n");
    for (i = 0; i < 196; i++) {{
        printf("%lf\\n", jac_result[i]);
    }}

    return 0;
}}\
"""

# write your answer here
As always, let's do imports and initialize a logger and a new bundle.
import phoebe
from phoebe import u  # units
import numpy as np
import matplotlib.pyplot as plt

logger = phoebe.logger()

b = phoebe.default_binary()

b.add_dataset('lc', dataset='lc01')
b.add_dataset('mesh', times=[0], columns=['intensities*'])
2.3/tutorials/gravb_bol.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Relevant Parameters

The 'gravb_bol' parameter corresponds to the $\beta$ coefficient for gravity darkening corrections.
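For context (the tutorial itself does not derive this): $\beta$ is the exponent in the standard gravity-darkening law, in which the local effective temperature scales with the local surface gravity as

$$T_{\mathrm{eff}}^4 \propto g^{\beta},$$

with $\beta = 1$ (von Zeipel) appropriate for stars with radiative envelopes and $\beta \approx 0.32$ (Lucy) for convective envelopes. These two limits are where the "suggested" values checked below come from.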
print(b['gravb_bol']) print(b['gravb_bol@primary'])
2.3/tutorials/gravb_bol.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
If you have a logger enabled, PHOEBE will print a warning if the value of gravb_bol is outside the "suggested" ranges. Note that this is strictly a warning and will never turn into an error at b.run_compute(). You can also manually call b.run_checks(). The first returned item tells whether the system has passed the checks: True means it has passed, False means it has failed, and None means the checks pass but with a warning. The second returned item gives the first warning/error message raised by the checks. The checks use the following "suggested" values:
* teff 8000+: gravb_bol >= 0.9 (suggest 1.0)
* teff 6600-8000: gravb_bol 0.32-1.0
* teff 6600-: gravb_bol < 0.9 (suggest 0.32)
print(b.run_checks()) b['teff@primary'] = 8500 b['gravb_bol@primary'] = 0.8 print(b.run_checks()) b['teff@primary'] = 7000 b['gravb_bol@primary'] = 0.2 print(b.run_checks()) b['teff@primary'] = 6000 b['gravb_bol@primary'] = 1.0 print(b.run_checks())
2.3/tutorials/gravb_bol.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Influence on Intensities
b['teff@primary'] = 6000 b['gravb_bol@primary'] = 0.32 b.run_compute(model='gravb_bol_32') afig, mplfig = b['primary@mesh01@gravb_bol_32'].plot(fc='intensities', ec='None', show=True) b['gravb_bol@primary'] = 1.0 b.run_compute(model='gravb_bol_10') afig, mplfig = b['primary@mesh01@gravb_bol_10'].plot(fc='intensities', ec='None', show=True)
2.3/tutorials/gravb_bol.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Comparing these two plots, it is essentially impossible to notice any difference between the two models. But if we compare the intensities directly, we can see a subtle effect, with a maximum relative difference of about 3%.
np.nanmax((b.get_value('intensities', component='primary', model='gravb_bol_32') - b.get_value('intensities', component='primary', model='gravb_bol_10'))/b.get_value('intensities', component='primary', model='gravb_bol_10'))
2.3/tutorials/gravb_bol.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Description

A heap is a type of binary tree that supports fast insertion and fast access to its largest element. This makes heaps a good candidate for sorting large amounts of data. A (max-)heap has the following properties:
- the root node holds the maximum key
- the key stored at any non-root node is at most the value of its parent

Therefore:
- the keys along any path from the root node to a leaf node are in nonincreasing order

However:
- the left and right sub-trees of a node have no formal relationship to each other

Storage

A binary tree can be represented using an array with the following indexing (integer/floor division for the parent):

$$\texttt{parent}\left(i\right) = \left\lfloor (i-1)/2 \right\rfloor$$

$$\texttt{left}\left(i\right) = 2i+1$$

$$\texttt{right}\left(i\right) = 2i+2$$

Example

root node index: $0$
  left child: $1$
    left child: $2*1+1 = 3$
    right child: $2*1+2 = 4$
  right child: $2$
    left child: $2*2 + 1 = 5$
    right child: $2*2 + 2 = 6$

The figure below provides a visual demonstration.
binary_heap_allocation_example()
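As a quick sanity check of the index formulas above, here is a small illustrative snippet (not part of the original notebook; the same helper functions are defined again in the Operations section below):

def parent(i):
    return (i - 1) // 2

def left(i):
    return 2*i + 1

def right(i):
    return 2*i + 2

heap = [16, 14, 10, 8, 7, 9, 3]

for i in range(len(heap)):
    # gather the children that actually exist in the array
    children = [heap[c] for c in (left(i), right(i)) if c < len(heap)]
    print('node {} (value {}) -> children {}'.format(i, heap[i], children))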
Heaps.ipynb
abeschneider/algorithm_notes
mit
Operations Inserting a new item into the heap To insert a new item into the heap, we start by first adding it to the end. Once added, we will percolate the item up until its parent is larger than the item.
def parent(i):
    # floor division keeps the index an integer (and behaves the same in Python 2 and 3)
    return (i - 1) // 2

def left(i):
    return 2*i + 1

def right(i):
    return 2*i + 2

def percolate_up(heap, startpos, pos):
    ppos = parent(pos)
    while pos > startpos and heap[ppos] < heap[pos]:
        # percolate value up by swapping current position with parent position
        heap[pos], heap[ppos] = heap[ppos], heap[pos]

        # move up one node
        pos = ppos
        ppos = parent(pos)

def heap_insert(heap, value):
    # add value to end
    heap.append(value)

    # move value up heap until the nodes below it are smaller
    percolate_up(heap, 0, len(heap)-1)
Heaps.ipynb
abeschneider/algorithm_notes
mit
To see why this works, we can visualize the algorithm. We start with a new value of 100 (highlighted in red), which is inserted at the bottom of the heap. We percolate 100 up (each swap is highlighted) until it is placed at the root node. Once finished, the heap's properties are restored: no child has a larger value than its parent. To get a good sense of how percolate_up works, try putting different values in for the heap. Note that it won't work correctly if the initial array isn't already a proper heap.
heap = [16, 14, 10, 8, 7, 9, 3, 2, 4] heap.append(100) insert_item_to_heap_example(heap)
Heaps.ipynb
abeschneider/algorithm_notes
mit
A quick example of using the code:
heap = []
heap_insert(heap, 20)
print("adding 20: ", heap) # [20]

heap_insert(heap, 5)
print("adding 5: ", heap) # [20, 5]

heap_insert(heap, 1)
print("adding 1: ", heap) # [20, 5, 1]

heap_insert(heap, 50)
print("adding 50: ", heap) # [50, 20, 1, 5]

heap_insert(heap, 6)
print("adding 6: ", heap) # [50, 20, 1, 5, 6]

with Canvas(400, 150) as ctx:
    draw_binary_tree(ctx, (200, 50), heap)
Heaps.ipynb
abeschneider/algorithm_notes
mit
Removing an item from the heap

Removing the root node from the heap gives the largest value. In its place, the last value in the heap can be placed at the root, and the heap properties are then restored. To restore the heap properties, the function percolate_down starts at the root node and traverses down the tree. At every node it compares the current node's value with its left and right children. If both children are smaller than the current node, the heap properties guarantee that the rest of the tree is correctly ordered. If the current node is less than the left or right child, it is swapped with the larger of the two. To understand why this works, consider the two possibilities:

(1) The current node is largest. This meets the definition of a heap.
heap = [10, 5, 3] with Canvas(400, 80) as ctx: draw_binary_tree(ctx, (200, 20), heap)
Heaps.ipynb
abeschneider/algorithm_notes
mit
(2) The left child is largest. In this case, if we swap the parent node with the larger child, the heap properties are restored (i.e., the top node is larger than either of its children).
heap1 = [5, 10, 3] heap2 = [10, 5, 3] with Canvas(400, 80) as ctx: draw_binary_tree(ctx, (100, 20), heap1) draw_binary_tree(ctx, (300, 20), heap2)
Heaps.ipynb
abeschneider/algorithm_notes
mit
We have to do this recursively down the tree, as every swap we make can potentially violate the heap property further down. The code for the algorithm is given below:
def percolate_down(heap, i, size):
    l = left(i)
    r = right(i)

    # assume the current node is the largest, then check each child
    largest = i

    if l < size and heap[l] > heap[largest]:
        largest = l

    if r < size and heap[r] > heap[largest]:
        largest = r

    # if the left or right child is greater than the current node
    if largest != i:
        # swap values
        heap[i], heap[largest] = heap[largest], heap[i]

        # continue downward from the position we swapped into
        percolate_down(heap, largest, size)
Heaps.ipynb
abeschneider/algorithm_notes
mit
To see this code in action, we'll start with a well-formed heap. Next, we'll take a value off of the heap by swapping the root node with the last node. Finally, we restore the heap with a call to percolate_down. In the demo below, the highlighted nodes show the two nodes that will be swapped (i.e., the parent node and its larger child).
heap = [16, 14, 10, 8, 7, 9, 3, 2, 4] # swap root with last value (4 is now root, and 16 is at the bottom) heap[0], heap[-1] = heap[-1], heap[0] # remove `16` from heap, and restore the heap properties value = heap.pop() percolate_down_example(heap)
Heaps.ipynb
abeschneider/algorithm_notes
mit
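To tie the pieces together, here is a minimal sketch (not part of the original notebook) that wraps the swap/pop/restore steps above into a single function built on percolate_down and heap_insert. Repeatedly extracting the maximum drains the heap in descending order, which is the essence of heapsort:

def heap_extract_max(heap):
    # assumes a non-empty, well-formed max-heap

    # move the largest value to the end and take it off the array
    heap[0], heap[-1] = heap[-1], heap[0]
    largest = heap.pop()

    # restore the heap property from the root down
    percolate_down(heap, 0, len(heap))
    return largest

# build a heap, then drain it in descending order
heap = []
for value in [16, 14, 10, 8, 7, 9, 3, 2, 4]:
    heap_insert(heap, value)

while heap:
    print(heap_extract_max(heap))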