### Python Data Types

1. Integer
2. Float
3. String
4. Boolean
5. List
6. Set
7. Tuple
8. Dictionary

-----------

### Mutable Python Data Types

The variable's value can be changed.

1. List
2. Set
3. Dictionary

### Immutable Python Data Types

The variable's value can NOT be changed.

1. Integer
2. Float
3. String
4. Boolean
5. Tuple

---

### Integer & Float

```
apple123 = 123
float123 = 123.5
print(apple123)
print(float123)
```

### String

```
string123 = "123"
string123
string123 = apple
```

When creating a string variable, the value must be surrounded by either double quotes `"` or single quotes `'`. If it isn't, Python looks for a variable named `apple`, as in the last line above with `string123 = apple`, but we haven't defined any variable named `apple`, so there's an error. The solution is to surround `apple` with quotes.

```
string123 = "apple" # both do the same thing
string123 = 'apple' # both do the same thing
string123
print(string123[0])
print(string123[1])
```

Indexing in Python starts at 0.

```
string123[0] = 'q'
```

A string is an **immutable** type, as listed above. This means the value of the variable can not be changed, as we are trying to do above by changing the first character from `a` to `q`. The only way to change the value of the variable `string123` is to assign it a new value.

```
string123 = 'orange'
string123
```

### Boolean

```
bool123 = True
bool123 = False
bool123
```

Any bare word must have quotes around it or Python throws an error, unless the word is a keyword (or the name of a variable you have defined). `True` and `False` are two keywords in Python that don't require quotes. `False` is not the same as `'False'`, and likewise for `True` and `'True'`.

```
False == 'False'
```

### List

```
list123 = [100, 2, 2, 453.1231, 'apple', 'orange']
list123
```

Python lists allow for mixing different types of values in the same list. In `list123` there are integers, floats, and strings.

```
print(list123[0])
print(list123[1])
print('First value of the list is:', list123[0])
list123[0] = 'new value'
list123
```

Unlike the string example before, lists are **mutable**. This means the values of the list can be changed by positional indexing, as shown above.

### Set

```
list123
set123 = {100, 2, 2, 453.1231, 'apple', 'orange'}
set123
```

Python sets are a lot like lists except that sets use `{` instead of `[`. Also, sets only keep unique values (no duplicates), so instead of showing two 2's, there's only one.

### Tuple

```
print(list123)
print(set123)
tuple123 = (100, 2, 2, 453.1231, 'apple', 'orange')
tuple123
```

Python tuples are also very similar to lists. They don't force unique values like sets do, though.

```
print('First value of the tuple is:', tuple123[0])
tuple123[0] = 12343
```

However, unlike lists, tuples are **immutable**, meaning they can't be changed with positional indexing. We ran into the same problem with strings previously.

### Dictionary

```
dict123 = {'a': 3, 'b': 5, 'a':10} ## key-value pairs
dict123
dict123[0]
```

Python dictionaries do NOT use positional indexing the way strings, lists, and tuples do. Instead they use key-value pairs. To access values in the dictionary you must use the dictionary's keys, meaning the values to the left of the `:`. Just like sets, dictionaries do NOT allow duplicate keys (the values on the left side of the `:`). There can be duplicate values to the right of the `:`.

```
dict123['a'], dict123['b']
```

### Real-World Examples

- I find myself using strings, integers, floats, and booleans constantly.
- For the other data types, which store multiple values, I find myself using `list` and `dictionary` much more often than `tuple` and `set`, with `tuple` coming up extremely rarely.

---

Assume you're building a program to manage a store's customers (see the sketch below).

- A `list` might store the names of all your customers.
- If you don't want to record names twice, you would use a `set`, or convert the `list` to a `set`.
- If you want to store the names as well as their phone numbers, you would use a `dictionary`.
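As a minimal sketch of that store example (the names and phone numbers here are made up purely for illustration):

```
# Hypothetical customer data for the store example above.
customer_names = ['Alice', 'Bob', 'Alice', 'Dana']   # a list happily stores the duplicate 'Alice'

unique_customers = set(customer_names)               # converting to a set drops the repeated name
print(unique_customers)

# A dictionary maps each customer name (key) to their phone number (value).
phone_numbers = {'Alice': '555-0100', 'Bob': '555-0101', 'Dana': '555-0102'}
print(phone_numbers['Bob'])                          # look values up by key, not by position
```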
## Mocking

### Definition

**Mock**: *verb*,

1. to tease or laugh at in a scornful or contemptuous manner
2. to make a replica or imitation of something

**Mocking**

- Replace a real object with a pretend object, which records how it is called, and can assert if it is called wrong

### Mocking frameworks

* C: [CMocka](http://www.cmocka.org/)
* C++: [googlemock](https://code.google.com/p/googlemock/)
* Python: [unittest.mock](http://docs.python.org/dev/library/unittest.mock)

### Recording calls with mock

Mock objects record the calls made to them:

```
from unittest.mock import Mock

function = Mock(name="myroutine", return_value=2)
function(1)
function(5, "hello", a=True)
function.mock_calls
```

The arguments of each call can be recovered:

```
name, args, kwargs = function.mock_calls[1]
args, kwargs
```

Mock objects can return different values for each call:

```
function = Mock(name="myroutine", side_effect=[2, "xyz"])
function(1)
function(1, "hello", {"a": True})
```

We expect an error if there are no return values left in the list:

```
function()
```

### Using mocks to model test resources

Often we want to write tests for code which interacts with remote resources (e.g. databases, the internet, or data files). We don't want our tests to *actually* interact with the remote resource, as this would mean our tests could fail due to a lost internet connection, for example.

Instead, we can use mocks to assert that our code does the right thing in terms of the *messages it sends*: the parameters of the function calls it makes to the remote resource.

For example, consider the following code that downloads a map from the internet:

```
import requests

def map_at(lat, long, satellite=False, zoom=12, size=(400,400)):
    base = "https://static-maps.yandex.ru/1.x/?"
    params = dict(
        z = zoom,
        size = str(size[0]) + "," + str(size[1]),
        ll = str(long) + "," + str(lat),
        l = "sat" if satellite else "map",
        lang = "en_US"
    )
    return requests.get(base, params=params)

london_map = map_at(51.5073509, -0.1277583)

%matplotlib inline
import IPython
IPython.core.display.Image(london_map.content)
```

We would like to test that it is building the parameters correctly. We can do this by **mocking** the requests object. We need to temporarily replace a method in the library with a mock. We can use "patch" to do this:

```
from unittest.mock import patch

with patch.object(requests, "get") as mock_get:
    london_map = map_at(51.5073509, -0.1277583)
    print(mock_get.mock_calls)
```

Our tests then look like:

```
def test_build_default_params():
    with patch.object(requests, "get") as mock_get:
        default_map = map_at(51.0, 0.0)
        mock_get.assert_called_with(
            "https://static-maps.yandex.ru/1.x/?",
            params={
                "z": 12,
                "size": "400,400",
                "ll": "0.0,51.0",
                "l": "map",
                "lang": "en_US",
            },
        )

test_build_default_params()
```

That was quiet, so it passed. When I'm writing tests, I usually modify one of the expectations to something 'wrong', just to check it's not passing "by accident", run the tests, then change it back!

### Testing functions that call other functions

```
def partial_derivative(function, at, direction, delta=1.0):
    f_x = function(at)
    x_plus_delta = at[:]
    x_plus_delta[direction] += delta
    f_x_plus_delta = function(x_plus_delta)
    return (f_x_plus_delta - f_x) / delta
```

We want to test that the above function does the right thing. It is supposed to compute the derivative of a function of a vector in a particular direction. E.g.:

```
partial_derivative(sum, [0, 0, 0], 1)
```

How do we assert that it is doing the right thing?
With tests like this:

```
from unittest.mock import MagicMock

def test_derivative_2d_y_direction():
    func = MagicMock()
    partial_derivative(func, [0, 0], 1)
    func.assert_any_call([0, 1.0])
    func.assert_any_call([0, 0])

test_derivative_2d_y_direction()
```

We made our mock a "MagicMock" because otherwise the mock results `f_x_plus_delta` and `f_x` can't be subtracted:

```
MagicMock() - MagicMock()
Mock() - Mock()
```
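Another way to avoid the subtraction problem is to give the mock concrete numeric return values, so the arithmetic happens between floats rather than between mock objects. A small sketch; the return values 1.0 and 3.0 are arbitrary choices for illustration:

```
from unittest.mock import Mock

def test_derivative_uses_the_returned_values():
    # function(at) is called first and returns 1.0; function(x_plus_delta) then returns 3.0.
    func = Mock(side_effect=[1.0, 3.0])
    result = partial_derivative(func, [0, 0], 1)
    assert result == 2.0   # (3.0 - 1.0) / delta, with the default delta of 1.0

test_derivative_uses_the_returned_values()
```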
``` # HIDDEN from datascience import * import numpy as np path_data = '../../../data/' %matplotlib inline import matplotlib.pyplot as plots plots.style.use('fivethirtyeight') ``` ### Visualizing Numerical Distributions ### Many of the variables that data scientists study are *quantitative* or *numerical*. Their values are numbers on which you can perform arithmetic. Examples that we have seen include the number of periods in chapters of a book, the amount of money made by movies, and the age of people in the United States. The values of a categorical variable can be given numerical codes, but that doesn't make the variable quantitative. In the example in which we studied Census data broken down by age group, the categorial variable `SEX` had the numerical codes `1` for 'Male,' `2` for 'Female,' and `0` for the aggregate of both groups `1` and `2`. While 0, 1, and 2 are numbers, in this context it doesn't make sense to subtract 1 from 2, or take the average of 0, 1, and 2, or perform other arithmetic on the three values. `SEX` is a categorical variable even though the values have been given a numerical code. For our main example, we will return to a dataset that we studied when we were visualizing categorical data. It is the table `top`, which consists of data from U.S.A.'s top grossing movies of all time. For convenience, here is the description of the table again. The first column contains the title of the movie. The second column contains the name of the studio that produced the movie. The third contains the domestic box office gross in dollars, and the fourth contains the gross amount that would have been earned from ticket sales at 2016 prices. The fifth contains the release year of the movie. There are 200 movies on the list. Here are the top ten according to the unadjusted gross receipts in the column `Gross`. ``` top = Table.read_table(path_data + 'top_movies.csv') # Make the numbers in the Gross and Gross (Adjusted) columns look nicer: top.set_format([2, 3], NumberFormatter) ``` ### Visualizing the Distribution of the Adjusted Receipts ### In this section we will draw graphs of the distribution of the numerical variable in the column `Gross (Adjusted)`. For simplicity, let's create a smaller table that has the information that we need. And since three-digit numbers are easier to work with than nine-digit numbers, let's measure the `Adjusted Gross` receipts in millions of dollars. Note how `round` is used to retain only two decimal places. ``` millions = top.select(0).with_column('Adjusted Gross', np.round(top.column(3)/1e6, 2)) millions ``` ### A Histogram ### A *histogram* of a numerical dataset looks very much like a bar chart, though it has some important differences that we will examine in this section. First, let's just draw a histogram of the adjusted receipts. The `hist` method generates a histogram of the values in a column. The optional `unit` argument is used in the labels on the two axes. The histogram shows the distribution of the adjusted gross amounts, in millions of 2016 dollars. ``` millions.hist('Adjusted Gross', unit="Million Dollars") ``` ### The Horizontal Axis ### The amounts have been grouped into contiguous intervals called *bins*. Although in this dataset no movie grossed an amount that is exactly on the edge between two bins, `hist` does have to account for situations where there might have been values at the edges. So `hist` has an *endpoint convention*: bins include the data at their left endpoint, but not the data at their right endpoint. 
We will use the notation [*a*, *b*) for the bin that starts at *a* and ends at *b* but doesn't include *b*. Sometimes, adjustments have to be made in the first or last bin, to ensure that the smallest and largest values of the variable are included. You saw an example of such an adjustment in the Census data studied earlier, where an age of "100" years actually meant "100 years old or older." We can see that there are 10 bins (some bars are so low that they are hard to see), and that they all have the same width. We can also see that none of the movies grossed fewer than 300 million dollars; that is because we are considering only the top grossing movies of all time. It is a little harder to see exactly where the ends of the bins are situated. For example, it is not easy to pinpoint exactly where the value 500 lies on the horizontal axis. So it is hard to judge exactly where one bar ends and the next begins. The optional argument `bins` can be used with `hist` to specify the endpoints of the bins. It must consist of a sequence of numbers that starts with the left end of the first bin and ends with the right end of the last bin. We will start by setting the numbers in `bins` to be 300, 400, 500, and so on, ending with 2000. ``` millions.hist('Adjusted Gross', bins=np.arange(300,2001,100), unit="Million Dollars") ``` The horizontal axis of this figure is easier to read. The labels 200, 400, 600, and so on are centered at the corresponding values. The tallest bar is for movies that grossed between 300 million and 400 million dollars. A very small number of movies grossed 800 million dollars or more. This results in the figure being "skewed to the right," or, less formally, having "a long right hand tail." Distributions of variables like income or rent in large populations also often have this kind of shape. ### The Counts in the Bins ### The counts of values in the bins can be computed from a table using the `bin` method, which takes a column label or index and an optional sequence or number of bins. The result is a tabular form of a histogram. The first column lists the left endpoints of the bins (but see the note about the final value, below). The second column contains the counts of all values in the `Adjusted Gross` column that are in the corresponding bin. That is, it counts all the `Adjusted Gross` values that are greater than or equal to the value in `bin`, but less than the next value in `bin`. ``` bin_counts = millions.bin('Adjusted Gross', bins=np.arange(300,2001,100)) bin_counts.show() ``` Notice the `bin` value 2000 in the last row. That's not the left end-point of any bar – it's the right end point of the last bar. By the endpoint convention, the data there are not included. So the corresponding `count` is recorded as 0, and would have been recorded as 0 even if there had been movies that made more than \$2,000$ million dollars. When either `bin` or `hist` is called with a `bins` argument, the graph only considers values that are in the specified bins. Once values have been binned, the resulting counts can be used to generate a histogram using the `bin_column` named argument to specify which column contains the bin lower bounds. ``` bin_counts.hist('Adjusted Gross count', bin_column='bin', unit='Million Dollars') ``` ### The Vertical Axis: Density Scale ### The horizontal axis of a histogram is straightforward to read, once we have taken care of details like the ends of the bins. The features of the vertical axis require a little more attention. We will go over them one by one. 
Let's start by examining how to calculate the numbers on the vertical axis. If the calculation seems a little strange, have patience – the rest of the section will explain the reasoning.

**Calculation.** The height of each bar is the percent of elements that fall into the corresponding bin, relative to the width of the bin.

```
counts = bin_counts.relabeled('Adjusted Gross count', 'Count')
percents = counts.with_column(
    'Percent', (counts.column('Count')/200)*100
    )
heights = percents.with_column(
    'Height', percents.column('Percent')/100
    )
heights
```

Go over the numbers on the vertical axis of the histogram above to check that the column `Height` looks correct. The calculations will become clear if we just examine the first row of the table.

Remember that there are 200 movies in the dataset. The [300, 400) bin contains 81 movies. That's 40.5% of all the movies:

$$ \mbox{Percent} = \frac{81}{200} \cdot 100 = 40.5 $$

The width of the [300, 400) bin is $ 400 - 300 = 100$. So

$$ \mbox{Height} = \frac{40.5}{100} = 0.405 $$

The code for calculating the heights used the facts that there are 200 movies in all and that the width of each bin is 100.

**Units.** The height of the bar is 40.5% divided by 100 million dollars, and so the height is 0.405% per million dollars.

This method of drawing histograms creates a vertical axis that is said to be *on the density scale*. The height of a bar is **not** the percent of entries in the bin; it is the percent of entries in the bin relative to the amount of space in the bin. That is why the height measures crowdedness or *density*.

Let's see why this matters.

### Unequal Bins ###

An advantage of the histogram over a bar chart is that a histogram can contain bins of unequal width. Below, the values in the `millions` table are binned into three uneven bins.

```
uneven = make_array(300, 400, 600, 1500)
millions.hist('Adjusted Gross', bins=uneven, unit="Million Dollars")
```

Here are the counts in the three bins.

```
millions.bin('Adjusted Gross', bins=uneven)
```

Although the ranges [300, 400) and [400, 600) have nearly identical counts, the bar over the former is twice as tall as the latter because it is only half as wide. The density of values in [300, 400) is twice as high as the density in [400, 600).

Histograms help us visualize where on the number line the data are most concentrated, especially when the bins are uneven.

### The Problem with Simply Plotting Counts ###

It is possible to display counts directly in a chart, using the `normed=False` option of the `hist` method. The resulting chart has the same shape as a histogram when the bins all have equal widths, though the numbers on the vertical axis are different.

```
millions.hist('Adjusted Gross', bins=np.arange(300,2001,100), normed=False)
```

While the count scale is perhaps more natural to interpret than the density scale, the chart becomes highly misleading when bins have different widths. Below, it appears (due to the count scale) that high-grossing movies are quite common, when in fact we have seen that they are relatively rare.

```
millions.hist('Adjusted Gross', bins=uneven, normed=False)
```

Even though the method used is called `hist`, **the figure above is NOT A HISTOGRAM.** It misleadingly exaggerates the proportion of movies grossing at least 600 million dollars. The height of each bar is simply plotted at the number of movies in the bin, *without accounting for the difference in the widths of the bins*. The picture becomes even more absurd if the last two bins are combined.
``` very_uneven = make_array(300, 400, 1500) millions.hist('Adjusted Gross', bins=very_uneven, normed=False) ``` In this count-based figure, the shape of the distribution of movies is lost entirely. ### The Histogram: General Principles and Calculation ## The figure above shows that what the eye perceives as "big" is area, not just height. This observation becomes particularly important when the bins have different widths. That is why a histogram has two defining properties: 1. The bins are drawn to scale and are contiguous (though some might be empty), because the values on the horizontal axis are numerical. 2. The **area** of each bar is proportional to the number of entries in the bin. Property 2 is the key to drawing a histogram, and is usually achieved as follows: $$ \mbox{area of bar} ~=~ \mbox{percent of entries in bin} $$ The calculation of the heights just uses the fact that the bar is a rectangle: $$ \mbox{area of bar} = \mbox{height of bar} \times \mbox{width of bin} $$ and so $$ \mbox{height of bar} ~=~ \frac{\mbox{area of bar}}{\mbox{width of bin}} ~=~ \frac{\mbox{percent of entries in bin}}{\mbox{width of bin}} $$ The units of height are "percent per unit on the horizontal axis." When drawn using this method, the histogram is said to be drawn on the density scale. On this scale: - The area of each bar is equal to the percent of data values that are in the corresponding bin. - The total area of all the bars in the histogram is 100%. Speaking in terms of proportions, we say that the areas of all the bars in a histogram "sum to 1". ### Flat Tops and the Level of Detail ### Even though the density scale correctly represents percents using area, some detail is lost by grouping values into bins. Take another look at the [300, 400) bin in the figure below. The flat top of the bar, at the level 0.405% per million dollars, hides the fact that the movies are somewhat unevenly distributed across that bin. ``` millions.hist('Adjusted Gross', bins=uneven, unit="Million Dollars") ``` To see this, let us split the [300, 400) bin into 10 narrower bins, each of width 10 million dollars. ``` some_tiny_bins = make_array(300, 310, 320, 330, 340, 350, 360, 370, 380, 390, 400, 600, 1500) millions.hist('Adjusted Gross', bins=some_tiny_bins, unit='Million Dollars') ``` Some of the skinny bars are taller than 0.405 and others are shorter; the first two have heights of 0 because there are no data between 300 and 320. By putting a flat top at the level 0.405 across the whole bin, we are deciding to ignore the finer detail and are using the flat level as a rough approximation. Often, though not always, this is sufficient for understanding the general shape of the distribution. **The height as a rough approximation.** This observation gives us a different way of thinking about the height. Look again at the [300, 400) bin in the earlier histograms. As we have seen, the bin is 100 million dollars wide and contains 40.5% of the data. Therefore the height of the corresponding bar is 0.405% per million dollars. Now think of the bin as consisting of 100 narrow bins that are each 1 million dollars wide. The bar's height of "0.405% per million dollars" means that as a rough approximation, 0.405% of the movies are in each of those 100 skinny bins of width 1 million dollars. Notice that because we have the entire dataset that is being used to draw the histograms, we can draw the histograms to as fine a level of detail as the data and our patience will allow. 
However, if you are looking at a histogram in a book or on a website, and you don't have access to the underlying dataset, then it becomes important to have a clear understanding of the "rough approximation" created by the flat tops. ### Histograms Q&A ### Let's draw the histogram again, this time with four bins, and check our understanding of the concepts. ``` uneven_again = make_array(300, 350, 400, 450, 1500) millions.hist('Adjusted Gross', bins=uneven_again, unit='Million Dollars') millions.bin('Adjusted Gross', bins=uneven_again) ``` Look again at the histogram, and compare the [400, 450) bin with the [450, 1500) bin. **Q**: Which has more movies in it? **A**: The [450, 1500) bin. It has 92 movies, compared with 25 movies in the [400, 450) bin. **Q**: Then why is the [450, 1500) bar so much shorter than the [400, 450) bar? **A**: Because height represents density per unit of space in the bin, not the number of movies in the bin. The [450, 1500) bin does have more movies than the [400, 450) bin, but it is also a whole lot wider. So it is less crowded. The density of movies in it is much lower. ### Differences Between Bar Charts and Histograms ### - Bar charts display one quantity per category. They are often used to display the distributions of categorical variables. Histograms display the distributions of quantitative variables. - All the bars in a bar chart have the same width, and there is an equal amount of space between consecutive bars. The bars of a histogram can have different widths, and they are contiguous. - The lengths (or heights, if the bars are drawn vertically) of the bars in a bar chart are proportional to the value for each category. The heights of bars in a histogram measure densities; the *areas* of bars in a histogram are proportional to the numbers of entries in the bins.
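To make the density-scale arithmetic above concrete without the `datascience` library, here is a minimal NumPy sketch; the 81-movie count for the [300, 400) bin and the total of 200 movies are the figures quoted in the text:

```
import numpy as np

def histogram_heights(counts, bin_edges, total):
    # Height of each bar on the density scale: percent of entries in the bin
    # divided by the width of the bin (percent per horizontal unit).
    counts = np.asarray(counts, dtype=float)
    widths = np.diff(bin_edges)
    percents = counts / total * 100
    return percents / widths

# The [300, 400) bin: 81 of the 200 movies, bin width 100 (million dollars).
print(histogram_heights([81], [300, 400], total=200))   # -> [0.405], i.e. 0.405% per million dollars
```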
# Natural Language Processing

This chapter covers text analysis, also known as natural language processing. We'll cover tokenisation of text, removing stop words, counting words, performing other statistics on words, and analysing the parts of speech.

## Introduction

When doing NLP, it's worth thinking carefully about the unit of analysis: is it a corpus, a text, a line, a paragraph, a sentence, a word, or even a character? It could also be two of these simultaneously, and working with document x token matrices is one very common way of doing NLP. Although we'll be mixing between a few of these in this chapter, thinking about what kind of block of text data you're working with will really help you keep track of what operations are being deployed and how they might interact.

In this chapter, we'll use a single example and apply NLP to it in a few different ways. First, though, we need to read in the text data we'll be using (part of Adam Smith's *The Wealth of Nations*) and do some light cleaning of it. We'll read in our text so that each new line appears on a different row of a **pandas** dataframe. We'll also import the packages we'll need; remember, if you don't already have these installed, you may need to run `pip install packagename` first.

```
import pandas as pd
import string

df = pd.read_csv('https://github.com/aeturrell/coding-for-economists/raw/main/data/smith_won.txt', delimiter = "\n", names=["text"])
df.head()
```

We need to do a bit of light text cleaning before we get on to the more in-depth natural language processing. We'll make use of vectorised string operations as seen in the [Introduction to Text](text-intro) chapter. First, we want to put everything in lower case:

```
df["text"] = df["text"].str.lower()
df.head()
```

Next, we'll remove the punctuation from the text. You may not always wish to do this, but it's a good default.

```
translator = string.punctuation.maketrans({x: "" for x in string.punctuation})
df["text"] = df["text"].str.translate(translator)
df.head()
```

Okay, we now have rows and rows of lower case words without punctuation.

## Tokenisation

We're now going to see an example of tokenisation: the process of taking blocks of text and breaking them down into tokens, most commonly a word, but potentially also all one- and two-word pairs. Note that you might sometimes see all two-word pairs referred to as 2-grams, with an n-gram being all n-word pairs. There are many ways to tokenise text; we'll look at two of the most common: using regular expressions and using pre-configured NLP packages.

### Tokenisation with regular expressions

Because regular expressions excel at finding patterns in text, they can also be used to decide where to split text up into tokens. For a very simple example, let's take the first line of our text example:

```
import re

word_pattern = r'\w+'
tokens = re.findall(word_pattern, df.iloc[0, 0])
tokens
```

This split a single line into one-word tokens that are represented by a list of strings. We could also have asked for other variations, e.g. sentences, by asking to split at every '.'.

### Tokenisation using NLP tools

Many of the NLP packages available in Python come with built-in tokenisation tools. Two of the most loved NLP packages are [**nltk**](https://www.nltk.org/) and [**spaCy**](https://spacy.io/). We'll use nltk for tokenisation.

```
from nltk.tokenize import word_tokenize

word_tokenize(df.iloc[0, 0])
```

We have the same results as before.
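For comparison, the same line could also be tokenised with spaCy; a minimal sketch, assuming the small English pipeline `en_core_web_sm` has already been downloaded (the chapter itself carries on with nltk):

```
import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp(df.iloc[0, 0])
[token.text for token in doc]
```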
Now let's scale this up to our whole corpus while retaining the lines of text, giving us a structure of the form (lines x tokens):

```
df["tokens"] = df["text"].apply(lambda x: word_tokenize(x))
df.head()
```

**nltk** also has a `sent_tokenize` function that tokenises sentences, although as it makes use of punctuation you must take care with what pre-cleaning of text you undertake.

## Removing Stop Words

Stop words are frequent but uninformative words such as 'that', 'which', 'the', 'is', 'and', and 'but'. These words tend to be very common in the English language, but knowing that they appear frequently in a corpus doesn't really tell us much. Therefore, it is quite common to strip these 'stop' words out of text before doing any count-based analysis (or to use methods that implicitly ignore them). Many NLP libraries come with built-in methods that remove stop words.

## Counting Text

There are several ways of performing basic counting statistics on text. We saw one in the previous chapter, `str.count()`, but that only applies to one word at a time. Often, we're interested in the relative counts of words in a corpus. In this section, we'll look at two powerful ways of computing this: using `Counter` and via term frequency-inverse document frequency (TF-IDF).

First, `Counter`, which comes with Python's built-in `collections` module and does pretty much what you'd expect. Here's a simple example:

```
from collections import Counter

fruit_list = ["apple", "apple", "orange", "satsuma", "banana", "orange", "mango", "satsuma", "orange"]
freq = Counter(fruit_list)
freq
```

Counter returns a `collections.Counter` object where the numbers of each type in a given input list are summed. The resulting dictionary of unique counts can be extracted using `dict(freq)`, and `Counter` has some other useful functions too, including `most_common()` which, given a number `n`, returns `n` tuples of the form `(thing, count)`:

```
freq.most_common(10)
```

Say we wanted to apply this not just to every line in our corpus separately, but to our whole corpus in one go; how would we do it? `Counter` will happily accept a list, but our dataframe token column is currently a vector of lists. So we must first transform the token column into a single list of all tokens and then apply `Counter`. To achieve the former and flatten a list of lists, we'll use the `chain` function from `itertools`, which makes an iterator that returns elements from the first iterable until it is exhausted, then proceeds to the next iterable, until all of the iterables are exhausted. For example, given `[a, b, c]` and `[d, e, f]` as arguments, this function would return `[a, b, c, d, e, f]`. Because this function accepts an arbitrary number of iterable arguments, we use the splat operator (`*`) to unpack our list of lists into separate arguments. The second step, using `Counter`, is far more straightforward!

```
import itertools

merged_list = list(itertools.chain(*df["tokens"].to_list()))
freq = Counter(merged_list)
freq.most_common(10)
```

### TF-IDF

Term frequency-inverse document frequency is a measure of term counts (where terms could be 1-grams, 2-grams, etc.) that is weighted to try and
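In practice, TF-IDF weights are usually computed with a library rather than by hand. A minimal sketch using scikit-learn's `TfidfVectorizer` on the cleaned `df["text"]` column; scikit-learn is an assumption here, not necessarily the tool this chapter goes on to use:

```
from sklearn.feature_extraction.text import TfidfVectorizer

# Build a TF-IDF weighted document-term matrix over the cleaned lines of text.
vectoriser = TfidfVectorizer()
tfidf = vectoriser.fit_transform(df["text"])      # sparse matrix of shape (lines, terms)
print(tfidf.shape)
print(vectoriser.get_feature_names_out()[:10])    # a few of the terms in the vocabulary
```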
## Expand log file ``` def save_list_to_file(path, thelist): ''' Tool function to save a list to a .txt file ''' with open(path, 'w') as f: for item in thelist: f.write("%s" % item) with open('epoch_total_test_log_reproduction.txt','r') as f: l = f.readlines() filename = 'epoch_{}_test_log_reproduction.txt' for i in range(1, 150): try: idx = l.index('\n') save_list_to_file(filename.format(str(i)),l[:idx]) l = l[idx+1:] except: print(i) save_list_to_file(filename.format(str(i)),l) ``` ## Training curves ``` import matplotlib.pyplot as plt train_all = [] train = [] test = [] src_path = 'logs/linemod/' filename_train = 'epoch_{}_log.txt' filename_test = 'epoch_{}_test_log.txt' nb_subepochs = 40000 / 800 for epoch in range(1,171): with open(src_path + filename_train.format(str(epoch)),'r') as f: current = 0 for l in f.readlines()[1:]: train_all.append(float(l.split(' ')[-1][9:-2])) current += train_all[-1] current /= nb_subepochs train.append(current) with open(src_path + filename_test.format(str(epoch)),'r') as f: for l in f.readlines()[1:]: test.append(float(l.split(' ')[-1][:-2])) plt.figure(figsize=(10,8)) plt.plot(list(range(1,171)),train,label='train',color='r') plt.plot(list(range(1,171)),test,label='test',color='b') plt.title('Training curves RGB',size=20) plt.xlabel('Epoch', size=15) plt.ylabel('Average distance', size=15) plt.legend() plt.show() ``` ## Training curves reproduction ``` train_all = [] train = [] test = [] src_path = 'logs/linemod/' filename_train = 'epoch_{}_log_reproduction.txt' filename_test = 'epoch_{}_test_log_reproduction.txt' nb_subepochs = 40000 / 800 for epoch in range(1,150): with open(src_path + filename_train.format(str(epoch)),'r') as f: current = 0 for l in f.readlines()[1:]: train_all.append(float(l.split(' ')[-1][9:-2])) current += train_all[-1] current /= nb_subepochs train.append(current) with open(src_path + filename_test.format(str(epoch)),'r') as f: for l in f.readlines()[1:]: test.append(float(l.split(' ')[-1][:-2])) plt.figure(figsize=(10,8)) plt.plot(list(range(1,150)),train,label='train',color='r') plt.plot(list(range(1,150)),test,label='test',color='b') plt.title('Training curves RGB-D',size=20) plt.xlabel('Epoch', size=15) plt.ylabel('Average distance', size=15) plt.legend() plt.show() ``` ## Plot evaluation results ``` import matplotlib.pyplot as plt import numpy as np diameters = [ 0.10209865663, 0.24750624233, 0.17249224865, 0.20140358597000002, 0.15454551808, 0.26147178102, 0.10899920102000001, 0.14554287471000002, 0.27807811733, 0.28260129399, 0.212335825148, ] objlist = ['01', '02', '04', '05', '06', '08', '09', '12', '13', '14', '15'] obj_diameters = [] for idx,obj in enumerate(objlist): with open('../datasets/linemod/Linemod_preprocessed/data/{}/test.txt'.format(obj),'r') as f: if not idx: obj_diameters += (len(f.readlines())-1)*[diameters[idx]] else: obj_diameters += len(f.readlines())*[diameters[idx]] obj_diameters = np.array(obj_diameters) dists_rgb = [] with open('eval_result/linemod/eval_result_logs_RGB_final.txt','r') as f: for line in f.readlines(): dists_rgb.append(float(line.split(' ')[-1][:-2])) f.close() dists_rgb = np.array(dists_rgb) dists_tdepth = [] with open('eval_result/linemod/eval_result_logs_TransDepth.txt','r') as f: for line in f.readlines(): dists_tdepth.append(float(line.split(' ')[-1][:-2])) f.close() dists_tdepth = np.array(dists_tdepth) dists_rmv_icp = [] with open('eval_result/linemod/eval_result_logs_rmv_icp.txt','r') as f: for line in f.readlines(): dists_rmv_icp.append(float(line.split(' ')[-1][:-2])) 
f.close() dists_rmv_icp = np.array(dists_rmv_icp) tolerance = np.linspace(0.1,1,10) rgb_avg_acc = [] tdepth_avg_acc = [] rmv_icp_avg_acc = [] for t in tolerance: rgb_avg_acc.append(1 - np.count_nonzero(np.clip(dists_rgb - t*obj_diameters,0,None)) / len(dists_rgb)) tdepth_avg_acc.append(1 - np.count_nonzero(np.clip(dists_tdepth - t*obj_diameters,0,None)) / len(dists_tdepth)) rmv_icp_avg_acc.append(1 - np.count_nonzero(np.clip(dists_rmv_icp - t*obj_diameters,0,None)) / len(dists_rmv_icp)) plt.figure(figsize=(10,6)) plt.plot(tolerance, rgb_avg_acc, label='RGB', color='r') #plt.plot(tolerance, tdepth_avg_acc, label='trans_depth', color='b') plt.plot(tolerance, rmv_icp_avg_acc, label='RGB-D per-pixel', color='y') #plt.title('Evaluation average accuracy',size=20) plt.xlabel('Diameter tolerance', size=15) plt.ylabel('Matching score', size=15) plt.legend() plt.show() ``` ## Pose results visualization ``` def crop_center(img,cropx,cropy): y,x,z = img.shape startx = x//2-(cropx//2) starty = y//2-(cropy//2) return img[starty:starty+cropy,startx:startx+cropx,:] adds = [2.27,5.34,1.0] #adds = [3.2,9.2,5.1] imgs = ['0029','0056','0287'] #imgs = ['0578','0626','0951'] fig, ax = plt.subplots(nrows=1,ncols=3,figsize=(15,10)) i = 0 for add, img in zip(adds,imgs): ax[i].imshow(crop_center(plt.imread('../../rendus/images/{}.png'.format(img)), 250,250)) ax[i].axis('off') ax[i].set_title('ADD = '+str(add)+' cm') i += 1 ```
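The accuracy-versus-tolerance loop above can be wrapped in a small helper; a sketch assuming the `dists_rgb` and `obj_diameters` NumPy arrays built earlier (the function name is made up for illustration):

```
import numpy as np

def matching_score(dists, diameters, tolerances):
    # Fraction of samples whose distance is within t * object diameter, for each tolerance t.
    dists = np.asarray(dists)
    diameters = np.asarray(diameters)
    return [float(np.mean(dists <= t * diameters)) for t in tolerances]

# Reproduces the rgb_avg_acc curve computed above.
rgb_curve = matching_score(dists_rgb, obj_diameters, np.linspace(0.1, 1, 10))
```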
``` %matplotlib inline import pandas as pd import matplotlib.pyplot as plt women_degrees = pd.read_csv('percent-bachelors-degrees-women-usa.csv') cb_dark_blue = (0/255,107/255,164/255) cb_orange = (255/255, 128/255, 14/255) stem_cats = ['Engineering', 'Computer Science', 'Psychology', 'Biology', 'Physical Sciences', 'Math and Statistics'] fig = plt.figure(figsize=(18, 3)) for sp in range(0,6): ax = fig.add_subplot(1,6,sp+1) ax.plot(women_degrees['Year'], women_degrees[stem_cats[sp]], c=cb_dark_blue, label='Women', linewidth=3) ax.plot(women_degrees['Year'], 100-women_degrees[stem_cats[sp]], c=cb_orange, label='Men', linewidth=3) ax.spines["right"].set_visible(False) ax.spines["left"].set_visible(False) ax.spines["top"].set_visible(False) ax.spines["bottom"].set_visible(False) ax.set_xlim(1968, 2011) ax.set_ylim(0,100) ax.set_title(stem_cats[sp]) ax.tick_params(bottom="off", top="off", left="off", right="off") if sp == 0: ax.text(2005, 87, 'Men') ax.text(2002, 8, 'Women') elif sp == 5: ax.text(2005, 62, 'Men') ax.text(2001, 35, 'Women') plt.show() women_degrees.info() ``` # Grid - Step 1 Because there are seventeen degrees that we need to generate line charts for, we'll use a subplot grid layout of 6 rows by 3 columns. We can then group the degrees into STEM, liberal arts, and other, in the following way. ``` stem_cats = ['Psychology', 'Biology', 'Math and Statistics', 'Physical Sciences', 'Computer Science', 'Engineering'] lib_arts_cats = ['Foreign Languages', 'English', 'Communications and Journalism', 'Art and Performance', 'Social Sciences and History'] other_cats = ['Health Professions', 'Public Administration', 'Education', 'Agriculture','Business', 'Architecture'] fig = plt.figure(figsize=(18, 18)) all_cats = [stem_cats, lib_arts_cats, other_cats] # stem_cats for c in range(0, 3): series = all_cats[c] for sp in range(0, len(series)): position = sp * 3 + c + 1 ax = fig.add_subplot(6, 3, position) ax.plot(women_degrees['Year'], women_degrees[series[sp]], c=cb_dark_blue, label='Women', linewidth=3) ax.plot(women_degrees['Year'], 100-women_degrees[series[sp]], c=cb_orange, label='Men', linewidth=3) ax.spines["right"].set_visible(False) ax.spines["left"].set_visible(False) ax.spines["top"].set_visible(False) ax.spines["bottom"].set_visible(False) ax.set_xlim(1968, 2011) ax.set_ylim(0,100) ax.set_title(series[sp]) ax.tick_params(bottom="off", top="off", left="off", right="off") if sp == 0: if c == 0: ax.text(2004, 85, 'Women') ax.text(2006, 11, 'Men') elif c == 1: ax.text(2004, 78, 'Women') ax.text(2006, 18, 'Men') elif c == 2: ax.text(2004, 93, 'Women') ax.text(2006, 2, 'Men') elif sp == len(series) - 1: if c == 0: ax.text(2003, 89, 'Men') ax.text(2003, 5, 'Women') if c == 2: ax.text(2004, 62, 'Men') ax.text(2002, 30, 'Women') ``` # Grid - Step 2 With seventeen line charts in one diagram, the non-data elements quickly clutter the field of view. The most immediate issue that sticks out is the titles of some line charts overlapping with the x-axis labels for the line chart above it. If we remove the titles for each line chart, the viewer won't know what degree each line chart refers to. Let's instead remove the x-axis labels for every line chart in a column except for the bottom most one. We can accomplish this by modifying the call to `Axes.tick_params()` and setting `labelbottom` to `off`: ``` ax.tick_params(bottom="off", top="off", left="off", right="off", labelbottom='off') ``` This will disable the x-axis labels for all of the line charts. 
You can then enable the x-axis labels for the bottommost line charts in each column: ``` ax.tick_params(labelbottom='on') ``` ``` fig = plt.figure(figsize=(18, 18)) all_cats = [stem_cats, lib_arts_cats, other_cats] # stem_cats for c in range(0, 3): series = all_cats[c] for sp in range(0, len(series)): position = sp * 3 + c + 1 ax = fig.add_subplot(6, 3, position) ax.plot(women_degrees['Year'], women_degrees[series[sp]], c=cb_dark_blue, label='Women', linewidth=3) ax.plot(women_degrees['Year'], 100-women_degrees[series[sp]], c=cb_orange, label='Men', linewidth=3) ax.spines["right"].set_visible(False) ax.spines["left"].set_visible(False) ax.spines["top"].set_visible(False) ax.spines["bottom"].set_visible(False) ax.set_xlim(1968, 2011) ax.set_ylim(0,100) ax.set_title(series[sp]) ax.tick_params(bottom="off", top="off", left="off", right="off", labelbottom="off") if sp == len(series) - 1: ax.tick_params(labelbottom='on') if sp == 0: if c == 0: ax.text(2004, 85, 'Women') ax.text(2006, 11, 'Men') elif c == 1: ax.text(2004, 78, 'Women') ax.text(2006, 18, 'Men') elif c == 2: ax.text(2004, 93, 'Women') ax.text(2006, 2, 'Men') elif sp == len(series) - 1: if c == 0: ax.text(2003, 89, 'Men') ax.text(2003, 5, 'Women') if c == 2: ax.text(2004, 62, 'Men') ax.text(2002, 30, 'Women') ``` # Grid - Step 3 Removing the x-axis labels for all but the bottommost plots solved the issue we noticed with the overlapping text. In addition, the plots are cleaner and more readable. The trade-off we made is that it's now more difficult for the viewer to discern approximately which years some interesting changes in trends may have happened. This is acceptable because we're primarily interested in enabling the viewer to quickly get a high level understanding of which degrees are prone to gender imbalance and how has that changed over time. In the vein of reducing cluttering, let's also simplify the y-axis labels. Currently, all seventeen plots have six y-axis labels and even though they are consistent across the plots, they still add to the visual clutter. By keeping just the starting and ending labels (0 and 100), we can keep some of the benefits of having the y-axis labels to begin with. We can use the Axes.set_yticks() method to specify which labels we want displayed. 
The following code enables just the 0 and 100 labels to be displayed: ``` ax.set_yticks([0,100]) ``` ``` fig = plt.figure(figsize=(18, 18)) all_cats = [stem_cats, lib_arts_cats, other_cats] # stem_cats for c in range(0, 3): series = all_cats[c] for sp in range(0, len(series)): position = sp * 3 + c + 1 ax = fig.add_subplot(6, 3, position) ax.plot(women_degrees['Year'], women_degrees[series[sp]], c=cb_dark_blue, label='Women', linewidth=3) ax.plot(women_degrees['Year'], 100-women_degrees[series[sp]], c=cb_orange, label='Men', linewidth=3) ax.spines["right"].set_visible(False) ax.spines["left"].set_visible(False) ax.spines["top"].set_visible(False) ax.spines["bottom"].set_visible(False) ax.set_xlim(1968, 2011) ax.set_ylim(0,100) ax.set_yticks([0,100]) ax.set_title(series[sp]) ax.tick_params(bottom="off", top="off", left="off", right="off", labelbottom="off") if sp == len(series) - 1: ax.tick_params(labelbottom='on') if sp == 0: if c == 0: ax.text(2004, 85, 'Women') ax.text(2006, 11, 'Men') elif c == 1: ax.text(2004, 78, 'Women') ax.text(2006, 18, 'Men') elif c == 2: ax.text(2004, 93, 'Women') ax.text(2006, 2, 'Men') elif sp == len(series) - 1: if c == 0: ax.text(2003, 89, 'Men') ax.text(2003, 5, 'Women') if c == 2: ax.text(2004, 62, 'Men') ax.text(2002, 30, 'Women') ``` # Grid - Step 4 While removing most of the y-axis labels definitely reduced clutter, it also made it hard to understand which degrees have close to 50-50 gender breakdown. While keeping all of the y-axis labels would have made it easier, we can actually do one better and use a horizontal line across all of the line charts where the y-axis label 50 would have been. We can generate a horizontal line across an entire subplot using the Axes.axhline() method. The only required parameter is the y-axis location for the start of the line: ``` ax.axhline(50) ``` Let's use the next color in the Color Blind 10 palette for this horizontal line, which has an RGB value of (171, 171, 171). Because we don't want this line to clutter the viewing experience, let's increase the transparency of the line. We can set the color using the c parameter and the transparency using the `alpha` parameter. 
The value passed in to the `alpha` parameter must range between 0 and 1:

```
ax.axhline(50, c=(171/255, 171/255, 171/255), alpha=0.3)
```

```
fig = plt.figure(figsize=(18, 18))

all_cats = [stem_cats, lib_arts_cats, other_cats]

# stem_cats
for c in range(0, 3):
    series = all_cats[c]
    for sp in range(0, len(series)):
        position = sp * 3 + c + 1
        ax = fig.add_subplot(6, 3, position)
        ax.plot(women_degrees['Year'], women_degrees[series[sp]], c=cb_dark_blue, label='Women', linewidth=3)
        ax.plot(women_degrees['Year'], 100-women_degrees[series[sp]], c=cb_orange, label='Men', linewidth=3)
        # gray line at 50%
        ax.axhline(50, c=(171/255, 171/255, 171/255), alpha=0.3)
        for loc in ["right", "left", "top", "bottom"]:
            ax.spines[loc].set_visible(False)
        ax.set_xlim(1968, 2011)
        ax.set_ylim(0,100)
        ax.set_yticks([0,100])
        ax.set_title(series[sp])
        ax.tick_params(bottom="off", top="off", left="off", right="off", labelbottom="off")
        if sp == len(series) - 1:
            ax.tick_params(labelbottom='on')
        if sp == 0:
            if c == 0:
                ax.text(2004, 85, 'Women')
                ax.text(2006, 11, 'Men')
            elif c == 1:
                ax.text(2004, 78, 'Women')
                ax.text(2006, 18, 'Men')
            elif c == 2:
                ax.text(2004, 93, 'Women')
                ax.text(2006, 2, 'Men')
        elif sp == len(series) - 1:
            if c == 0:
                ax.text(2003, 89, 'Men')
                ax.text(2003, 5, 'Women')
            if c == 2:
                ax.text(2004, 62, 'Men')
                ax.text(2002, 30, 'Women')
```

# Grid - Step 5

If you recall, matplotlib can be used in many different ways. It can be used within a Jupyter Notebook interface (like this one), from the command line, or in an integrated development environment. Many of these ways of using matplotlib vary in workflow and handle the rendering of images differently as well. To help support these different use cases, matplotlib can target different outputs or backends. If you import matplotlib and run `matplotlib.get_backend()`, you'll see the specific backend you're currently using.

With the current backend we're using, we can use `Figure.savefig()` or `pyplot.savefig()` to export all of the plots contained in the figure as a single image file. Note that these have to be called before we display the figure using `pyplot.show()`:

```
plt.plot(women_degrees['Year'], women_degrees['Biology'])
plt.savefig('biology_degrees.png')
```

In the above code, we saved a line chart as a PNG file. The image will be exported into the same folder that your Jupyter Notebook server is running in. You can click on the Jupyter logo to navigate the file system and find this image.

Exporting the plots we create using matplotlib allows us to use them in Word documents, PowerPoint presentations, and even in emails.
``` fig = plt.figure(figsize=(18, 18)) all_cats = [stem_cats, lib_arts_cats, other_cats] # stem_cats for c in range(0, 3): series = all_cats[c] for sp in range(0, len(series)): position = sp * 3 + c + 1 ax = fig.add_subplot(6, 3, position) ax.plot(women_degrees['Year'], women_degrees[series[sp]], c=cb_dark_blue, label='Women', linewidth=3) ax.plot(women_degrees['Year'], 100-women_degrees[series[sp]], c=cb_orange, label='Men', linewidth=3) # gray line at 50% ax.axhline(50, c=(171/255, 171/255, 171/255), alpha=0.3) for loc in ["right", "left", "top", "bottom"]: ax.spines[loc].set_visible(False) ax.set_xlim(1968, 2011) ax.set_ylim(0,100) ax.set_yticks([0,100]) ax.set_title(series[sp]) ax.tick_params(bottom="off", top="off", left="off", right="off", labelbottom="off") if sp == len(series) - 1: ax.tick_params(labelbottom='on') if sp == 0: if c == 0: ax.text(2004, 85, 'Women') ax.text(2006, 11, 'Men') elif c == 1: ax.text(2004, 78, 'Women') ax.text(2006, 18, 'Men') elif c == 2: ax.text(2004, 93, 'Women') ax.text(2006, 2, 'Men') elif sp == len(series) - 1: if c == 0: ax.text(2003, 89, 'Men') ax.text(2003, 5, 'Women') if c == 2: ax.text(2004, 62, 'Men') ax.text(2002, 30, 'Women') plt.savefig("gender_degrees.png") ```
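As a quick check of the backend discussion above, a small sketch; the `dpi` and `bbox_inches` arguments are optional extras, not something the original exercise requires:

```
import matplotlib

# Shows which backend is currently rendering figures (e.g. an inline backend in a notebook).
print(matplotlib.get_backend())

# Re-export the grid figure at higher resolution with the surrounding whitespace trimmed.
fig.savefig("gender_degrees_hi_res.png", dpi=150, bbox_inches="tight")
```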
# Create results plots perlin vs mri experiment ``` import pandas as pd import os import nibabel as nib import numpy as np import matplotlib.pyplot as plt plt.rc('image', cmap='gray') from copy import deepcopy import time, datetime import h5py import pickle as pkl from sklearn.preprocessing import minmax_scale from sklearn.metrics import auc, classification_report, roc_auc_score, confusion_matrix, precision_recall_curve, average_precision_score from sklearn.metrics._ranking import _binary_clf_curve import seaborn as sns import ptitprince as pt from matplotlib import gridspec save_fig = True sns.set_style('dark') plt.rcParams["font.size"] = "14" result_dir_perlin = '/path/to/perlin/results' result_dir_mri = '/path/to/mri/results' data_dir_perlin = '/path/to/perlin/data' data_dir_mri = '/path/to/mri/data' result_dir = '/general/results/path' name = f'overview_heatmaps_perlin_vs_mri' ``` ## Load data ``` def open_heatmaps_scores(result_dir, data_dir): print(f'Opening files for {data_dir}') with open(os.path.join(result_dir, f'heatmaps/simple_model_best_fold.pkl'), 'rb') as f: dict_all_heatmaps = pkl.load(f) with open(os.path.join(result_dir, f'heatmaps/prediction_scores.pkl'), 'rb') as f: dict_all_scores = pkl.load(f) with open(os.path.join(data_dir, f'ground_truth_maps_holdout.pkl'), 'rb') as f: ground_truth = pkl.load(f) return dict_all_heatmaps, dict_all_scores, ground_truth holdout_h5_perlin = h5py.File(os.path.join(data_dir_perlin, 'holdout_data.h5'), 'r') X_holdout_perlin, y_holdout_perlin = holdout_h5_perlin['X'], holdout_h5_perlin['y'] X_holdout_perlin = np.array(X_holdout_perlin) y_holdout_perlin = np.array(y_holdout_perlin) heatmaps_perlin, scores_perlin, ground_truth_perlin = open_heatmaps_scores(result_dir_perlin, data_dir_perlin) holdout_h5_mri = h5py.File(os.path.join(data_dir_mri, 'holdout_data.h5'), 'r') X_holdout_mri, y_holdout_mri = holdout_h5_mri['X'], holdout_h5_mri['y'] X_holdout_mri = np.array(X_holdout_mri) y_holdout_mri = np.array(y_holdout_mri) heatmaps_mri, scores_mri, ground_truth_mri = open_heatmaps_scores(result_dir_mri, data_dir_mri) ground_truths = [] for idx in range(200): slice_idx = idx % 135 part_idx = int(idx/135) ground_truths.append(ground_truth_mri[part_idx][slice_idx][2]) ground_truth_mri = ground_truths methods = ['gradient', 'deep_taylor', 'lrp.z', 'lrp.alpha_beta', 'deconvnet', 'guided_backprop', 'pattern.net', 'pattern.attribution'] methods_title_list = ['Gradient', 'DTD', 'LRP-z', 'LRP-alpha/beta', 'DeConvNet', 'Guided Backprop', 'PatternNet', 'PatternAttribution'] labels = y_holdout_mri[:200] list_idx_pos = [i for i in range(200) if labels[i] == 1] list_idx_neg = [i for i in range(200) if labels[i] == 0] all_idx = np.arange(len(labels)) def explanation_to_heatmap(e): tmp = (e/np.max(np.abs(e))) * 127.5 + 127.5 colormap = plt.cm.get_cmap("seismic") tmp = colormap(tmp.flatten().astype(np.int64))[: , :3] tmp = tmp.reshape((e.shape[0], e.shape[1], 3)) return tmp def explanation_to_projection(e): # To range [0, 1] return (np.abs(e)/np.max(np.abs(e))) def explanation_to_graymap(e): # Reduce color axis # To range [0, 255] tmp = ( e / np.max(np.abs(e))) * 255 # Create and apply red - blue heatmap colormap = plt.cm.get_cmap("gray") tmp = colormap(tmp.flatten().astype(np.int64 ))[: , :3] tmp = tmp.reshape((e.shape[0], e.shape[1], 3)) return tmp ``` ## Generate results plot for set indices ``` indices = [91, 89] nrow = len(indices) * 2 ncol = len(methods) + 2 fig = plt.figure(figsize = (2 * (ncol + 1), 2 * (nrow) - 1)) gs = gridspec.GridSpec(nrow, ncol, 
wspace=0.0, hspace=0.0, top=1.-0.5/(nrow+1), bottom=0.5/(nrow+1), left=0.5/(ncol+1), right=1-0.5/(ncol+1)) k = 0 for i in range(4): if i == 0: X_holdout, y_holdout = X_holdout_perlin, y_holdout_perlin ground_truth = ground_truth_perlin heatmaps = heatmaps_perlin prediction_scores = scores_perlin idx = indices[0] background = 'perlin' elif i == 1: X_holdout, y_holdout = X_holdout_perlin, y_holdout_mri ground_truth = ground_truth_perlin heatmaps = heatmaps_perlin prediction_scores = scores_perlin idx = indices[1] background = 'perlin' elif i == 2: X_holdout, y_holdout = X_holdout_mri, y_holdout_mri ground_truth = ground_truth_mri heatmaps = heatmaps_mri prediction_scores = scores_mri idx = indices[0] background = 'mri' else: X_holdout, y_holdout = X_holdout_mri, y_holdout_mri ground_truth = ground_truth_mri heatmaps = heatmaps_mri prediction_scores = scores_mri idx = indices[1] background = 'mri' softmax_scores = [prediction_scores['gradient'][i][0] for i in range(len(prediction_scores['gradient']))] softmax_output = np.array([i.max() for i in softmax_scores]) pred_labels = np.array([prediction_scores['gradient'][i][2] for i in range(len(prediction_scores['gradient']))]) for j in range(len(methods) + 2): ax = plt.subplot(gs[i, j]) ax.axes.xaxis.set_ticks([]) ax.axes.yaxis.set_ticks([]) if j % (2 + len(methods)) == 0: cmap = 'gist_gray' ax.imshow(X_holdout[idx], cmap) label = y_holdout[idx] ax.set_ylabel(f'Class: {int(label + 1)} \ntype: {background}', fontsize = 14, rotation = 0, labelpad = 45, ha = 'center') if k == 0: ax.set_title(f'Sample') elif j % (2 + len(methods)) == 1: cmap = 'gist_gray' ax.imshow(ground_truth[idx], cmap) if k == 0: ax.set_title('Ground truth') else: method = methods[j - 2] method_title = methods_title_list[j-2] heatmap = heatmaps[method][idx].reshape(heatmaps[method][idx].shape[1], heatmaps[method][idx].shape[2]) if method == 'pattern.net' or method == 'deconvnet' or method == 'guided_backprop': heatmap = explanation_to_projection(heatmap) cmap = 'binary' elif method == 'gradient': heatmap = explanation_to_heatmap(np.abs(heatmap)) else: heatmap = explanation_to_heatmap(heatmap) ax.imshow(heatmap, cmap) if k == 0: ax.set_title(f'{method_title}') k += 2 + len(methods) # plt.tight_layout() if save_fig is True: output_path = f'overview_result_plot_medneurips_perlin_mri.pdf' fig.savefig(os.path.join(result_dir, output_path), orientation='landscape', dpi=300, bbox_inches = 'tight') quit() ```
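As a side note, the `explanation_to_heatmap` helper above maps signed relevance values onto the diverging `seismic` colormap by rescaling them into the 0–255 colour-index range. The self-contained sketch below illustrates the same mapping on a random array standing in for a real heatmap; the array shape and contents are made up for illustration.

```
import numpy as np
import matplotlib.pyplot as plt

# Dummy signed relevance map standing in for one explanation heatmap.
e = np.random.randn(64, 64)

# Same idea as explanation_to_heatmap: centre zero relevance on the midpoint (127.5)
# of the 256-entry seismic lookup table, then index the colormap with integers.
scaled = (e / np.max(np.abs(e))) * 127.5 + 127.5
rgb = plt.cm.get_cmap("seismic")(scaled.astype(np.int64))[..., :3]

plt.imshow(rgb)
plt.axis("off")
plt.show()
```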
Following the blog: https://towardsdatascience.com/custom-named-entity-recognition-using-spacy-7140ebbb3718 ``` from __future__ import unicode_literals, print_function import spacy import pandas as pd import json import logging import sys import pickle import plac import random from pathlib import Path import spacy from spacy.util import minibatch, compounding path ='/Users/kinga/Documents/4_th_Brain/glg_sandbox/' input_file = path + "Data/ner_dataset.json" output_file = path + "Data/dump1.p" ``` Downloaded the 'ner_dataset.csv' dataset from kaggle: https://www.kaggle.com/abhinavwalia95/entity-annotated-corpus?select=ner_dataset.csv part of which will be used to train the spacy model ``` docLoc = "/Users/kinga/Documents/4_th_Brain/glg_sandbox/Data/ner_dataset.csv" df = pd.read_csv(docLoc, encoding = "ISO-8859-1", error_bad_lines=False) df.shape df.head() df = df.drop(columns = ['Sentence #', "POS"]) df.to_csv('Data/ner_dataset.tsv', sep='\t', encoding='utf-8', index=False) #converting the .csv to .tsv file # Convert .tsv file to dataturks json format. import json import logging import sys def tsv_to_json_format(input_path,output_path,unknown_label): try: f=open(input_path,'r') # input file fp=open(output_path, 'w') # output file data_dict={} annotations =[] label_dict={} s='' start=0 for line in f: if line[0:len(line)-1]!='.\tO': word,entity=line.split('\t') s+=word+" " entity=entity[:len(entity)-1] if entity!=unknown_label: if len(entity) != 1: d={} d['text']=word d['start']=start d['end']=start+len(word)-1 try: label_dict[entity].append(d) except: label_dict[entity]=[] label_dict[entity].append(d) start+=len(word)+1 else: data_dict['content']=s s='' label_list=[] for ents in list(label_dict.keys()): for i in range(len(label_dict[ents])): if(label_dict[ents][i]['text']!=''): l=[ents,label_dict[ents][i]] for j in range(i+1,len(label_dict[ents])): if(label_dict[ents][i]['text']==label_dict[ents][j]['text']): di={} di['start']=label_dict[ents][j]['start'] di['end']=label_dict[ents][j]['end'] di['text']=label_dict[ents][i]['text'] l.append(di) label_dict[ents][j]['text']='' label_list.append(l) for entities in label_list: label={} label['label']=[entities[0]] label['points']=entities[1:] annotations.append(label) data_dict['annotation']=annotations annotations=[] json.dump(data_dict, fp) fp.write('\n') data_dict={} start=0 label_dict={} except Exception as e: logging.exception("Unable to process file" + "\n" + "error = " + str(e)) return None tsv_to_json_format("Data/ner_dataset.tsv",'Data/ner_dataset.json','abc') #converting .tsv to .json file path ='/Users/kinga/Documents/4_th_Brain/glg_sandbox/' input_file = path + "Data/ner_dataset.json" output_file = path + "Data/dump1.p" try: training_data = [] lines=[] with open(input_file, 'r') as f: lines = f.readlines() for line in lines: data = json.loads(line) text = data['content'] entities = [] for annotation in data['annotation']: point = annotation['points'][0] labels = annotation['label'] if not isinstance(labels, list): labels = [labels] for label in labels: entities.append((point['start'], point['end'] + 1 ,label)) training_data.append((text, {"entities" : entities})) # print(training_data) with open(output_file, 'wb') as fp: pickle.dump(training_data, fp) except Exception as e: logging.exception("Unable to process " + input_file + "\n" + "error = " + str(e)) type(training_data) training_data[0] len(training_data) # Loading training data with open (output_file, 'rb') as fp: training_data = pickle.load(fp) # sentence_count = 260 TRAIN_DATA = 
training_data[:260] # Training additional entity types using spaCy from __future__ import unicode_literals, print_function import pickle import plac import random from pathlib import Path import spacy from spacy.util import minibatch, compounding # New entity labels # Specify the new entity labels which you want to add here LABEL = ['I-geo', 'B-geo', 'I-art', 'B-art', 'B-tim', 'B-nat', 'B-eve', 'O', 'I-per', 'I-tim', 'I-nat', 'I-eve', 'B-per', 'I-org', 'B-gpe', 'B-org', 'I-gpe'] """ geo = Geographical Entity org = Organization per = Person gpe = Geopolitical Entity tim = Time indicator art = Artifact eve = Event nat = Natural Phenomenon """ nlp = spacy.blank('en') # create blank Language class print("Created blank 'en' model") if 'ner' not in nlp.pipe_names: ner = nlp.create_pipe('ner') nlp.add_pipe(ner) else: ner = nlp.get_pipe('ner') #adding the labels for i in LABEL: ner.add_label(i) optimizer = nlp.begin_training() n_iter = 1000 other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'ner'] with nlp.disable_pipes(*other_pipes): # only train NER for itn in range(n_iter): random.shuffle(TRAIN_DATA) losses = {} batches = minibatch(TRAIN_DATA, size=compounding(4., 32., 1.001)) for batch in batches: texts, annotations = zip(*batch) nlp.update(texts, annotations, sgd=optimizer, drop=0.35, losses=losses) if itn%10 == 0: print(f'Losses for iteration {itn} is {losses}.') # Test the trained model test_text = 'Gianni Infantino is the president of FIFA.' doc = nlp(test_text) print("Entities in '%s'" % test_text) for ent in doc.ents: print(ent.label_, ent.text) # output_dir = "/Users/kinga/Documents/4_th_Brain/glg_sandbox/Data/" # nlp.to_disk("model1") # print("Saved model to", output_dir) # Test the saved model test_text = 'Gianni Infantino is the president of FIFA.' # print("Loading from", output_dir) nlp2 = spacy.load('/Users/kinga/Documents/4_th_Brain/glg_sandbox/model1') doc2 = nlp2(test_text) for ent in doc2.ents: print(ent.label_, ent.text) ``` Trying model on GLG Case Study text: "EarthEnable installs affordable earthen floors in homes across Rwanda and Uganda, which helps mitigate health issues caused by dirt floors such as asthma, diarrhea, and malnutrition. The underside of EarthEnable’s flooring product had been suffering cracks and erosion at an unusually high rate, and they needed help diagnosing the cause." ``` test_text2 = "EarthEnable installs affordable earthen floors in homes across Rwanda and Uganda, which helps mitigate health issues caused by dirt floors such as asthma, diarrhea, and malnutrition. The underside of EarthEnable’s flooring product had been suffering cracks and erosion at an unusually high rate, and they needed help diagnosing the cause." doc3 = nlp2(test_text2) print("Entities in '%s'" % test_text2) for ent in doc3.ents: print(ent.label_, ent.text) test_text3 = "A marketing team at a leading advertising technology company had difficulties understanding how today’s marketers in Southeast Asia measure advertising effectiveness across different segments. The team also sought to differentiate their advertising offerings from key competitors." doc4 = nlp2(test_text3) print("Entities in '%s'" % test_text3) for ent in doc4.ents: print(ent.label_, ent.text) test_text4 = "In November 2019, Tesla announced the release of the Cybertruck, the company’s all electric pickup truck and the automaker’s sixth vehicle since its founding. 
While initial research had been conducted, GLG clients still had questions regarding consumer sentiment about electric trucks and where the luxury brand’s truck fit into the marketplace before making an investment decision." doc5 = nlp2(test_text4) print("Entities in '%s'" % test_text4) for ent in doc5.ents: print(ent.label_, ent.text) ```
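Before training, it can help to sanity-check that the `(start, end, label)` character offsets in `training_data` really line up with the text they annotate, since misaligned spans are a common source of poor NER results. The record below is invented purely for illustration (the labels come from the `LABEL` list above); the same loop can be pointed at real entries of `training_data`.

```
# Invented example record purely for illustration; labels come from the LABEL list above.
example = ("Gianni Infantino is the president of FIFA.",
           {"entities": [(0, 16, "B-per"), (37, 41, "B-org")]})

text, annotations = example
for start, end, label in annotations["entities"]:
    # Each annotated span should slice cleanly out of the text it belongs to.
    print(label, "->", repr(text[start:end]))
```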
<a href="https://colab.research.google.com/github/KSY1526/myblog/blob/master/_notebooks/kagglessu2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # "[SSUDA] 심장병 데이터 분석" - author: Seong Yeon Kim - categories: [SSUDA, jupyter, kaggle, logistic, scale, keras, Regression] # 데이터 불러오기 ``` from google.colab import drive drive.mount('/content/drive') import pandas as pd data = pd.read_csv("/content/drive/MyDrive/heart.csv") ``` # Verson 1. 심플한 로지스틱 회귀 모형 # 데이터 이해 ``` df = data.copy() df.head() ``` 디폴트 값은 5입니다. ``` df.columns df.columns.values.tolist() ``` 컬럼은 이런 방식으로 확인할 수 있습니다. 밑에 DataFrame.columns.values.tolist() 함수는 컬럼 추출 중 가장 런타임이 빠르다고 합니다. ``` print('Shape is',df.shape) ``` 303개 데이터, 14개 특성값이 있습니다. ``` df.isnull().sum() df.info() ``` 글쓴이는 윗방식으로 null값 유무를 체크했습니다. 그러나 df.info() 방식이 여러가지 정보를 같이 줘 더 효율적입니다. ``` df.describe() ``` 데이터를 보면 어느정도 스케일링이 필요하다는 것을 알 수 있습니다. # 특성 스케일링 ``` df['age'] = df['age']/max(df['age']) df['cp'] = df['cp']/max(df['cp']) df['trtbps'] = df['trtbps']/max(df['trtbps']) df['chol'] = df['chol']/max(df['chol']) df['thalachh'] = df['thalachh']/max(df['thalachh']) df.describe() ``` 이전과 달리 특성 스케일이 확실히 비슷해졌습니다. # 데이터 모델링 ``` from sklearn.model_selection import train_test_split #splitting data into training data and testing data X_train, X_test, y_train, y_test = train_test_split( df.drop(['output'], axis=1), df.output, test_size= 0.2, # 20% test data & 80% train data random_state=0, stratify=df.output ) ``` stratify 속성 => y값의 공평한 분배를 위해 사용하는 속성입니다. ``` from sklearn.linear_model import LogisticRegression clf = LogisticRegression() clf.fit(X_train, y_train) from sklearn.metrics import accuracy_score Y_pred = clf.predict(X_test) acc=accuracy_score(y_test, Y_pred) print('Accuracy is',round(acc,2)*100,'%') ``` 로지스틱 회귀 모형을 별다른 튜닝 없이 사용했습니다. 정확도 측면에서만 보면 캐글에 있는 다른 코드와 별반 다르지 않습니다. # Verson 2. 심플한 딥러닝 모형 # 데이터 이해2 ``` df = data.copy() df.output.value_counts() ``` 이전 모델에서 생략(?)된 부분인거 같은데 1과 0 값의 비율이 조금 차이가 있습니다. ``` df.corr().abs()['output'].sort_values(ascending = False) ``` Y값과의 상관계수가 어느정도 되는지 확인해보았습니다. # 데이터 모델링2 ``` from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score X = df.drop('output', axis = 1) y = df['output'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42) X_train.shape from sklearn.preprocessing import StandardScaler sc = StandardScaler() X_train = sc.fit_transform(X_train) X_test = sc.transform(X_test) ``` 여기서는 StandardScaler를 사용해 스케일링을 했습니다. 평균 0, 분산 1로 조정합니다. 이 스케일링은 이상치가 있을때 잘 작용하지 않을 수 있습니다. ``` from tensorflow import keras model = keras.Sequential( [ keras.layers.Dense( 256, activation="relu", input_shape=[13] ), keras.layers.Dense(515, activation="relu"), keras.layers.Dropout(0.3), keras.layers.Dense(50, activation="relu"), keras.layers.Dropout(0.3), keras.layers.Dense(1, activation="sigmoid"), ] ) model.summary() ``` 활성화 함수로 제일 많이 사용하는 relu와 sigmoid함수를 사용했습니다. relu함수 : 입력이 양수일 경우 그대로 반환, 음수일경우 0으로 만듭니다. sigmoid함수 : 1 / (1 + e^z) 함수. 값을 0에서 1 사이로 변환합니다. 첫번째 구간에 아웃풋 값을 256개 주었는데, 변수값이 13개임으로 모수가 14개입니다. 그래서 256*14 = 3584개 파라미터가 나오게 된 것입니다. 중간에 있는 드롭아웃은 일정 비율만큼 뉴런을 랜덤하게 꺼서 과대적합을 막는 역할을 합니다. 
```
model.compile(optimizer = 'Adam',
              loss = 'binary_crossentropy',
              metrics = ['binary_accuracy'])

early_stopping = keras.callbacks.EarlyStopping(
    patience = 20,
    min_delta = 0.001,
    restore_best_weights = True
)

history = model.fit(
    X_train, y_train,
    validation_data=(X_test, y_test),
    batch_size=15,
    epochs=50,
    callbacks = [early_stopping],
    verbose=1,
)

model.evaluate(X_test, y_test)

predictions = (model.predict(X_test) > 0.5).astype("int32")

from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
accuracy_score(y_test, predictions)
```

The result is similar to the earlier one.

```
print(classification_report(y_test, predictions))
```

The `classification_report` function turns out to be quite useful: it shows precision, recall, and the f1-score all at once.

# Reflections

I followed along with a basic logistic regression model and some simple deep learning code for classification. For the deep learning part in particular, I only know the very basics, so interpreting the code took quite a while. After playing around with the code in various ways, my impression is that scaling matters a lot for this dataset: the accuracy changes considerably depending on the type of scaling used, probably all the more because these are not tree-based boosting models. Rather than rushing to understand an overly complex model, I worked through models I could understand and looked at how the data analysis process unfolds.
```
from __future__ import division
%pylab inline
import re
import math
import string
from collections import Counter
```

## Sample data

```
TEXT = file('big.txt').read()
len(TEXT)

def tokens(text):
    return re.findall('[a-z]+', text.lower())

WORDS = tokens(TEXT)
len(WORDS)

def sample(bag, n=10):
    return " ".join(random.choice(bag) for _ in xrange(n))

sample(WORDS)

COUNTS = Counter(WORDS)
print COUNTS.most_common(10)

for w in tokens('the rare and neverbeforeseen words'):
    print COUNTS[w], w

M = COUNTS["the"]
yscale('log'); xscale('log')
title('Frequency of n-th most frequent word and 1/n line.')
plot([c for (w, c) in COUNTS.most_common()])
plot([M/i for i in xrange(1, len(COUNTS)+1)])
```

## Spellchecker using edit distances

```
def edits0(word):
    return {word}

def edits1(word):
    pairs = splits(word)
    deletes = [a+b[1:] for (a, b) in pairs if b]
    transposes = [a+b[1]+b[0]+b[2:] for (a, b) in pairs if len(b) > 1]
    replaces = [a+c+b[1:] for (a, b) in pairs for c in alphabet if b]
    inserts = [a+c+b for (a, b) in pairs for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

def edits2(word):
    return {e2 for e1 in edits1(word) for e2 in edits1(e1)}

def splits(word, start=0):
    # All (first, rest) pairs of word; start=1 (used by segment below) keeps the
    # first piece non-empty so the recursion always makes progress.
    return [(word[:i], word[i:]) for i in range(start, len(word)+1)]

alphabet = 'abcdefghijklmnopqrstuvwxyz'

def known(words):
    return {w for w in words if w in COUNTS}

def correct(word):
    candidates = (known(edits0(word)) or
                  known(edits1(word)) or
                  known(edits2(word)) or
                  {word})
    return max(candidates, key=COUNTS.get)

map(correct, tokens('Speling errurs in somethink. Whutever; unusuel misteakes everyware?'))

def correct_text(text):
    return re.sub("[a-zA-Z]+", correct_match, text)

def correct_match(match):
    word = match.group()
    return case_of(word)(correct(word.lower()))

def case_of(text):
    return (str.upper if text.isupper() else
            str.lower if text.islower() else
            str.title if text.istitle() else
            str)

correct_text('Speling Errurs IN somethink. Whutever; unusuel misteakes?')
```

## Probabilities of Word Sequences

```
def prob_dist(counter):
    N = sum(counter.values())
    return lambda x: counter[x]/N

prob_word = prob_dist(COUNTS)

for w in tokens('"The" is most common word in English'):
    print prob_word(w), w

def prob_words(words):
    return product(prob_word(w) for w in words)

def product(nums):
    result = 1
    for x in nums:
        result *= x
    return result

prob_words("this is a car".split())

def memo(f):
    "Memoize function f, whose args must all be hashable."
    cache = {}
    def fmemo(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    fmemo.cache = cache
    return fmemo

@memo
def segment(text):
    if not text:
        return []
    else:
        candidates = ([first] + segment(rest)
                      for (first, rest) in splits(text, 1))
        return max(candidates, key=prob_words)

decl = ('wheninthecourseofhumaneventsitbecomesnecessaryforonepeople' +
        'todissolvethepoliticalbandswhichhaveconnectedthemwithanother' +
        'andtoassumeamongthepowersoftheearththeseparateandequalstation' +
        'towhichthelawsofnatureandofnaturesgodentitlethem')

segment(decl)
```
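The notebook above is written for Python 2 (`print` statements, `file()`, `xrange`). If you want to try the core spell-correction idea under Python 3, a minimal adaptation might look like the sketch below; it assumes the same `big.txt` corpus file and keeps only the single-edit candidates plus the double-edit fallback.

```
import re
from collections import Counter

WORDS = Counter(re.findall(r'[a-z]+', open('big.txt').read().lower()))
ALPHABET = 'abcdefghijklmnopqrstuvwxyz'

def edits1(word):
    pairs = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in pairs if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in pairs if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in pairs for c in ALPHABET if b]
    inserts = [a + c + b for a, b in pairs for c in ALPHABET]
    return set(deletes + transposes + replaces + inserts)

def known(words):
    return {w for w in words if w in WORDS}

def correct(word):
    candidates = (known({word})
                  or known(edits1(word))
                  or known(e2 for e1 in edits1(word) for e2 in edits1(e1))
                  or {word})
    return max(candidates, key=WORDS.get)

print(correct('speling'))  # 'spelling', given a large enough corpus
```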
# Market Basket Analysis ``` import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt data = pd.read_csv('data.csv', encoding="ISO-8859-1") data data = data.dropna() data.info() data['Country'].unique() data.describe() data.describe(exclude='number') data['InvoiceDate'] = pd.to_datetime(data['InvoiceDate']) print("""This Dataset start from {} to {}""".format(data['InvoiceDate'].describe()['first'], data['InvoiceDate'].describe()['last'])) data_plus = data[data['Quantity']>=0] data_plus.info() (406829-397924) / 406829 * 100 data_plus.describe() top_10 = data_plus.groupby('Country').nunique().sort_values('InvoiceNo', ascending=False).head(10) top_10 top_10_transaction = pd.DataFrame(data_plus.groupby('Country').nunique().sort_values('InvoiceNo', ascending=False).head(10)['InvoiceNo']) top_10_transaction #Import plotly.express libraries for visualization import plotly.express as px # total bookings per market segment (incl. canceled) segments=top_10_transaction # pie plot fig = px.pie(segments, values=top_10_transaction['InvoiceNo'], names=top_10_transaction.index, title="Country Performance by Number of Invoice", template="seaborn") fig.update_traces(rotation=-90, textinfo="percent+label") fig.show() basket_plus = (data_plus[data_plus['Country'] =="United Kingdom"].groupby(['InvoiceNo', 'Description'])['Quantity'] .sum().unstack().reset_index().fillna(0).set_index('InvoiceNo')) basket_plus basket_plus.tail() def encode_units(x): if x <= 0: return 0 if x >= 1: return 1 basket_encode_plus = basket_plus.applymap(encode_units) basket_encode_plus basket_encode_plus.tail() basket_filter_plus = basket_encode_plus[(basket_encode_plus > 0).sum(axis=1) >= 2] basket_filter_plus pip install mlxtend from mlxtend.frequent_patterns import apriori frequent_itemsets_plus = apriori(basket_filter_plus, min_support=0.03, use_colnames=True).sort_values('support', ascending=False).reset_index(drop=True) frequent_itemsets_plus['length'] = frequent_itemsets_plus['itemsets'].apply(lambda x: len(x)) frequent_itemsets_plus frequent_itemsets_plus[ (frequent_itemsets_plus['length'] == 2) & (frequent_itemsets_plus['support'] >= 0.03) ] from mlxtend.frequent_patterns import association_rules association_rules(frequent_itemsets_plus, metric='lift', min_threshold=1).sort_values('lift', ascending=False).reset_index(drop=True) ```
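The `association_rules` output above reports support, confidence, and lift for each rule, and those numbers are easier to interpret once computed by hand. The toy calculation below uses a handful of invented transactions purely to show how the three metrics relate; it is independent of the retail dataset loaded above.

```
# Toy transactions (invented) to illustrate support, confidence and lift.
transactions = [
    {'tea', 'mug'},
    {'tea', 'mug', 'spoon'},
    {'tea'},
    {'mug'},
    {'tea', 'mug'},
]
n = len(transactions)

support_tea  = sum('tea' in t for t in transactions) / n            # P(tea) = 0.8
support_mug  = sum('mug' in t for t in transactions) / n            # P(mug) = 0.8
support_both = sum({'tea', 'mug'} <= t for t in transactions) / n   # P(tea and mug) = 0.6

confidence = support_both / support_tea   # P(mug | tea) = 0.75
lift = confidence / support_mug           # 0.9375; lift > 1 would mean co-occurrence above chance

print(support_both, confidence, lift)
```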
[![pypi pytorch_inferno version](https://img.shields.io/pypi/v/pytorch_inferno.svg)](https://pypi.python.org/pypi/pytorch_inferno) [![pytorch_inferno python compatibility](https://img.shields.io/pypi/pyversions/pytorch_inferno.svg)](https://pypi.python.org/pypi/pytorch_inferno) [![pytorch_inferno license](https://img.shields.io/pypi/l/pytorch_inferno.svg)](https://pypi.python.org/pypi/pytorch_inferno) [![CI](https://github.com/GilesStrong/pytorch_inferno/actions/workflows/main.yml/badge.svg)](https://github.com/GilesStrong/pytorch_inferno/actions) [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4597140.svg)](https://doi.org/10.5281/zenodo.4597140)

# PyTorch INFERNO

Documentation: https://gilesstrong.github.io/pytorch_inferno/

This package provides a PyTorch implementation of INFERNO ([de Castro and Dorigo, 2018](https://www.sciencedirect.com/science/article/pii/S0010465519301948)), along with a minimal high-level wrapper for training and applying PyTorch models, and for running statistical inference of parameters of interest in the presence of nuisance parameters. INFERNO is implemented in the form of a callback, allowing it to be dropped in and swapped out without heavy rewriting of code.

For an overview of the package, a breakdown of the INFERNO algorithm, and an introduction to parameter inference in HEP, I have written a 5-post blog series: https://gilesstrong.github.io/website/statistics/hep/inferno/2020/12/04/inferno-1.html

The authors' Tensorflow 1 code may be found here: https://github.com/pablodecm/paper-inferno

And Lukas Layer's Tensorflow 2 version may be found here: https://github.com/llayer/inferno

### User install

```
pip install pytorch_inferno
```

### Developer install

```
[install torch>=1.7 according to CUDA version]
pip install nbdev fastcore numpy pandas fastprogress matplotlib>=3.0.0 seaborn scipy
git clone git@github.com:GilesStrong/pytorch_inferno.git
cd pytorch_inferno
pip install -e .
nbdev_install_git_hooks
```

## Overview

The library is developed and tested in the `nbs` directory, and experiments are run in the `experiments` directory. Use `nbdev_build_lib` to export code to the library located in `pytorch_inferno`. This overwrites any changes in `pytorch_inferno`, i.e. only edit the notebooks.

## Results

This package has been tested against the paper's problem and reproduces its results within uncertainty.

![title](imgs/results.png)

## Reference

If you have used this implementation of INFERNO in your analysis work and wish to cite it, the preferred reference is: *Giles C. Strong, pytorch_inferno, Zenodo (Mar. 2021), http://doi.org/10.5281/zenodo.4597140, Note: Please check https://github.com/GilesStrong/pytorch_inferno/graphs/contributors for the full list of contributors*

```
@misc{giles_chatham_strong_2021_4597140,
  author = {Giles Chatham Strong},
  title = {LUMIN},
  month = mar,
  year = 2021,
  note = {{Please check https://github.com/GilesStrong/pytorch_inferno/graphs/contributors for the full list of contributors}},
  doi = {10.5281/zenodo.4597140},
  url = {https://doi.org/10.5281/zenodo.4597140}
}
```

The INFERNO algorithm should also be cited:

```
@article{DECASTRO2019170,
  title = {INFERNO: Inference-Aware Neural Optimisation},
  journal = {Computer Physics Communications},
  volume = {244},
  pages = {170-179},
  year = {2019},
  issn = {0010-4655},
  doi = {https://doi.org/10.1016/j.cpc.2019.06.007},
  url = {https://www.sciencedirect.com/science/article/pii/S0010465519301948},
  author = {Pablo {de Castro} and Tommaso Dorigo},
}
```
# VacationPy ---- #### Note * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps. ``` # Dependencies and Setup import matplotlib.pyplot as plt import pandas as pd import numpy as np import requests import gmaps import os import gmaps.datasets # Import API key from api_keys import g_key ``` ### Store Part I results into DataFrame * Load the csv exported in Part I to a DataFrame ### Humidity Heatmap * Configure gmaps. * Use the Lat and Lng as locations and Humidity as the weight. * Add Heatmap layer to map. ``` weather_data = pd.read_csv("WeatherPy.csv") weather_data ``` ### Create new DataFrame fitting weather criteria * Narrow down the cities to fit weather conditions. * Drop any rows will null values. ``` idealtemp = (weather_data['temp']<80) & (weather_data['temp']>50) idealhumid = weather_data['humidity']<40 idealcloud = weather_data['cloudiness']<20 idealwind = weather_data['wind speed']<10 hotel_df = weather_data[idealtemp & idealhumid & idealcloud & idealwind] hotel_df ``` ### Hotel Map * Store into variable named `hotel_df`. * Add a "Hotel Name" column to the DataFrame. * Set parameters to search for hotels with 5000 meters. * Hit the Google Places API for each city's coordinates. * Store the first Hotel result into the DataFrame. * Plot markers on top of the heatmap. ``` hotel_df['Hotel Name'] = "" hotel_df # Dependencies import requests import json from pprint import pprint base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json" def get_nearby_hotel_name(lat, lon): params = { "location": str(lat) + ',' + str(lon), #"rankby":"distance", "radius":"5000", "type":"hotel", "key": g_key } response = requests.get(base_url, params = params).json() try: return response['results'][1]['name'] except: return 'No hotel name' for i, row in hotel_df.iterrows(): lat = row['Lat'] lon = row['Long'] hotel_name = get_nearby_hotel_name(lat, lon) hotel_df.loc[i, 'Hotel Name'] = hotel_name hotel_df # NOTE: Do not change any of the code in this cell # Using the template add the hotel marks to the heatmap info_box_template = """ <dl> <dt>Name</dt><dd>{Hotel Name}</dd> <dt>City</dt><dd>{City}</dd> <dt>Country</dt><dd>{Country}</dd> </dl> """ # # Store the DataFrame Row # # NOTE: be sure to update with your DataFrame name hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()] locations = hotel_df[["Lat", "Long"]] # Add marker layer ontop of heat map fig = gmaps.figure() fig.add_layer(gmaps.heatmap_layer(weather_data[["Lat","Long"]], weather_data['humidity'])) marker_layer = gmaps.marker_layer(locations, info_box_content=hotel_info) fig.add_layer(marker_layer) # Display Map fig ```
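The `get_nearby_hotel_name` helper above takes `response['results'][1]` and relies on the bare `except` to cover every failure mode (missing keys, short result lists, and network errors alike). A slightly more defensive variant is sketched below; it assumes the same Places `nearbysearch` endpoint and `g_key` as the notebook, and swapping the `type` from `"hotel"` to `"lodging"` is an assumption worth checking against the current Places documentation. Skipping the first result (often the locality itself) is kept from the original.

```
import requests

BASE_URL = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"

def get_nearby_hotel_name_safe(lat, lon, key):
    params = {
        "location": f"{lat},{lon}",
        "radius": "5000",
        "type": "lodging",  # assumed Places type; the notebook used "hotel"
        "key": key,
    }
    try:
        results = requests.get(BASE_URL, params=params, timeout=10).json().get("results", [])
    except (requests.RequestException, ValueError):
        return "No hotel name"
    # Prefer results after the first entry, falling back to the full list if only one came back.
    for place in results[1:] or results:
        name = place.get("name")
        if name:
            return name
    return "No hotel name"

# Example call (hypothetical coordinates): get_nearby_hotel_name_safe(34.05, -118.24, g_key)
```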
``` import numpy as np import pandas as pd import torch import torchvision from torch.utils.data import Dataset, DataLoader from torchvision import transforms, utils import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from matplotlib import pyplot as plt %matplotlib inline from scipy.stats import entropy mu1 = np.array([3,3,3,3,0]) sigma1 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) mu2 = np.array([4,4,4,4,0]) sigma2 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) mu3 = np.array([10,5,5,10,0]) sigma3 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) mu4 = np.array([-10,-10,-10,-10,0]) sigma4 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) mu5 = np.array([-21,4,4,-21,0]) sigma5 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) mu6 = np.array([-10,18,18,-10,0]) sigma6 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) mu7 = np.array([4,20,4,20,0]) sigma7 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) mu8 = np.array([4,-20,-20,4,0]) sigma8 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) mu9 = np.array([20,20,20,20,0]) sigma9 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) mu10 = np.array([20,-10,-10,20,0]) sigma10 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) sample1 = np.random.multivariate_normal(mean=mu1,cov= sigma1,size=500) sample2 = np.random.multivariate_normal(mean=mu2,cov= sigma2,size=500) sample3 = np.random.multivariate_normal(mean=mu3,cov= sigma3,size=500) sample4 = np.random.multivariate_normal(mean=mu4,cov= sigma4,size=500) sample5 = np.random.multivariate_normal(mean=mu5,cov= sigma5,size=500) sample6 = np.random.multivariate_normal(mean=mu6,cov= sigma6,size=500) sample7 = np.random.multivariate_normal(mean=mu7,cov= sigma7,size=500) sample8 = np.random.multivariate_normal(mean=mu8,cov= sigma8,size=500) sample9 = np.random.multivariate_normal(mean=mu9,cov= sigma9,size=500) sample10 = np.random.multivariate_normal(mean=mu10,cov= sigma10,size=500) # mu1 = np.array([3,3,0,0,0]) # sigma1 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) # mu2 = np.array([4,4,0,0,0]) # sigma2 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) # mu3 = np.array([10,5,0,0,0]) # sigma3 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) # mu4 = np.array([-10,-10,0,0,0]) # sigma4 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) # mu5 = np.array([-21,4,0,0,0]) # sigma5 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) # mu6 = np.array([-10,18,0,0,0]) # sigma6 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) # mu7 = np.array([4,20,0,0,0]) # sigma7 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) # mu8 = np.array([4,-20,0,0,0]) # sigma8 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) # mu9 = np.array([20,20,0,0,0]) # sigma9 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) # mu10 = np.array([20,-10,0,0,0]) # sigma10 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]) # sample1 = np.random.multivariate_normal(mean=mu1,cov= sigma1,size=500) # sample2 = np.random.multivariate_normal(mean=mu2,cov= sigma2,size=500) # sample3 = np.random.multivariate_normal(mean=mu3,cov= 
sigma3,size=500) # sample4 = np.random.multivariate_normal(mean=mu4,cov= sigma4,size=500) # sample5 = np.random.multivariate_normal(mean=mu5,cov= sigma5,size=500) # sample6 = np.random.multivariate_normal(mean=mu6,cov= sigma6,size=500) # sample7 = np.random.multivariate_normal(mean=mu7,cov= sigma7,size=500) # sample8 = np.random.multivariate_normal(mean=mu8,cov= sigma8,size=500) # sample9 = np.random.multivariate_normal(mean=mu9,cov= sigma9,size=500) # sample10 = np.random.multivariate_normal(mean=mu10,cov= sigma10,size=500) X = np.concatenate((sample1,sample2,sample3,sample4,sample5,sample6,sample7,sample8,sample9,sample10),axis=0) Y = np.concatenate((np.zeros((500,1)),np.ones((500,1)),2*np.ones((500,1)),3*np.ones((500,1)),4*np.ones((500,1)), 5*np.ones((500,1)),6*np.ones((500,1)),7*np.ones((500,1)),8*np.ones((500,1)),9*np.ones((500,1))),axis=0).astype(int) print(X.shape,Y.shape) # plt.scatter(sample1[:,0],sample1[:,1],label="class_0") # plt.scatter(sample2[:,0],sample2[:,1],label="class_1") # plt.scatter(sample3[:,0],sample3[:,1],label="class_2") # plt.scatter(sample4[:,0],sample4[:,1],label="class_3") # plt.scatter(sample5[:,0],sample5[:,1],label="class_4") # plt.scatter(sample6[:,0],sample6[:,1],label="class_5") # plt.scatter(sample7[:,0],sample7[:,1],label="class_6") # plt.scatter(sample8[:,0],sample8[:,1],label="class_7") # plt.scatter(sample9[:,0],sample9[:,1],label="class_8") # plt.scatter(sample10[:,0],sample10[:,1],label="class_9") # plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left') class SyntheticDataset(Dataset): """MosaicDataset dataset.""" def __init__(self, x, y): """ Args: csv_file (string): Path to the csv file with annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. 
""" self.x = x self.y = y #self.fore_idx = fore_idx def __len__(self): return len(self.y) def __getitem__(self, idx): return self.x[idx] , self.y[idx] #, self.fore_idx[idx] trainset = SyntheticDataset(X,Y) # testset = torchvision.datasets.MNIST(root='./data', train=False, download=True, transform=transform) classes = ('zero','one','two','three','four','five','six','seven','eight','nine') foreground_classes = {'zero','one','two'} fg_used = '012' fg1, fg2, fg3 = 0,1,2 all_classes = {'zero','one','two','three','four','five','six','seven','eight','nine'} background_classes = all_classes - foreground_classes background_classes trainloader = torch.utils.data.DataLoader(trainset, batch_size=100, shuffle=True) dataiter = iter(trainloader) background_data=[] background_label=[] foreground_data=[] foreground_label=[] batch_size=100 for i in range(50): images, labels = dataiter.next() for j in range(batch_size): if(classes[labels[j]] in background_classes): img = images[j].tolist() background_data.append(img) background_label.append(labels[j]) else: img = images[j].tolist() foreground_data.append(img) foreground_label.append(labels[j]) foreground_data = torch.tensor(foreground_data) foreground_label = torch.tensor(foreground_label) background_data = torch.tensor(background_data) background_label = torch.tensor(background_label) def create_mosaic_img(bg_idx,fg_idx,fg): """ bg_idx : list of indexes of background_data[] to be used as background images in mosaic fg_idx : index of image to be used as foreground image from foreground data fg : at what position/index foreground image has to be stored out of 0-8 """ image_list=[] j=0 for i in range(9): if i != fg: image_list.append(background_data[bg_idx[j]]) j+=1 else: image_list.append(foreground_data[fg_idx]) label = foreground_label[fg_idx] - fg1 # minus fg1 because our fore ground classes are fg1,fg2,fg3 but we have to store it as 0,1,2 #image_list = np.concatenate(image_list ,axis=0) image_list = torch.stack(image_list) return image_list,label desired_num = 3000 mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as list of 9 images fore_idx =[] # list of indexes at which foreground image is present in a mosaic image i.e from 0 to 9 mosaic_label=[] # label of mosaic image = foreground class present in that mosaic list_set_labels = [] for i in range(desired_num): set_idx = set() np.random.seed(i) bg_idx = np.random.randint(0,3500,8) set_idx = set(background_label[bg_idx].tolist()) fg_idx = np.random.randint(0,1500) set_idx.add(foreground_label[fg_idx].item()) fg = np.random.randint(0,9) fore_idx.append(fg) image_list,label = create_mosaic_img(bg_idx,fg_idx,fg) mosaic_list_of_images.append(image_list) mosaic_label.append(label) list_set_labels.append(set_idx) def create_avg_image_from_mosaic_dataset(mosaic_dataset,labels,foreground_index,dataset_number): """ mosaic_dataset : mosaic_dataset contains 9 images 32 x 32 each as 1 data point labels : mosaic_dataset labels foreground_index : contains list of indexes where foreground image is present so that using this we can take weighted average dataset_number : will help us to tell what ratio of foreground image to be taken. 
for eg: if it is "j" then fg_image_ratio = j/9 , bg_image_ratio = (9-j)/8*9 """ avg_image_dataset = [] for i in range(len(mosaic_dataset)): img = torch.zeros([5], dtype=torch.float64) for j in range(9): if j == foreground_index[i]: img = img + mosaic_dataset[i][j]*dataset_number/9 else : img = img + mosaic_dataset[i][j]*(9-dataset_number)/(8*9) avg_image_dataset.append(img) return torch.stack(avg_image_dataset) , torch.stack(labels) , foreground_index def calculate_loss(dataloader,model,criter): model.eval() r_loss = 0 with torch.no_grad(): for i, data in enumerate(dataloader, 0): inputs, labels = data inputs, labels = inputs.to("cuda"),labels.to("cuda") outputs = model(inputs) loss = criter(outputs, labels) r_loss += loss.item() return r_loss/i class MosaicDataset1(Dataset): """MosaicDataset dataset.""" def __init__(self, mosaic_list, mosaic_label,fore_idx): """ Args: csv_file (string): Path to the csv file with annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. """ self.mosaic = mosaic_list self.label = mosaic_label self.fore_idx = fore_idx def __len__(self): return len(self.label) def __getitem__(self, idx): return self.mosaic[idx] , self.label[idx] , self.fore_idx[idx] batch = 250 msd = MosaicDataset1(mosaic_list_of_images, mosaic_label, fore_idx) train_loader = DataLoader( msd,batch_size= batch ,shuffle=True) ``` **Focus Net** ``` class Focus_deep(nn.Module): ''' deep focus network averaged at zeroth layer input : elemental data ''' def __init__(self,inputs,output,K,d): super(Focus_deep,self).__init__() self.inputs = inputs self.output = output self.K = K self.d = d self.linear1 = nn.Linear(self.inputs,300) #,self.output) self.linear2 = nn.Linear(300,self.output) def forward(self,z): batch = z.shape[0] x = torch.zeros([batch,self.K],dtype=torch.float64) y = torch.zeros([batch,self.d], dtype=torch.float64) x,y = x.to("cuda"),y.to("cuda") for i in range(self.K): x[:,i] = self.helper(z[:,i] )[:,0] # self.d*i:self.d*i+self.d x = F.softmax(x,dim=1) # alphas x1 = x[:,0] for i in range(self.K): x1 = x[:,i] y = y+torch.mul(x1[:,None],z[:,i]) # self.d*i:self.d*i+self.d return y , x def helper(self,x): x = F.relu(self.linear1(x)) x = self.linear2(x) return x ``` **Classification Net** ``` class Classification_deep(nn.Module): ''' input : elemental data deep classification module data averaged at zeroth layer ''' def __init__(self,inputs,output): super(Classification_deep,self).__init__() self.inputs = inputs self.output = output self.linear1 = nn.Linear(self.inputs,50) self.linear2 = nn.Linear(50,self.output) def forward(self,x): x = F.relu(self.linear1(x)) x = self.linear2(x) return x ``` ``` where = Focus_deep(5,1,9,5).double() what = Classification_deep(5,3).double() where = where.to("cuda") what = what.to("cuda") def calculate_attn_loss(dataloader,what,where,criter,k): what.eval() where.eval() r_loss = 0 alphas = [] lbls = [] pred = [] fidices = [] with torch.no_grad(): for i, data in enumerate(dataloader, 0): inputs, labels, fidx = data lbls.append(labels) fidices.append(fidx) inputs = inputs.double() inputs, labels = inputs.to("cuda"),labels.to("cuda") avg,alpha = where(inputs) outputs = what(avg) _, predicted = torch.max(outputs.data, 1) pred.append(predicted.cpu().numpy()) alphas.append(alpha.cpu().numpy()) ent = np.sum(entropy(alpha.cpu().detach().numpy(), base=2, axis=1))/batch # mx,_ = torch.max(alpha,1) # entropy = np.mean(-np.log2(mx.cpu().detach().numpy())) # print("entropy of batch", 
entropy) loss = criter(outputs, labels) + k*ent r_loss += loss.item() alphas = np.concatenate(alphas,axis=0) pred = np.concatenate(pred,axis=0) lbls = np.concatenate(lbls,axis=0) fidices = np.concatenate(fidices,axis=0) #print(alphas.shape,pred.shape,lbls.shape,fidices.shape) analysis = analyse_data(alphas,lbls,pred,fidices) return r_loss/i,analysis def analyse_data(alphas,lbls,predicted,f_idx): ''' analysis data is created here ''' batch = len(predicted) amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0 for j in range (batch): focus = np.argmax(alphas[j]) if(alphas[j][focus] >= 0.5): amth +=1 else: alth +=1 if(focus == f_idx[j] and predicted[j] == lbls[j]): ftpt += 1 elif(focus != f_idx[j] and predicted[j] == lbls[j]): ffpt +=1 elif(focus == f_idx[j] and predicted[j] != lbls[j]): ftpf +=1 elif(focus != f_idx[j] and predicted[j] != lbls[j]): ffpf +=1 #print(sum(predicted==lbls),ftpt+ffpt) return [ftpt,ffpt,ftpf,ffpf,amth,alth] print("--"*40) criterion = nn.CrossEntropyLoss() optimizer_where = optim.Adam(where.parameters(),lr =0.001) optimizer_what = optim.Adam(what.parameters(), lr=0.001) acti = [] loss_curi = [] analysis_data = [] epochs = 1000 k=0.1 running_loss,anlys_data = calculate_attn_loss(train_loader,what,where,criterion,k) loss_curi.append(running_loss) analysis_data.append(anlys_data) print('epoch: [%d ] loss: %.3f' %(0,running_loss)) for epoch in range(epochs): # loop over the dataset multiple times ep_lossi = [] running_loss = 0.0 what.train() where.train() for i, data in enumerate(train_loader, 0): # get the inputs inputs, labels,_ = data inputs = inputs.double() inputs, labels = inputs.to("cuda"),labels.to("cuda") # zero the parameter gradients optimizer_where.zero_grad() optimizer_what.zero_grad() # forward + backward + optimize avg, alpha = where(inputs) outputs = what(avg) ent = np.sum(entropy(alpha.cpu().detach().numpy(), base=2, axis=1))/batch #entropy(alpha.cpu().numpy(), base=2, axis=1) # mx,_ = torch.max(alpha,1) # entropy = np.mean(-np.log2(mx.cpu().detach().numpy())) # print("entropy of batch", entropy) loss = criterion(outputs, labels) + k*ent # loss = criterion(outputs, labels) # print statistics running_loss += loss.item() loss.backward() optimizer_where.step() optimizer_what.step() running_loss,anls_data = calculate_attn_loss(train_loader,what,where,criterion,k) analysis_data.append(anls_data) print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss)) loss_curi.append(running_loss) #loss per epoch if running_loss<=0.05: break print('Finished Training') correct = 0 total = 0 with torch.no_grad(): for data in train_loader: images, labels,_ = data images = images.double() images, labels = images.to("cuda"), labels.to("cuda") avg, alpha = where(images) outputs = what(avg) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 3000 train images: %d %%' % ( 100 * correct / total)) analysis_data = np.array(analysis_data) plt.figure(figsize=(6,6)) plt.plot(np.arange(0,epoch+2,1),analysis_data[:,0],label="ftpt") plt.plot(np.arange(0,epoch+2,1),analysis_data[:,1],label="ffpt") plt.plot(np.arange(0,epoch+2,1),analysis_data[:,2],label="ftpf") plt.plot(np.arange(0,epoch+2,1),analysis_data[:,3],label="ffpf") plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.savefig("trends_synthetic_300_300.png",bbox_inches="tight") plt.savefig("trends_synthetic_300_300.pdf",bbox_inches="tight") analysis_data[-1,:2]/3000 running_loss,anls_data = 
calculate_attn_loss(train_loader,what,where,criterion,k) print(running_loss, anls_data) what.eval() where.eval() alphas = [] max_alpha =[] alpha_ftpt=[] alpha_ffpt=[] alpha_ftpf=[] alpha_ffpf=[] argmax_more_than_half=0 argmax_less_than_half=0 cnt =0 with torch.no_grad(): for i, data in enumerate(train_loader, 0): inputs, labels, fidx = data inputs = inputs.double() inputs, labels = inputs.to("cuda"),labels.to("cuda") avg, alphas = where(inputs) outputs = what(avg) _, predicted = torch.max(outputs.data, 1) batch = len(predicted) mx,_ = torch.max(alphas,1) max_alpha.append(mx.cpu().detach().numpy()) for j in range (batch): cnt+=1 focus = torch.argmax(alphas[j]).item() if alphas[j][focus] >= 0.5 : argmax_more_than_half += 1 else: argmax_less_than_half += 1 if (focus == fidx[j].item() and predicted[j].item() == labels[j].item()): alpha_ftpt.append(alphas[j][focus].item()) # print(focus, fore_idx[j].item(), predicted[j].item() , labels[j].item() ) elif (focus != fidx[j].item() and predicted[j].item() == labels[j].item()): alpha_ffpt.append(alphas[j][focus].item()) elif (focus == fidx[j].item() and predicted[j].item() != labels[j].item()): alpha_ftpf.append(alphas[j][focus].item()) elif (focus != fidx[j].item() and predicted[j].item() != labels[j].item()): alpha_ffpf.append(alphas[j][focus].item()) np.sum(entropy(alphas.cpu().numpy(), base=2, axis=1))/batch np.mean(-np.log2(mx.cpu().detach().numpy())) a = np.array([[0.1,0.9], [0.5, 0.5]]) -0.1*np.log2(0.1)-0.9*np.log2(0.9) entropy([9/10, 1/10], base=2), entropy([0.5, 0.5], base=2), entropy(a, base=2, axis=1) np.mean(-np.log2(a)) max_alpha = np.concatenate(max_alpha,axis=0) print(max_alpha.shape, cnt) np.array(alpha_ftpt).size, np.array(alpha_ffpt).size, np.array(alpha_ftpf).size, np.array(alpha_ffpf).size plt.figure(figsize=(6,6)) _,bins,_ = plt.hist(max_alpha,bins=50,color ="c") plt.title("alpha values histogram") plt.savefig("attention_model_2_hist") plt.figure(figsize=(6,6)) _,bins,_ = plt.hist(np.array(alpha_ftpt),bins=50,color ="c") plt.title("alpha values in ftpt") plt.savefig("attention_model_2_hist") plt.figure(figsize=(6,6)) _,bins,_ = plt.hist(np.array(alpha_ffpt),bins=50,color ="c") plt.title("alpha values in ffpt") plt.savefig("attention_model_2_hist") ```
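The entropy regulariser used above can be checked by hand: for each row of alphas, the base-2 entropy is `-sum(alpha_i * log2(alpha_i))`, and the loss adds `k` times the batch mean of that value. A small self-contained check with NumPy (the array below is made-up data, not output from the training run):

```
import numpy as np

def batch_entropy(alphas, eps=1e-12):
    """Mean base-2 entropy of a batch of attention vectors (rows sum to 1)."""
    alphas = np.asarray(alphas, dtype=np.float64)
    ent = -np.sum(alphas * np.log2(alphas + eps), axis=1)  # entropy per row
    return ent.mean()

alphas = np.array([[0.1, 0.9], [0.5, 0.5]])  # made-up example batch
print(batch_entropy(alphas))  # ~(0.469 + 1.0) / 2 = 0.734
```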
# ncview

### Ncview is a useful tool that allows for quick and easy viewing of NetCDF files

Version: 2.1.1 (Available on NCI's Virtual Desktop Infrastructure)

`____________________________________________________________________________________`

### 1. Basic Usage:

From the command line:

```
$ ncview <file_path>
```

<img src="./images/ncview1.png" width=700>

This will display the main `ncview` window with all the available viewing options. Select a variable from the `Var` box.

<img src="./images/ncview2.png" width=500>

A new window will display a plot of the selected variable. If the data includes a `time` dimension, use the single forward/backward arrows from the animation panel to step through time. (The double arrows will produce an animation, and the 'delay' option can be used to slow down the speed at which it cycles through time.)

<img src="./images/ncview3.png">

A second panel can be used to modify a range of plot settings.

<img src="./images/ncview4.png">

For example, the `3gauss` option can be selected to change the colour map:

<img src="./images/ncview5.png">

The table below provides a quick overview of some of the useful viewing options available:

| Option | Usage |
| :-------------: | ------------- |
| 3gauss | Cycle through colour maps |
| Inv P | Invert the plot |
| Inv C | Invert the colour scale |
| Mag X1 | Zoom in/out (right/left click) |
| Axes | Modify axes |

### Additional command line options:

```
usage: ncview [options] datafiles
Options
-minmax: selects how rapidly minimum and maximum values in the data files will be determined; by scanning every third time entry ("-minmax fast"), every fifth time entry ("-minmax med"), every tenth ("-minmax slow"), or all entries ("-minmax all").
-frames: Dump out PNG images (to make a movie, for instance)
-nc: Specify number of colors to use.
-no1d: Do NOT allow 1-D variables to be displayed.
-repl: Set default blowup type to replicate rather than bilinear.
-calendar: Specify time calendar to use, overriding value in file. Known: noleap standard gregorian 365_day 360_day.
-private: Use a private colormap.
-debug: Print lots of debugging info.
-beep: Ring the bell when the movie restarts at frame zero.
-extra: Put some extra information on the display window.
-mtitle: My title to use on the display window.
-noautoflip: Do not automatically flip image, even if dimensions indicate that it would make sense.
-w: print the lack-of-warranty blurb.
-small: Keep popup window as small as possible by default.
-shrink_mode: Shrink image assuming integer classes, so most common value in sub-block returned instead of arithmetic mean.
-listsel_max NN: max number of vars allowed before switching to menu selection
-no_color_ndims: do NOT color the var selection buttons by their dimensionality
-no_auto_overlay: do NOT automatically put on continental overlays
-autoscale: scale color map of EACH frame to range of that frame. Note: MUCH SLOWER!
-missvalrgb RRR GGG BBB: specifies 3 integers (range: 0 to 255) to use for missing value color
-maxsize: specifies max size of window before scrollbars are added. Either a single integer between 30 and 100 giving percentage, or two integers separated by a comma giving width and height. Ex: -maxsize 75 or -maxsize 800,600
-c: print the copying policy.
datafiles:
You can have up to 32 of these. They must all be in the same general format, or have different variables in them. Ncview tries its best under such circumstances.
```
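The options above can be combined on a single command line. For example, the following (with a hypothetical file name) scans every time entry when computing the colour range, forces a 365-day calendar, and caps the window size:

```
$ ncview -minmax all -calendar 365_day -maxsize 800,600 ocean_temp.nc
```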
<a href="https://colab.research.google.com/github/zhujiajunbryan/ProtTrans/blob/master/bert_none_DNA.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` !pip -q install transformers seqeval SentencePiece biopython import torch from transformers import AutoTokenizer, Trainer, TrainingArguments, AutoModelForSequenceClassification from torch.utils.data import Dataset import os import pandas as pd import requests from tqdm.auto import tqdm import numpy as np from sklearn.metrics import accuracy_score, precision_recall_fscore_support import re model_name = 'Rostlab/prot_bert_bfd' class DeepLocDataset(Dataset): """Face Landmarks dataset.""" def __init__(self, split="train", tokenizer_name='Rostlab/prot_bert_bfd', max_length=1024): """ Args: csv_file (string): Path to the csv file with annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. """ self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, do_lower_case=False) if split=="train": self.seqs, self.labels = self.load_tr() elif split=="valid": self.seqs, self.labels = self.load_ev() elif split=="test": self.seqs, self.labels = self.load_te() self.max_length = max_length def load_tr(self): f = open("/content/drive/MyDrive/TF-DATA/Train_p.fasta", "r") lines = f.readlines() tr_pseqs = [] for ele in lines: if not ele.startswith('>'): tr_pseqs.append(ele) f = open("/content/drive/MyDrive/TF-DATA/Train_n.fasta", "r") lines = f.readlines() tr_nseqs = [] for ele in lines: if not ele.startswith('>'): tr_nseqs.append(ele) tr_seqs = tr_nseqs + tr_pseqs tr_label = [0] * len(tr_nseqs) + [1] * len(tr_pseqs) from sklearn.model_selection import train_test_split tr_seqs,ev_seqs,tr_label,ev_label = train_test_split(tr_seqs,tr_label,test_size=0.2,random_state=1,shuffle=True) return tr_seqs,tr_label def load_ev(self): f = open("/content/drive/MyDrive/TF-DATA/Train_p.fasta", "r") lines = f.readlines() tr_pseqs = [] for ele in lines: if not ele.startswith('>'): tr_pseqs.append(ele) f = open("/content/drive/MyDrive/TF-DATA/Train_n.fasta", "r") lines = f.readlines() tr_nseqs = [] for ele in lines: if not ele.startswith('>'): tr_nseqs.append(ele) tr_seqs = tr_nseqs + tr_pseqs tr_label = [0] * len(tr_nseqs) + [1] * len(tr_pseqs) from sklearn.model_selection import train_test_split tr_seqs,ev_seqs,tr_label,ev_label = train_test_split(tr_seqs,tr_label,test_size=0.2,random_state=1,shuffle=True) return ev_seqs,ev_label def load_te(self): f = open("/content/drive/MyDrive/TF-DATA/te_p.fasta", "r") lines = f.readlines() te_pseqs = [] for ele in lines: if not ele.startswith('>'): te_pseqs.append(ele) f = open("/content/drive/MyDrive/TF-DATA/te_n.fasta", "r") lines = f.readlines() te_nseqs = [] for ele in lines: if not ele.startswith('>'): te_nseqs.append(ele) te_seqs = te_nseqs + te_pseqs te_label = [0] * len(te_nseqs) + [1] * len(te_pseqs) return te_seqs,te_label def __len__(self): return len(self.labels) def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() seq = " ".join("".join(self.seqs[idx].split())) seq = re.sub(r"[UZOB]", "X", seq) seq_ids = self.tokenizer(seq, truncation=True, padding='max_length', max_length=self.max_length) sample = {key: torch.tensor(val) for key, val in seq_ids.items()} sample['labels'] = torch.tensor(self.labels[idx]) return sample train_dataset = DeepLocDataset(split="train", tokenizer_name=model_name, max_length=1024) val_dataset = DeepLocDataset(split="valid", 
tokenizer_name=model_name, max_length=1024) test_dataset = DeepLocDataset(split="test", tokenizer_name=model_name, max_length=1024) print(len(train_dataset),len(val_dataset),len(test_dataset)) def compute_metrics(pred): labels = pred.label_ids preds = pred.predictions.argmax(-1) precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='binary') acc = accuracy_score(labels, preds) return { 'accuracy': acc, 'f1': f1, 'precision': precision, 'recall': recall } def model_init(): return AutoModelForSequenceClassification.from_pretrained(model_name) training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=6, # total number of training epochs per_device_train_batch_size=1, # batch size per device during training per_device_eval_batch_size=32, # batch size for evaluation warmup_steps=5, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', # directory for storing logs logging_steps=200, # How often to print logs do_train=True, # Perform training do_eval=True, # Perform evaluation evaluation_strategy="epoch", # evalute after eachh epoch gradient_accumulation_steps=64, # total number of steps before back propagation fp16=False, # Use mixed precision fp16_opt_level="02", # mixed precision mode run_name="ProBert-BFD-MS", # experiment name seed=2, learning_rate=1e-5 # Seed for experiment reproducibility 3x3 ) trainer = Trainer( model_init=model_init, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=val_dataset, # evaluation dataset compute_metrics = compute_metrics, # evaluation metrics ) trainer.train() predictions, label_ids, metrics = trainer.predict(test_dataset) print(metrics) tokenizer = AutoTokenizer.from_pretrained('Rostlab/prot_bert_bfd', do_lower_case=False,local_files_only=False) tokenizer.save_pretrained('/content/drive/MyDrive/savefinal') trainer.save_model('/content/drive/MyDrive/savefinal') ```
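Once the fine-tuned model and tokenizer have been saved, they can be reloaded for inference without re-running training. A minimal sketch, assuming the save directory used above and applying the same preprocessing as the `DeepLocDataset` class (space-separated residues, rare amino acids mapped to X); the input sequence is a made-up example:

```
import re
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

save_dir = '/content/drive/MyDrive/savefinal'  # directory written by save_pretrained/save_model above
tokenizer = AutoTokenizer.from_pretrained(save_dir, do_lower_case=False)
model = AutoModelForSequenceClassification.from_pretrained(save_dir).eval()

seq = "MKTAYIAKQR"                            # made-up example sequence
seq = " ".join(re.sub(r"[UZOB]", "X", seq))   # same preprocessing as in the Dataset class
inputs = tokenizer(seq, return_tensors="pt", truncation=True, max_length=1024)

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # 0 = negative class, 1 = positive class (per the labels above)
```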
# WRFcast Tutorial This tutorial will walk through forecast data from your own WRF forecast model data using the wrfcast.py module within pvlib. This tutorial has been tested against the following package versions: * Python * IPython * pandas * matplotlib * netcdf4 1.4.2 It should work with other Python and Pandas versions. It requires pvlib >= 0.3.0 and IPython >= 3.0. Authors: * Jeffrey Sward (jas983@cornell.edu), Cornell University, November 2019 ``` %matplotlib inline import matplotlib.pyplot as plt # built in python modules import datetime import os import inspect import sys # python add-ons import numpy as np import pandas as pd import xarray as xr import netCDF4 import wrf # # Import the pvlib module if sys.platform == 'linux': sys.path.append('/home/jsward/Documents/01_Research/01_Renewable_Analysis/WRF/pvlib-python') import pvlib from pvlib.wrfcast import WRF # Find the absolute file path to your pvlib installation pvlib_abspath = os.path.dirname(os.path.abspath(inspect.getfile(pvlib))) # absolute path to WRF data file datapath = os.path.join(pvlib_abspath, 'data', 'wrfout_d01_2011-01-24_01:00:00') # Read in the wrfout file using the netCDF4.Dataset method (I think you can also do this with an xarray method) netcdf_data = netCDF4.Dataset(datapath) netcdf_data # Create an xarray.Dataset from the wrf qurery_variables. query_variables = [ 'Times', 'T2', 'U10', 'V10', 'CLDFRA', 'SWDDNI', 'SWDDIF' ] first = True for key in query_variables: var = wrf.getvar(netcdf_data, key, timeidx=wrf.ALL_TIMES) if first: solar_data = var first = False else: solar_data = xr.merge([solar_data, var]) variables = { 'times': 'times', 'XLAT': 'lat', 'XLONG': 'lon', 'T2': 'temp_air', 'U10': 'wind_speed_u', 'V10': 'wind_speed_v', 'CLDFRA': 'total_clouds', 'SWDDNI': 'dni', 'SWDDIF': 'dhi' } solar_data = xr.Dataset.rename(solar_data, variables) times = solar_data.times ntimes = solar_data.sizes['Time'] nlat = solar_data.sizes['south_north'] nlon = solar_data.sizes['west_east'] solar_data # Explore how the WRF forecast model behaves fm = WRF() wind_speed = fm.uv_to_speed(solar_data) temp_air = fm.kelvin_to_celsius(solar_data['temp_air']) # ghi = fm.dni_and_dhi_to_ghi(solar_data['dni'], solar_data['dhi']) # Convert xarray Datasets to a pandas DataFrames solar_data = solar_data.to_dataframe() times = times.to_dataframe() solar_data # Run the solar position algorithm for time, lat, and lon indices, and concatonate them into a single DataFrame numerical_time_indices = range(0, ntimes) lat_indices = range(0, nlat) lon_indices = range(0, nlon) first = True for num_time_idx in numerical_time_indices: time = times.index.get_level_values('Time')[num_time_idx] print(f'Time Index: {time}') for lat_idx in lat_indices: for lon_idx in lon_indices: # print(f'Time Index: {time}') # print(f'\tLatitude index: {lat_idx}') # print(f'\t\tLongitude index: {lon_idx}') solpos_new = pvlib.solarposition.spa_xarray_python(time, solar_data['lat'].loc[time, 0, lat_idx, lon_idx], solar_data['lon'].loc[time, 0, lat_idx, lon_idx], lat_idx, lon_idx) if first: solpos = solpos_new first = False else: solpos = pd.concat([solpos, solpos_new]) solpos solpos = solpos.to_dataframe() solar_data['zenith'] = solpos['zenith'] solar_data = solar_data.to_xarray() solpos new_filename_1 = './solpos_d01_2011-01-24_01:00:00.nc' solpos.to_netcdf(path=new_filename_1) ```
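The `uv_to_speed` and `kelvin_to_celsius` helpers used above amount to two simple element-wise conversions, which can be sanity-checked directly on the arrays. A minimal sketch with NumPy on made-up values (not a replacement for the pvlib methods, just the underlying arithmetic):

```
import numpy as np

u10 = np.array([3.0, -4.0])      # made-up U10 values, m/s
v10 = np.array([4.0, 3.0])       # made-up V10 values, m/s
t2 = np.array([273.15, 300.0])   # made-up 2 m temperatures, K

wind_speed = np.sqrt(u10**2 + v10**2)  # magnitude of the horizontal wind vector
temp_air_c = t2 - 273.15               # Kelvin to Celsius

print(wind_speed)   # [5. 5.]
print(temp_air_c)   # [ 0.   26.85]
```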
# Harmonizome ETL: GWAS Catalog Created by: Charles Dai <br> Credit to: Moshe Silverstein Data Source: http://www.ebi.ac.uk/gwas/docs/file-downloads ``` # appyter init from appyter import magic magic.init(lambda _=globals: _()) import sys import os from datetime import date import re import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import harmonizome.utility_functions as uf import harmonizome.lookup as lookup %load_ext autoreload %autoreload 2 ``` ### Notebook Information ``` print('This notebook was run on:', date.today(), '\nPython version:', sys.version) ``` # Initialization ``` %%appyter hide_code {% do SectionField( name='data', title='Upload Data', img='load_icon.png' ) %} %%appyter code_eval {% do DescriptionField( name='description', text='The example below was sourced from <a href="http://www.ebi.ac.uk/gwas/docs/file-downloads" target="_blank">www.ebi.ac.uk/gwas</a>. If clicking on the example does not work, it should be downloaded directly from the source website.', section='data' ) %} {% set df_file = FileField( constraint='.*\.tsv$', name='associations', label='All Associations (tsv)', default='gwas_catalog_v1.0.2-associations_e100_r2020-06-04.tsv', examples={ 'gwas_catalog_v1.0.2-associations_e100_r2020-06-04.tsv': 'https://www.ebi.ac.uk/gwas/api/search/downloads/alternative' }, section='data' ) %} ``` ### Load Mapping Dictionaries ``` symbol_lookup, geneid_lookup = lookup.get_lookups() ``` ### Output Path ``` output_name = 'gwas_catalog' path = 'Output/GWAS-CATALOG' if not os.path.exists(path): os.makedirs(path) ``` # Load Data ``` %%appyter code_exec df = pd.read_csv( {{df_file}}, sep='\t', usecols=['DISEASE/TRAIT', 'MAPPED_GENE'] ) df.head() df.shape ``` # Pre-process Data ## Split Gene Lists ``` df = df.dropna() df['MAPPED_GENE'] = df['MAPPED_GENE'].str.split(pat= '; | - |, ') df.head() df = df.explode('MAPPED_GENE') df = df.set_index('MAPPED_GENE') df.index.name = 'Gene Symbol' df.columns = ['Disease/Trait'] df.head() df.shape ``` # Filter Data ## Map Gene Symbols to Up-to-date Approved Gene Symbols ``` df = uf.map_symbols(df, symbol_lookup, remove_duplicates=True) df.shape ``` # Analyze Data ## Create Binary Matrix ``` binary_matrix = uf.binary_matrix(df) binary_matrix.head() binary_matrix.shape uf.save_data(binary_matrix, path, output_name + '_binary_matrix', compression='npz', dtype=np.uint8) ``` ## Create Gene List ``` gene_list = uf.gene_list(binary_matrix, geneid_lookup) gene_list.head() gene_list.shape uf.save_data(gene_list, path, output_name + '_gene_list', ext='tsv', compression='gzip', index=False) ``` ## Create Attribute List ``` attribute_list = uf.attribute_list(binary_matrix) attribute_list.head() attribute_list.shape uf.save_data(attribute_list, path, output_name + '_attribute_list', ext='tsv', compression='gzip') ``` ## Create Gene and Attribute Set Libraries ``` uf.save_setlib(binary_matrix, 'gene', 'up', path, output_name + '_gene_up_set') uf.save_setlib(binary_matrix, 'attribute', 'up', path, output_name + '_attribute_up_set') ``` ## Create Attribute Similarity Matrix ``` attribute_similarity_matrix = uf.similarity_matrix( binary_matrix.T, 'jaccard', sparse=True) attribute_similarity_matrix.head() uf.save_data(attribute_similarity_matrix, path, output_name + '_attribute_similarity_matrix', compression='npz', symmetric=True, dtype=np.float32) ``` ## Create Gene Similarity Matrix ``` gene_similarity_matrix = uf.similarity_matrix(binary_matrix, 'jaccard', sparse=True) gene_similarity_matrix.head() 
uf.save_data(gene_similarity_matrix, path, output_name + '_gene_similarity_matrix', compression='npz', symmetric=True, dtype=np.float32) ``` ## Create Gene-Attribute Edge List ``` edge_list = uf.edge_list(binary_matrix) uf.save_data(edge_list, path, output_name + '_edge_list', ext='tsv', compression='gzip') ``` # Create Downloadable Save File ``` uf.archive(path) ``` ### Link to download output files: [click here](./output_archive.zip)
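The similarity matrices above come from the Harmonizome utility call `uf.similarity_matrix(..., 'jaccard')`. As a rough illustration of the underlying computation (a sketch, not the library's actual implementation), Jaccard similarity between binary rows can be obtained from scikit-learn's pairwise distances:

```
import pandas as pd
from sklearn.metrics import pairwise_distances

# Toy binary gene x attribute matrix (a stand-in for binary_matrix above)
toy = pd.DataFrame(
    [[1, 0, 1], [1, 1, 0], [0, 1, 1]],
    index=['GENE_A', 'GENE_B', 'GENE_C'],
    columns=['trait_1', 'trait_2', 'trait_3']
)

# Jaccard distance between rows; similarity = 1 - distance
dist = pairwise_distances(toy.values.astype(bool), metric='jaccard')
similarity = pd.DataFrame(1 - dist, index=toy.index, columns=toy.index)
print(similarity.round(2))
```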
## PREAMBLE ``` !pip install matplotlib==3.5.1 !pip install xgboost==1.5.1 from google.colab import drive drive.mount('/content/drive') import numpy as np import pandas as pd import os import sys import xgboost as xgb import matplotlib.pyplot as plt import datetime as dt import missingno as msno import seaborn as sns import sklearn from scipy import stats %matplotlib inline state = 32 np.random.seed(state) print ('numpy version:', np.__version__, 'pandas version:', pd.__version__, 'Xgb version:', xgb.__version__ ) sys.path.append('/content/drive/MyDrive/Transshipment/code/') datapath = '/content/drive/MyDrive/Transshipment/data/' modelspath = '/content/drive/MyDrive/Transshipment/models/' from importlib import reload import utils reload(utils) from utils import * import smogn reload(smogn) from smogn import * import pipeline reload(pipeline) from pipeline import * ``` ## READ DATA ``` df = pd.read_excel(datapath +'/VesselData.xlsx') df df.info() ``` ### INVESTIGATE TARGET VARIABLES load1, discharge1 --> ore transshipment <br> load2, discharge2 --> coal transshipment <br> load3, discharge3 --> oil transshipment <br> load4, discharge4 --> petrol transshipment <br> ``` targets_df = df.filter(regex= r"load\d*|discharge\d*") targets_df ``` Count non-zero values in the target space ``` ax = targets_df.apply(lambda x: x[x > 0].count()).plot.bar(figsize = (10,5)) ax.bar_label(ax.containers[0], fontsize = 12) ax.set_title('Non-zero values per target', fontsize = 15) plt.show() ``` #### TARGETS INITIAL INSIGHTS Based on a quick search https://bulkcarrierguide.com/cargo.html, I proceeded with an assumption: <br> The handling of ore & coal could be similar (i.e., solid bulk cargo).<br> Likewise, oil & petrol (i.e., liquid bulk) might have similar handling routines in terms of safety, regulations, transportation. <br> Therefore, we can maximize the supervision signal by concatenating similar bulk load and discharge targets, such that:<br> 1. concatenate: load1 + load2 --> solid bulk load <br> 2. concatenate: load3 + load4 --> liquid bulk load <br> 3. concatenate: discharge1 + discharge2 --> solid bulk discharge <br> 4. concatenate: discharge3 + discharge4 --> liquid bulk discharge <br> 5. Construct a binary variable ```bulk``` to define the cargo type {'solid': 0, 'liquid': 1} ``` processor = preprocess(dataframe= df) targs = processor.format_targets(targets_df) targs ``` ### FEATURES SELECTION & ENGINEERING 1. Dropping non-informative features, (i.e., id columns): ['previousportid', 'nextportid','vesselid'] 2. Engineering a new feature ```n_stevs``` outlining the number of stevedores per ship 3. 
Construct a ```process_time``` feature as the difference in days from the 2 dates ```latesteta ``` - ```earliesteta``` ``` pd.options.mode.chained_assignment = None processor = preprocess(dataframe= df) feature_df = processor.format_features() feature_df ``` Investigating missing values ``` msno.matrix(feature_df.sort_values(by=['vesseltype'], ascending=True), figsize= (15,7), fontsize = 12) plt.show() feature_df[['load','discharge']].isna().groupby(by=feature_df['vesseltype']).sum().plot.bar(figsize = (10,5), title = 'NaNs per vesselType') plt.grid(axis = 'y', linestyle='-') feature_df[['load','discharge', 'vesseltype']].groupby(by=feature_df['vesseltype'], dropna= True).count().plot.bar(figsize = (10,5), title = 'Target/Feature dist per vesselType') plt.legend(['load','discharge','num_instances']) plt.grid(axis = 'y', linestyle='-') feature_df[['load','discharge']].groupby(by=feature_df['vesseltype'], dropna= True).sum().plot.bar(figsize = (10,5), title = 'Label Value per vesselType') plt.grid(axis = 'y', linestyle='-') # feature_df[['vesseltype']].groupby(by=feature_df['vesseltype'], dropna= True).count().plot.bar(figsize = (10,5), title = 'Row Count per vesseltype') plt.show() load_mean = feature_df['load'].mean() discharge_mean = feature_df['discharge'].mean() feature_df.hist(['load', 'discharge'], bins=20, figsize=(15,6), edgecolor='k') plt.subplots_adjust(wspace= 0.1) feature_df[['load','discharge']].plot.density(bw_method = 0.2, figsize = (15,6)) plt.axvline(x=load_mean, color = 'magenta', linestyle='dashed', linewidth = 2) plt.axvline(x=discharge_mean, color = 'orange', linestyle='dashed', linewidth = 2) plt.text(load_mean, y= 0.1e-5, s= 'mean_load') plt.text(discharge_mean, y= 0, s= 'mean_discharge') plt.subplots_adjust(wspace= 0.0001) plt.show() feature_df.boxplot(['vesseldwt','load','discharge'], vert = False, figsize=(15,6)) feature_df.boxplot(['load','discharge'], by='vesseltype', figsize=(14,6)) plt.subplots_adjust(wspace= 0.1) plt.show() ``` #### FEATURE ENCODING: One-hot encoding the categorical variables, and returning separate load/discharge dataset. Our focus at this stage is predicting discharge. ``` load_dataset, discharge_dataset = processor.encode_features() discharge_dataset ``` # BALANCING PIPELINE ``` from sklearn.model_selection import train_test_split X = discharge_dataset.drop('discharge', axis= 1) y = discharge_dataset[['discharge']] ############################################ xtrain, xtest, ytrain, ytest = train_test_split(X,y, test_size= 0.2, shuffle= True) print (xtrain.shape, xtest.shape) xtrain, xval, ytrain, yval = train_test_split(xtrain, ytrain, test_size= 0.25, shuffle= True) print (xtrain.shape, xval.shape) ``` ### 1. SMOGN Below is a demonstration of applying `synthetic oversampling with gaussian noise (SMOGN) on a training split. ``` xout, yout = apply_smogn(xtrain, ytrain) traindata = pd.concat([xout, yout],axis= 1).reset_index() fig, ax = plt.subplots(1,1, figsize = (10,6)) sns.kdeplot(ytrain.discharge, ax = ax) # # sns.kdeplot(simple.discharge, ax = ax[1]) sns.kdeplot(traindata.discharge, ax = ax) plt.legend(['y_train', 'y_train-SMOGN']) plt.title('KDE of Training Targets') plt.show() ``` ### 2. LDS - KERNEL SMOOTHING Label distribution smoothing aims to address the discontinuity in the regression target space, by smoothing and approximating an effective distribution of the targets from their original empirical distribution. 
<br> Below is a demonstration of estimating the LDS distribution by re-binning the discharge targets and subsequently using this distribution to calculate `inverse frequency reweighting` for each data point. ```
from importlib import reload
import utils
reload(utils)
from utils import *

lds = LDS(labels= ytrain.discharge, buffer= 10000)
bin_df = lds.bin_dataframe(ytrain)
x_wts_df = lds.weight_df(bin_df, weight= 'inverse')
x_wts_df

smoothed, emp = lds.effective_dist(smooth= 3)

bins, freqs = zip(*emp.items())
sns.set(font_scale = 1)
fig, (ax0, ax1) = plt.subplots(2,1, figsize= (50,20), )
# sns.barplot(x= list(bins), y= list(freqs), ax =ax0)
ax0.set_xticklabels(ax0.get_xticklabels(), rotation=90, ha='right')
sns.histplot(data= ytrain, ax =ax0, bins = 300)
ax0.set_xticklabels(bins, fontsize = 20)

bins, freqs = zip(*smoothed.items())
p = sns.barplot(x= list(bins), y= list(freqs), ax = ax1)
ax1.set_xticklabels(ax1.get_xticklabels(), rotation=90, ha='right', fontsize = 15)
plt.subplots_adjust(hspace= .2)
plt.show()
```
# INTEGRATED CROSS VALIDATION PIPELINE
```
from sklearn.experimental import enable_halving_search_cv
from sklearn.model_selection import GridSearchCV, HalvingGridSearchCV, ShuffleSplit, PredefinedSplit, LeaveOneOut, cross_val_score, cross_val_predict, KFold, GroupKFold, HalvingRandomSearchCV, RepeatedKFold, cross_validate
from sklearn.metrics import make_scorer, mean_absolute_error, median_absolute_error
from scipy.stats import t
```
#### FITTING A VANILLA XGBOOST REGRESSOR
```
params = {'objective': 'reg:squarederror',
          'learning_rate': 0.3,
          'gamma': 0,
          'max_depth': 6,
          'missing': 0,
          'seed': 32,
          'subsample': 0.5,
          'colsample_bytree': 0.5,
          'reg_lambda': 0,
          'n_estimators': 3000,
          'eval_metric': ['mae']}

vanilla = xgb.XGBRegressor(**params, verbosity = 1, validate_parameters = True)
eval_sets = [ (xtrain, ytrain), (xval, yval)]
# model.get_xgb_params()
vanilla.fit(xtrain, ytrain, eval_set= eval_sets, verbose= True, early_stopping_rounds= 10 )
```
#### SEARCH SPACE
```
#### RANDOM SEARCH DISTs
# 1. a beta prime distribution for gamma
gamma_dist = stats.betaprime(a = 2,b =6, scale = 10)
# 2. an exponential distribution for lambda
lambda_dist = stats.expon(scale = 3)
# 3. a log uniform distribution for learning rate
lr_dist = stats.loguniform(.0001, .5)
# 4. a negative binomial distribution for min_child_weight
maxchild_dist = stats.nbinom(2, .4, loc = 1)
# 5. a beta distribution for subsample / colsample_bytree
colrow_dist = stats.beta(2.5,1)
# 6. a negative binomial distribution for max_depth
treedepth_dist = stats.nbinom(20, .5)

search_space = {'objective': ['reg:squarederror'],
                'learning_rate': lr_dist,
                'max_depth': treedepth_dist,
                'subsample': colrow_dist,
                'colsample_bytree': colrow_dist,
                'reg_lambda': lambda_dist,
                'min_child_weight': maxchild_dist,
                'gamma': gamma_dist}
```
# EXECUTE PIPELINE
```
# 1. setting the inner random parameter search loop @ k = 4 folds
inner_cv = RepeatedKFold(n_splits=4, n_repeats= 1, random_state= state)

# 2. setting the outer error estimation loop @ k = 4 folds
outer_cv = KFold(n_splits= 4, shuffle= True, random_state= state)

# 3.
Initializing a set of parameters init_params = vanilla.get_params() ``` Below is the execution of the Integrated balancing and nested cross validation pipeline ``` from importlib import reload import pipeline reload(pipeline) import utils reload(utils) from pipeline import * import smogn reload(smogn) from smogn import * pipe = pipeline(vanilla.get_params(), inner_cv= inner_cv, outer_cv= outer_cv) bestmodel, scores_df = pipe.nested_validation(X, y, n_candids= 30) scores_df.to_pickle(datapath + 'scores_df.pkl') scores_df ``` ### CV SCORES ``` scores_df bestmodel.save_model('/content/drive/MyDrive/Transshipment/models/bestmodel_2.json') ``` ### BEST MODEL PARAMETERS ``` bestmodel.get_xgb_params() pd.DataFrame.from_dict(bestmodel.get_xgb_params(), orient= 'index') ``` ### FEATURE IMPORTANCE ``` fig, ax = plt.subplots(figsize=(15, 8)) xgb.plot_importance(bestmodel, ax = ax) plt.show() ``` ### INSTANCES OF THE ESTIMATOR TREES ``` fig, ax = plt.subplots(figsize=(30, 15)) xgb.plot_tree(bestmodel, num_trees= 0, ax = ax, rankdir= 'LR') fig, ax = plt.subplots(figsize=(30, 20)) xgb.plot_tree(bestmodel, num_trees= 10, ax = ax, rankdir= 'LR') plt.show() m = xgb.XGBRegressor() m.load_model(modelspath + 'bestmodel_2.json') m ```
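The LDS weighting above is produced by the project's own `LDS` class. The fragment below is a generic sketch of my own, written under the assumption that "inverse frequency reweighting" means one over the frequency of a sample's target bin; it shows how such weights could be derived with NumPy and passed to an XGBoost regressor through the standard `sample_weight` argument.

```
import numpy as np
import xgboost as xgb

def inverse_frequency_weights(y, n_bins=30):
    """Bin a continuous target and weight each sample by 1 / (its bin's frequency)."""
    counts, edges = np.histogram(y, bins=n_bins)
    bin_idx = np.clip(np.digitize(y, edges[1:-1]), 0, n_bins - 1)
    freq = counts[bin_idx].astype(float)
    weights = 1.0 / np.maximum(freq, 1.0)
    return weights / weights.mean()   # normalize so the average weight is 1

# Usage sketch with the splits defined earlier (xtrain / ytrain assumed to exist):
# w = inverse_frequency_weights(ytrain['discharge'].values)
# reg = xgb.XGBRegressor(objective='reg:squarederror', n_estimators=500)
# reg.fit(xtrain, ytrain, sample_weight=w)
```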
Importing Required Libraries
```
import os
import PIL
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
from keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import EfficientNetB0

from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)

p = "/content/gdrive/MyDrive/Agriculture Vision Dataset/New Augmented"
os.chdir(p)
class_count = len(os.listdir(p))
print(class_count)

batch_size = 32
image_size = (512, 512)
```
Splitting the dataset into training and validation subsets
```
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "/content/gdrive/MyDrive/Agriculture Vision Dataset/New Augmented",
    validation_split=0.1,
    subset="training",
    seed=123,
    image_size=image_size,
    batch_size=32
)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "/content/gdrive/MyDrive/Agriculture Vision Dataset/New Augmented",
    validation_split=0.1,
    subset="validation",
    seed=123,
    image_size=image_size,
    batch_size=32
)
```
Loading the InceptionV3 model
```
from tensorflow.keras.applications.inception_v3 import InceptionV3

base_model = InceptionV3(input_shape=(512, 512, 3), include_top=False, weights=None)
base_model1 = InceptionV3(input_shape=(512, 512, 3), include_top=False, weights=None)  # second copy, not used below
```
Training only the last 12 layers (all earlier layers are frozen)
```
for layer in base_model.layers[0:-12]:
    layer.trainable = False
```
Modifying the architecture and compiling the model
```
flat = layers.Flatten()(base_model.output)
x = layers.Dense(512, activation='relu')(flat)
output = layers.Dense(4)(x)  # raw logits: SparseCategoricalCrossentropy(from_logits=True) applies softmax internally
model = tf.keras.models.Model(base_model.input, output)

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
```
Model Training

Number of epochs = 10
```
reshist = model.fit(train_ds, validation_data=val_ds, epochs=10)

import matplotlib.pyplot as plt

acc = reshist.history['accuracy']
val_acc = reshist.history['val_accuracy']
loss = reshist.history['loss']
val_loss = reshist.history['val_loss']
epochs_range = range(10)

plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
```
Number of epochs = 8
```
reshist1 = model.fit(train_ds, validation_data=val_ds, epochs=8)

acc = reshist1.history['accuracy']
val_acc = reshist1.history['val_accuracy']
loss = reshist1.history['loss']
val_loss = reshist1.history['val_loss']
epochs_range = range(8)

plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
```
Number of epochs = 20
```
reshist2 = model.fit(train_ds, validation_data=val_ds, epochs=20)

acc = reshist2.history['accuracy']
val_acc = reshist2.history['val_accuracy']
loss = reshist2.history['loss']
val_loss = reshist2.history['val_loss']
epochs_range = range(20)

plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
```
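One thing to note about the runs above: `InceptionV3` is instantiated with `weights=None`, so the frozen layers keep their random initialization and no ImageNet knowledge is transferred. If transfer learning is the intent, a typical setup looks like the sketch below. This is my own suggestion, not part of the original experiment; the `Rescaling` layer assumes InceptionV3's expected [-1, 1] pixel range.

```
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.applications.inception_v3 import InceptionV3

# Load ImageNet weights so the frozen layers provide useful features
pretrained = InceptionV3(input_shape=(512, 512, 3), include_top=False, weights='imagenet')
for layer in pretrained.layers[:-12]:
    layer.trainable = False

inputs = tf.keras.Input(shape=(512, 512, 3))
x = layers.Rescaling(scale=1.0 / 127.5, offset=-1.0)(inputs)  # map raw pixels to [-1, 1]
x = pretrained(x)
x = layers.GlobalAveragePooling2D()(x)   # lighter than Flatten for 512x512 inputs
x = layers.Dense(512, activation='relu')(x)
outputs = layers.Dense(4)(x)             # logits for 4 classes

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
```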
# Problem Statement

A natural language is a medium of communication used by human beings. Natural languages are difficult to process algorithmically because of their complexity and variation. Natural Language Processing (NLP) mainly comprises three tasks:

- Lexical Processing: statistical analysis of the words present in a corpus to draw inferences from it. This can be used for document classification, spam/non-spam classification, etc.
- Syntactic Processing: parsing sentences and their grammatical structure without considering meaning. It includes Parts Of Speech (POS) tagging, information extraction, etc.
- Semantic Processing: extracting meaningful insights from a corpus. It powers higher-level applications such as programs that chat with users to assist them (e.g. customer service).

Here, we focus on the POS tagging part of NLP, using the Viterbi algorithm and regular expressions.

POS Tagging: POS (Parts Of Speech) tagging assigns an English POS tag (in this case) to each word in the given corpus. The purpose of POS tagging in NLP is to check grammatical correctness and to extract the relationships between words.

Viterbi Algorithm: it is based on the assumption that the POS tag of the current word depends on the POS tag of the previous word and on the current word itself (a Hidden Markov Model, HMM). It performs step-by-step maximization of ***P(word_n|POS_n) * P(POS_n|POS_n-1)***. For more insight into the Viterbi algorithm, see: https://www.freecodecamp.org/news/a-deep-dive-into-part-of-speech-tagging-using-viterbi-algorithm-17c8de32e8bc/

### Dependencies
```
# Importing libraries
import nltk
nltk.download('treebank')
from nltk.probability import FreqDist
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import time
import random
from sklearn.model_selection import train_test_split
```
## 1. Reading Dataset

NLTK (Natural Language Toolkit) ships with many corpora (datasets). Here, we use the **treebank corpus** with the universal tag-set: it contains only 12 tags, which are easier to parse and computationally less expensive.
```
# reading the Treebank tagged sentences
nltk_data = list(nltk.corpus.treebank.tagged_sents(tagset='universal'))
len(nltk_data)
```
There are 3914 sentences in the treebank dataset.
```
nltk_data[0:2]
```
The dataset is a list of sentences, where each sentence is a list of tuples. Each tuple contains a word and its POS tag.

Dataset format:

data = [[sent_1], [sent_2], [sent_3],....[sent_n]]

where, sent_n = [(word_1, POS_1), (word_2, POS_2),....(word_n, POS_n)]

## 2. Train-Test Split
```
# Splitting into train and test
random.seed(42)
train_set, test_set = train_test_split(nltk_data, test_size=0.05, random_state=42)  # random_state makes the split reproducible
print('Train set size:', len(train_set))
print(' ')
print('Test set size:', len(test_set))
```
We use only 5% of the data as the test set because the Viterbi tagger implemented below is computationally expensive.

## 3. EDA
```
# train tagged words
train_tagged_words = [x for y in train_set for x in y]
len(train_tagged_words)
```
The train set contains 95558 tagged words in total. Words repeat across the corpus, and the same word can carry different POS tags depending on its role in the sentence.
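Before estimating probabilities, it can help to look at how often each tag occurs among the training words. This is a small illustrative aside (not part of the original analysis) using the `train_tagged_words` list defined above:

```
from collections import Counter

tag_counts = Counter(tag for _, tag in train_tagged_words)
print(tag_counts.most_common())   # NOUN is typically the most frequent tag in this corpus
```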
``` # train_words: words in train set train_words = [x[0] for x in train_tagged_words] print(train_words[:50]) # train set vocabulary train_vocab = set(train_words) print(len(train_vocab)) ``` Total vocabulary size for train set is 12076. ``` # tags from train set tags = set([x[1] for x in train_tagged_words]) print(tags) print(' ') print(len(tags)) ``` Total 12 unique POS tags as per universal tagger standards. ### Developing Tagger as per Viterbi Algorithm Bayes Theorem: **P(POS|word) = P(word|POS)P(POS)/P(word)** Emission Probability = P(word|POS) Transition Probability = P(POS_n|POS_n-1) Where, `P(POS|word)` is conditional probability of the tag for a given word, `p(word|POS)` is the emission probability or conditional probability of the word for given tag, `p(POS_n|POS_n-1)` is the transition probability or conditional probability of the POS tag for the given previous POS tag. ``` # Emission Probability # Here, 'tag' stands for POS tag def emission(word, tag, train = train_tagged_words): tag_list = [x for x in train if x[1]==tag] w_given_tag_list = [x[0] for x in tag_list if x[0]==word] count_tag = len(tag_list) count_w_given_tag = len(w_given_tag_list) return (count_w_given_tag/count_tag) # Transition Probability def transition(t2, t1, train = train_tagged_words): tags = [x[1] for x in train] count_t1 = len([x for x in tags if x==t1]) count_t2_t1 = 0 for index in range(len(tags)-1): if tags[index]==t1 and tags[index+1] == t2: count_t2_t1 += 1 return (count_t2_t1/count_t1) # creating t x t transition matrix of tags # each column is t2, each row is t1 tags_matrix = np.zeros((len(tags), len(tags)), dtype='float32') for i, t1 in enumerate(list(tags)): for j, t2 in enumerate(list(tags)): tags_matrix[i, j] = transition(t2, t1) # convert the matrix to a df for better readability tags_df = pd.DataFrame(tags_matrix, columns = list(tags), index=list(tags)) tags_df # transition map plt.figure(figsize=(15, 15)) sns.heatmap(tags_df, cmap='YlGnBu', annot=True) plt.show() ``` Observation, - DET followed by NOUN is having probability of 0.64. - ADJ followed by NOUN is having probability of 0.7. - PRON followed by VERB is having probability of 0.49. ``` # Developing Viterbi algorithm funcation: Returns words and corresponding predicted POS tag def Viter1(words, train_bag = train_tagged_words): count1 = 0 state = [] for key, word in enumerate(words): p = [] for tag in tags: if key == 0: transition_p = tags_df.loc['.', tag] else: transition_p = tags_df.loc[state[-1], tag] emission_p = emission(word, tag) state_probability = emission_p * transition_p p.append(state_probability) pmax = max(p) T = list(tags) state_max = T[p.index(pmax)] state.append(state_max) # print(count1, word, state_max) count1 += 1 return list(zip(words, state)) start = time.time() test_pred = Viter1([x[0] for y in test_set for x in y]) end = time.time() difference = end-start print("Time taken (in seconds): ", difference) # accuracy check # wrong_pos --> there are words which are wrongly tagged by our tagger test_tagged_words = [x for y in test_set for x in y] wrong_pos = [(i,j,k) for i,j,k in list(zip([x[0] for x in test_tagged_words], [x[1] for x in test_tagged_words], [x[1] for x in test_pred])) if j != k] accuracy1 = 1-len(wrong_pos)/len(test_tagged_words) accuracy1 ``` We are getting accuracy of ~90% on the test set, which is good. But let's try to improve it further. #### Checking unknown words - Words which are not present in train set but are there in test set. These words would be having emission probability of 0. 
- We would be using `Regular Expression` based tagger for unknown words instead of using Viterbi method `Regular Expression` based Tagger: We use a set of rules to define the POS for the given word. Rules can be related to alphabets in word itself. e.g. words ending with 'ing' should be tagged as VERB. ``` unknown_words_pos = [x for x in wrong_pos if x[0] not in train_vocab] unknown_words_pos[0:10] ``` This is a subset of wrongly tagged words which are not present in training set. ``` # Checking distribution of unknown words POs tags FreqDist([x[1] for x in unknown_words_pos]).plot() ``` Observation, - Tagger unable to tag `NOUN`, `VERB`, `ADJ`, `NUM`, `X` and `ADV` from unknown group of words. It is assigning all above kind to `ADP` because `ADP` is the first tag in out transition probability matrix. Next, lets do analysis of words from unknown group and try to develop a set of rules to tag them correctly. #### VERB analysis ``` print([x for x in unknown_words_pos if x[1] =='VERB']) print(' ') print(len([x for x in unknown_words_pos if x[1] =='VERB'])) print([x[0] for x in unknown_words_pos if x[1] =='VERB' and x[0].endswith('ing')]) print(' ') print(len([x[0] for x in unknown_words_pos if x[1] =='VERB' and x[0].endswith('ing')])) print([x[0] for x in unknown_words_pos if x[1] =='VERB' and x[0].endswith('ed')]) print(' ') print(len([x[0] for x in unknown_words_pos if x[1] =='VERB' and x[0].endswith('ed')])) print([x[0] for x in unknown_words_pos if x[1] =='VERB' and x[0].endswith('es')]) print(' ') print(len([x[0] for x in unknown_words_pos if x[1] =='VERB' and x[0].endswith('es')])) ``` We can see many words which are ending with 'ing' or 'ed' or 'es' which are actually verbs but tagged as 'ADP'. Hence we would defined rules for that. ``` patterns_verb = [ (r'.*ing$', 'VERB'), # gerund (r'.*ed$', 'VERB'), # past tense (r'.*es$', 'VERB'), # 3rd singular present ] ``` #### ADV analysis ``` print([x for x in unknown_words_pos if x[1] =='ADV']) print(' ') print(len([x for x in unknown_words_pos if x[1] =='ADV'])) print([x[0] for x in unknown_words_pos if x[1] =='ADV' and x[0].endswith('ly')]) print(' ') print(len([x[0] for x in unknown_words_pos if x[1] =='ADV' and x[0].endswith('ly')])) ``` We can see many words which are ending with 'ly' which are actually 'ADV' but tagged as 'ADP'. Hence we would defined rule for that. ``` patterns_adv = [ (r'.*ly$', 'ADV') ] ``` #### X Analysis ``` print([x for x in unknown_words_pos if x[1] =='X']) print(' ') print(len([x for x in unknown_words_pos if x[1] =='X'])) ``` These are the foreign words which should be tagged as 'X' but are predicted as 'ADP'. We can't define as any rule for such words. Hence, we would leave them as it is. #### NOUN Analysis ``` print([x for x in unknown_words_pos if x[1] =='NOUN']) print(' ') print(len([x for x in unknown_words_pos if x[1] =='NOUN'])) print([x[0] for x in unknown_words_pos if x[1] =='NOUN' and x[0].endswith('s')]) print(' ') print(len([x[0] for x in unknown_words_pos if x[1] =='NOUN' and x[0].endswith('s')])) ``` We can see many words which are ending with 's' which are actually 'NOUN' but tagged as 'ADP'. Hence we would defined rule for that. 
``` patterns_noun = [ (r'.*s$', 'NOUN'), # plural nouns ] ``` #### ADJ Analysis ``` print([x for x in unknown_words_pos if x[1] =='ADJ']) print(' ') print(len([x for x in unknown_words_pos if x[1] =='ADJ'])) print([x[0] for x in unknown_words_pos if x[1] =='ADJ' and x[0].endswith('est')]) print(' ') print(len([x[0] for x in unknown_words_pos if x[1] =='ADJ' and x[0].endswith('est')])) ``` There are a few words ending with 'est' should be considered as 'ADJ'. These words are very less in count. Hence, we would not try to assign any rule. This is to avoid any sort of over-fitting and try to define as generic rules as possible. #### NUM Analysis ``` print([x for x in unknown_words_pos if x[1] =='NUM']) print(' ') print(len([x for x in unknown_words_pos if x[1] =='NUM'])) patterns_numbers = [ (r'^-?[0-9]+(.[0-9]+)?$', 'NUM'), # cardinal numbers ] ``` #### Final pattern for Regex We need to assign some POS for the words which aren't aligned to any rule. Since we know that most of the words which are unknown belong to `NOUN` category, we would be using that as default tag. ``` # Final pattern pattern = [ (r'.*ing$', 'VERB'), # gerund (r'.*ed$', 'VERB'), # past tense (r'.*es$', 'VERB'), # 3rd singular present (r'.*ly$', 'ADV'), # gerund (r'^-?[0-9]+(.[0-9]+)?$', 'NUM'), # cardinal numbers (r'.*s$', 'NOUN'), # plural nouns (r'.*', 'NOUN') # nouns ] # Defining Regex based tagger regexp_tagger = nltk.RegexpTagger(pattern) # Viterbi and Regex tagger: Viterbi with added condition for unknown words def Viter2(words, train_bag = train_tagged_words): count1 = 0 state = [] T = list(tags) for key, word in enumerate(words): if word in train_vocab: p = [] for tag in tags: if key == 0: transition_p = tags_df.loc['.', tag] else: transition_p = tags_df.loc[state[-1], tag] emission_p = emission(word, tag) state_probability = emission_p * transition_p p.append(state_probability) pmax = max(p) state_max = T[p.index(pmax)] else: # print('unknown trigger!') state_max = regexp_tagger.tag([word])[0][1] state.append(state_max) # print(count1, word, state_max) count1 += 1 return list(zip(words, state)) start = time.time() test_pred2 = Viter2([x[0] for y in test_set for x in y]) end = time.time() difference = end-start print("Time taken in seconds: ", difference) # accuracy wrong_pos2 = [(i,j,k) for i,j,k in list(zip([x[0] for x in test_tagged_words], [x[1] for x in test_tagged_words], [x[1] for x in test_pred2])) if j != k] accuracy2 = 1-len(wrong_pos2)/len(test_tagged_words) accuracy2 ``` We can observe some improvement in the accuracy a compared to only Viterbi based model. ``` unknown_words_pos2 = [x for x in wrong_pos2 if x[0] not in train_vocab] print(unknown_words_pos2) print(' ') print(len(unknown_words_pos2)) # Checking distribution of unknown words POS tags FreqDist([x[1] for x in unknown_words_pos2]).plot() print([x for x in unknown_words_pos2 if x[1] =='ADJ']) print(' ') print(len([x for x in unknown_words_pos2 if x[1] =='ADJ'])) ``` Here, we don't see any further possible rules for 'ADJ' words. Hence, we would finalize `Viter2` as model for POS tagging. #### Testing on new sentences ``` # input sentence sent = 'Hurray!!! This is summer time. Let us have fun!' # Dependancy from nltk import word_tokenize # tokenize sentence sent_tokens = word_tokenize(sent) sent_tokens Viter2(sent_tokens) ``` ## Conclusion - Viterbi is a powerful algorithm for POS tagging. It is time consuming but gives accurate results. - Viterbi can be further improved using algorithms like Regex tagger. ---------------------
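As a follow-up to the note above that the tagger is time-consuming: `emission(word, tag)` rescans the full training list on every call, which is the main source of the cost. A possible speed-up (a sketch of my own, not part of the original notebook) is to precompute the counts once with nested counters so each lookup is O(1):

```
from collections import Counter, defaultdict

# One pass over the training data: count tag and (tag, word) occurrences
tag_counts = Counter(tag for _, tag in train_tagged_words)
word_given_tag = defaultdict(Counter)
for word, tag in train_tagged_words:
    word_given_tag[tag][word] += 1

def emission_fast(word, tag):
    """Same quantity as emission(word, tag), served from precomputed counts."""
    return word_given_tag[tag][word] / tag_counts[tag]

# Spot check against the original function (should match exactly), e.g.:
# assert abs(emission('the', 'DET') - emission_fast('the', 'DET')) < 1e-12
```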
# Portfolio Optimization ``` # Import libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt # !pip install pandas-datareader import pandas_datareader.data as web import datetime ``` ### Collect Data Our investment universe consists of 20 stocks (n=20) all of which are constituents of the S&P 500. The stock symbols are F (Ford Motor Co.), CAT (Catepillar Inc.), DIS, MCD, KO, PEP, WMT, C, WFC, JPM, AAPL, IBM, PFE, JNJ, XOM, MRO, ED, T, VZ and NEM. ``` stocks = ["F", "CAT", "DIS", "MCD", "KO", "PEP", "WMT", "C", "WFC", "JPM", "AAPL", "IBM", "PFE", "JNJ", "XOM", "MRO", "ED", "T", "VZ", "NEM"] ``` Collect historical stock price from 30-Dec-2004 to 30-Sep-2008 ``` start = datetime.datetime(2004,12,30) end = datetime.datetime(2008,9,30) # pandas_datareader offers remote data access daily_price = {} for i in stocks: info = web.DataReader(i, "yahoo", start, end) # extract daily adjusted closing prices info_price = {i : info['Adj Close']} daily_price.update(info_price) daily_price = pd.DataFrame(daily_price) daily_price ``` ### Parameter Estimation (Monthly) Using monthly (last trading day of each month) adjusted closing prices for each stock from 30-Dec-2004 to 30-Sep-2008 compute the sample mean and sample variance for each stock. Also, compute sample covariances. ``` # Only require adjusted closing prices of last trading day of each month asset_monthly_return = daily_price.resample('M').ffill().pct_change() asset_monthly_return # Compute Geometric & Arithmetic Mean and sample variance return_add1 = (asset_monthly_return + 1 ) Geo_mean = return_add1.prod(axis=0) ** (1/45) - 1 Ari_mean = asset_monthly_return.mean() # Arithmetic Mean and sample variance pd.DataFrame({'Arithmetic Mean' : Ari_mean, 'Geometric Mean' : Geo_mean, 'Sample variance' : asset_monthly_return.var() }) # use geometric mean as expected return mu = Geo_mean # Save file # mu.to_csv('mu.csv') # Compute sample covariance Q = asset_monthly_return.cov() Q # Save file # Q.to_csv('Q.csv') ``` ### Risk-Free Rate ``` # Compute risk-free rate by using historical data of 13 Week Treasury Bill IRX_daily_price = web.DataReader("^IRX", "yahoo", start, end)['Adj Close'][:-1] IRX_monthly_price = IRX_daily_price.resample('M').ffill().mean() rf = IRX_monthly_price / (12*100) rf ``` ### Risk Aversion set to the quantity that Idzorek (see Black-Litterman paper and class slides) suggests (in this case the market (mkt) is the group of 20 stocks listed above) ``` # Collect historical market capitalization of assets from yahoo finance (US site) # 30-Sep-2008 asset_market_cap = np.array([12350000000.0, 35952736685.0, 55944801000.0, 68767419589.0, 122232012230.0, 111965170000.0, 235605845278.0, 111770063349.0, 124645335146.0, 174048427281.0, 100688632743.0, 157130846042.0, 124339728868.0, 193602657107.0, 395029171280.0, 28148220000.0, 11750739423.0, 164532560000.0, 95230608718.0, 17024528133.0]) mkt_portfolio = asset_market_cap / asset_market_cap.sum() mkt_portfolio # Estimate market return and variance expected_mkt__return = mu @ mkt_portfolio mkt_var = mkt_portfolio.T @ Q @ mkt_portfolio # Compute risk aversion risk_aversion = (expected_mkt__return - rf) / mkt_var # Results pd.DataFrame({'Output' : [rf,expected_mkt__return,mkt_var,risk_aversion] },index=['monthly risk-free rate','Expected Market return', 'Market Variance','risk_aversion']) ``` ## Portfolio Optimization (use Matlab) #### (1) Mean-variance optimization (MVO) \begin{equation} \begin{array}{rl} \displaystyle \min_{x} & \lambda x^TQx-\mu ^Tx \\ s.t. 
& 1^Tx = 1\\ \end{array} \end{equation} where $\mu \in R^n$is the vector of expected returns, $Q \in R^{n \times n}$ is the covariance matrix, and $e \in R^n$ is a vector of n ones. Short selling is allowed. $\lambda > 0$ is a risk aversion parameter. ``` MVO_with_short = np.array([-0.2009, -0.0264, 0.6748, 1.2014, -0.9811, 0.6750, 0.0692, -1.4137, 1.5304, 0.5058, 0.0411, -0.7729, -1.2104, -0.0935, 0.9144, 0.3581, -0.5307, 0.6501, -0.3298, -0.0609]) MVO_without_short = np.array([0.0000, 0.0001, 0.0001, 0.3881, 0.0001, 0.1978, 0.0769, 0.0000, 0.1203, 0.0001, 0.0630, 0.0000, 0.0000, 0.0001, 0.0001, 0.1531, 0.0001, 0.0001, 0.0000, 0.0000]) ``` #### (2) Robust mean-variance optimization (Ellipsoidal uncertainty sets) \begin{equation} \begin{array}{rl} \displaystyle \min_{x} & \lambda x^TQx-\mu ^Tx + \epsilon_2 |\Theta^{1/2}x|_2\\ s.t. & 1^Tx = 1\\ & (x \geq 0) \end{array} \end{equation} ``` # Confidence levels of 90% RMVO_90_with_short = np.array([-0.0247, 0.0320, 0.0554, 0.1094, 0.1351, 0.1553, 0.0917, -0.0377, 0.0416, 0.0295, 0.0230, 0.0366, -0.0089, 0.1291, 0.0604, 0.0350, 0.0843, 0.0342, 0.0057, 0.0129]) RMVO_90_without_short = np.array([0.0000, 0.0302, 0.0531, 0.0997, 0.1280, 0.1469, 0.0843, 0.0000, 0.0356, 0.0243, 0.0213, 0.0341, 0.0002, 0.1238, 0.0550, 0.0324, 0.0790, 0.0323, 0.0067, 0.0131]) # Confidence levels of 95% RMVO_95_with_short = np.array([-0.0226, 0.0320, 0.0564, 0.1056, 0.1338, 0.1522, 0.0901, -0.0340, 0.0402, 0.0288, 0.0221, 0.0370, -0.0057, 0.1301, 0.0588, 0.0336, 0.0850, 0.0345, 0.0085, 0.0134]) RMVO_95_without_short = np.array([0.0000, 0.0304, 0.0542, 0.0974, 0.1275, 0.1451, 0.0837, 0.0000, 0.0351, 0.0243, 0.0207, 0.0347, 0.0003, 0.1252, 0.0543, 0.0315, 0.0802, 0.0328, 0.0090, 0.0135]) ``` #### (3) Risk Parity optimization with no short selling \begin{equation} \begin{array}{rl} \displaystyle \min_{x,\theta} & \sum_{i=1}^n (x_i(Qx)_i -\theta)^2\\ s.t. 
& 1^Tx = 1\\
& (x \geq 0)
\end{array}
\end{equation}

```
RP = np.array([0.0450, 0.0500, 0.0512, 0.0500, 0.0513, 0.0515, 0.0513, 0.0495, 0.0518, 0.0509,
               0.0425, 0.0507, 0.0510, 0.0514, 0.0499, 0.0491, 0.0513, 0.0503, 0.0504, 0.0511])
```

#### (4) Market Portfolio (based on market capitalizations)

```
# more detail is given in the Parameter Estimation section
mkt_portfolio
```

Combine all portfolios

```
all_portfolios = pd.DataFrame({'MVO no short' : MVO_without_short,
                               'MVO short' : MVO_with_short,
                               'RMVO_90 no short' : RMVO_90_without_short,
                               'RMVO_95 no short' : RMVO_95_without_short,
                               'RMVO_90 short' : RMVO_90_with_short,
                               'RMVO_95 short' : RMVO_95_with_short,
                               'ERC' : RP,
                               'Market Portfolio':mkt_portfolio
                              },index=stocks)
all_portfolios
```

##### Plots

```
# Plot Portfolio Weights of MVO without short selling
all_portfolios.iloc[:,0:1].T.plot.bar(figsize=(10,6), stacked=True, title ='')
plt.title('Portfolio Weights')
plt.xticks(rotation=0.1)
plt.show()

# Plot Portfolio Weights of robust MVO with and without short selling
all_portfolios.iloc[:,2:6].T.plot.bar(figsize=(10,6), stacked=True, title ='')
# plt.ylim([-10,10])
plt.title('Portfolio Weights')
plt.xticks(rotation=0.1)
plt.show()

# Plot Portfolio Weights of ERC and market portfolio
all_portfolios.iloc[:,6:].T.plot.bar(figsize=(10,6), stacked=True, title ='')
plt.title('Portfolio Weights')
plt.xticks(rotation=0.1)
plt.show()
```

## PART A

```
end2 = datetime.datetime(2008,11,5)

# pandas_datareader offers remote data access
daily_price2 = {}
for i in stocks:
    info = web.DataReader(i, "yahoo", start, end2)
    # extract daily adjusted closing prices
    info_price = {i : info['Adj Close']}
    daily_price2.update(info_price)
daily_price2 = pd.DataFrame(daily_price2)

# Compute asset monthly return
asset_monthly_return2 = daily_price2.resample('M').ffill().pct_change()
asset_monthly_return2.tail()
```

##### Compute the major portfolio quantities for each portfolio for the month of Oct. 2008

1. portfolio return
2. portfolio variance and standard deviation
3. Sharpe ratio

Discuss the results, i.e. explain why one portfolio did better than another on these portfolio dimensions. For example, why was the Sharpe ratio of a portfolio from a particular strategy better than that of a portfolio generated by a different strategy, and which portfolio did best (worst) on return and why?

```
def compute_quantity (asset_return):
    # 1. portfolio return
    port_return = all_portfolios.T @ asset_return
    port_return = np.array(port_return)

    # 2. portfolio variance and standard deviation
    port_var = np.diag(all_portfolios.T @ Q @ all_portfolios)
    port_sd = np.sqrt(port_var)

    # 3.
Sharpe ratio sharp_ratio = (port_return - rf) / port_sd results = pd.DataFrame({'portfolio return' : port_return, 'portfolio var' : port_var, 'portfolio sd' : port_sd, 'Sharpe ratio' : sharp_ratio}, index = portfolios) return results portfolios = ["MVO_without_short", "MVO_with_short", "RMVO_90_without_short", "RMVO_95_without_short", "RMVO_90_with_short", "RMVO_95_with_short", "RP", "mkt_portfolio" ] # Realized return for each stock for the month of Oct of 2008 oct_asset_return = asset_monthly_return2.iloc[-2,:] # Save file # oct_asset_return.to_csv('oct_asset_return.csv') oct_result = compute_quantity (oct_asset_return) oct_result ``` ## PART B ``` # Realized return for each stock for the month of Nov of 2008 nov_asset_return = asset_monthly_return2.iloc[-1,:] nov_result = compute_quantity (nov_asset_return) nov_result # Realized return for each stock for selected date before crisis # asset_monthly_return2.iloc[11,:] select_result = compute_quantity (asset_monthly_return2.iloc[11,:]) select_result # Compare Sharpe ratio of different periods pd.DataFrame({'Oct 2008' : oct_result['Sharpe ratio'], 'Nov 2008' : nov_result['Sharpe ratio'], 'Before Crisis' : select_result['Sharpe ratio'] } ) ``` ## Discussion ##### Daily asset price ``` end3 = datetime.datetime(2012,11,5) # pandas_datareader offers remote data access daily_price3 = {} for i in stocks: info = web.DataReader(i, "yahoo", start, end3) # extract daily adjusted closing prices info_price = {i : info['Adj Close']} daily_price3.update(info_price) daily_price3 = pd.DataFrame(daily_price3) # plot daily adjusted closing price of assets daily_price3.plot(figsize=(12,7),title='Daily Adjusted Closing Price') # plt.ylim([0,200]) plt.show() # zoomed plot daily_price3.plot(figsize=(12,7),title='Daily Adjusted Closing Price') plt.ylim([0,175]) plt.show() ```
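The portfolio weights in the optimization sections above were produced in Matlab and pasted in as NumPy arrays. As a cross-check, the basic MVO problem (1) can also be solved directly in Python. The sketch below is not part of the original workflow: it assumes `mu`, `Q`, `risk_aversion`, and `stocks` from the estimation cells above, takes lambda to be the estimated risk aversion (the Matlab runs may have used a different value), and needs `scipy` as an extra dependency, so the resulting weights can differ from the Matlab output.

```
# Cross-check of problem (1) with scipy (assumption: mu, Q, risk_aversion and
# stocks come from the cells above; lambda = estimated risk aversion).
from scipy.optimize import minimize

mu_v = np.asarray(mu)              # expected returns, shape (20,)
Q_m = np.asarray(Q)                # sample covariance matrix, shape (20, 20)
lam = float(risk_aversion)         # risk-aversion parameter

def mvo_objective(x):
    return lam * x @ Q_m @ x - mu_v @ x

n = len(mu_v)
x0 = np.full(n, 1.0 / n)                                   # start from equal weights
budget = {'type': 'eq', 'fun': lambda x: x.sum() - 1.0}    # 1'x = 1

# Short selling allowed (no bounds); add bounds=[(0, None)] * n for the no-short case.
mvo_check = minimize(mvo_objective, x0, method='SLSQP', constraints=[budget])
pd.Series(mvo_check.x, index=stocks)
```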
```
import pandas
import dask.dataframe as daskDataFrame

person_IDs = [1,2,3,4,5,6,7,8,9,10]
person_last_names = ['Smith', 'Williams', 'Williams','Jackson','Johnson','Smith','Anderson','Christiansen','Carter','Davidson']
person_first_names = ['John', 'Bill', 'Jane','Cathy','Stuart','James','Felicity','Liam','Nancy','Christina']
person_DOBs = ['1982-10-06', '1990-07-04', '1989-05-06', '1974-01-24', '1995-06-05', '1984-04-16', '1976-09-15', '1992-10-02', '1986-02-05', '1993-08-11']

peoplePandasDataFrame = pandas.DataFrame({'Person ID':person_IDs,
              'Last Name': person_last_names,
              'First Name': person_first_names,
              'Date of Birth': person_DOBs},
              columns=['Person ID', 'Last Name', 'First Name', 'Date of Birth'])

peopleDaskDataFrame = daskDataFrame.from_pandas(peoplePandasDataFrame, npartitions=2)
print(peopleDaskDataFrame.compute())

print(peopleDaskDataFrame.divisions)
print(peopleDaskDataFrame.npartitions)
```

### map_partitions (each partition is passed to the function as a single argument)

The output above shows a couple of useful attributes of Dask DataFrames that can be used to inspect how a DataFrame is partitioned. The first attribute, divisions, here (0, 5, 9), shows the boundaries of the partitioning scheme (remember that partitions are created on the index). This might look strange since there are two partitions but three boundaries. Each partition’s boundary consists of a pair of numbers from the list of divisions. The boundary for the first partition is “from 0 up to (but not including) 5,” meaning it will contain rows 0, 1, 2, 3, and 4. The boundary for the second partition is “from 5 through (and including) 9,” meaning it will contain rows 5, 6, 7, 8, and 9. The last partition always includes the upper boundary, whereas the other partitions go up to but don’t include their upper boundary.

```
peopleDaskDataFrame.map_partitions(len).compute()

people_filtered = peopleDaskDataFrame[peopleDaskDataFrame['Last Name'] != 'Williams']
print(people_filtered.map_partitions(len).compute())
print(type(people_filtered.map_partitions(len)))

people_filtered.compute()

### applymap
dfa = pandas.DataFrame([[1, 2.12], [3.356, 4.567]])
print(type(dfa))
print(dfa)

def mySquare(x):
    print('x => ', x)
    print('type(x) => ', type(x))

dfa[1].map(mySquare)

dfa.applymap(mySquare)

dfa.applymap(lambda x: len(str(x)))

dfa[0]

peopleDaskDataFrame.compute()

peopleDaskDataFrame = peopleDaskDataFrame.set_index('First Name')

peopleDaskDataFrame.compute()

count = 0
def myF(row):
    print("type(row) => ", type(row))
    print("row.dtypes => ", row.dtypes)
    print("row['Last Name'] => ", row['Last Name'])
    print("row => \n", row)
    # count = count+1
    return row['Person ID'] * 10

# a = peopleDaskDataFrame.applymap(myF)
a = peopleDaskDataFrame.apply(myF, axis=1)
a.nlargest(3).compute()
```
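To make the partition boundaries described above concrete, `get_partition` can be used to materialize each partition separately. This check is not in the original notebook; it rebuilds the Dask DataFrame from `peoplePandasDataFrame` because `peopleDaskDataFrame` has been re-indexed by 'First Name' by this point.

```
# Inspect each partition on the original integer index (rows 0-4 and 5-9).
people_by_id = daskDataFrame.from_pandas(peoplePandasDataFrame, npartitions=2)

print(people_by_id.divisions)                   # (0, 5, 9)
print(people_by_id.get_partition(0).compute())  # first partition: rows 0 through 4
print(people_by_id.get_partition(1).compute())  # second partition: rows 5 through 9
```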
# PyTorch CNN Classifier

To run this notebook on another benchmark, use

```
papermill utils/torch_cnn_classifier.ipynb torch_cnn_experiments/[DATASET NAME].ipynb -p DATASET [DATASET NAME]
```

```
# DATASET = 'no_dataset'
DATASET = 'demo_human_or_worm'
VERSION = 0
BATCH_SIZE = 64
EPOCHS = 1

# Parameters
DATASET = "human_enhancers_ensembl"
EPOCHS = 10

print(DATASET, VERSION, BATCH_SIZE, EPOCHS)
```

## Config

```
import os

import numpy as np
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchtext.data.utils import get_tokenizer

from genomic_benchmarks.data_check import is_downloaded, info
from genomic_benchmarks.dataset_getters.pytorch_datasets import get_dataset
from genomic_benchmarks.loc2seq import download_dataset
from genomic_benchmarks.models.torch import CNN
from genomic_benchmarks.dataset_getters.utils import coll_factory, LetterTokenizer, build_vocab, check_seq_lengths, check_config, VARIABLE_LENGTH_DATASETS

USE_PADDING = DATASET in VARIABLE_LENGTH_DATASETS
```

## Choose the dataset

```
if not is_downloaded(DATASET):
    download_dataset(DATASET, local_repo=True)

info(DATASET, local_repo=True)

train_dset = get_dataset(DATASET, 'train')

NUM_CLASSES = len(set([train_dset[i][1] for i in range(len(train_dset))]))
NUM_CLASSES
```

## Tokenizer and vocab

```
tokenizer = get_tokenizer(LetterTokenizer())
vocabulary = build_vocab(train_dset, tokenizer, use_padding=USE_PADDING)

print("vocab len:", vocabulary.__len__())
print(vocabulary.get_stoi())
```

## Dataloader and batch preparation

```
# Run on GPU or CPU
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print('Using {} device'.format(device))

max_seq_len, nn_input_len = check_seq_lengths(dataset=train_dset, use_padding=USE_PADDING)

# Data Loader
if(USE_PADDING):
    collate = coll_factory(vocabulary, tokenizer, device, pad_to_length = nn_input_len)
else:
    collate = coll_factory(vocabulary, tokenizer, device, pad_to_length = None)

train_loader = DataLoader(train_dset, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate)
```

## Model

```
model = CNN(
    number_of_classes=NUM_CLASSES,
    vocab_size=vocabulary.__len__(),
    embedding_dim=100,
    input_len=nn_input_len,
    device=device
).to(device)
```

## Training

```
model.fit(train_loader, epochs=EPOCHS)
```

## Testing

```
test_dset = get_dataset(DATASET, 'test')
test_loader = DataLoader(test_dset, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate)

acc, f1 = model.test(test_loader)
acc, f1
```
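The same parameterized run can be launched from Python instead of the shell. The sketch below uses papermill's `execute_notebook` API; the paths are assumptions based on the command shown above, so adjust them to the actual repository layout.

```
# Python-API equivalent of the papermill CLI call above (paths are illustrative).
import papermill as pm

pm.execute_notebook(
    "utils/torch_cnn_classifier.ipynb",
    "torch_cnn_experiments/human_enhancers_ensembl.ipynb",
    parameters={"DATASET": "human_enhancers_ensembl", "EPOCHS": 10},
)
```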
# MNIST classification with Vowpal Wabbit

```
from __future__ import division
import re
import numpy as np
from sklearn.metrics import confusion_matrix

import matplotlib.pyplot as plt
%matplotlib inline
#%qtconsole
```

## Train

I found some help with parameters here:

* https://github.com/JohnLangford/vowpal_wabbit/wiki/Tutorial
* https://github.com/JohnLangford/vowpal_wabbit/wiki/Command-line-arguments

--cache_file train.vw.cache converts data/mnist_train.vw to a binary cache file so that later passes and future runs are faster. Next time we go through the model building, we will use the cache file and not the text file.

--passes is the number of passes

--oaa 10 selects the one-against-all (oaa) learning algorithm with 10 classes (labels 1 to 10)

-q ii creates interaction features between the variables in the two referenced namespaces, which here are the same, i.e. the 'image' namespace (referred to by its first letter, i). An interaction variable is created from two variables 'A' and 'B' by multiplying the values of 'A' and 'B'.

-f mnist_train.model is the file where the model will be saved.

-b sets the number of bits in the feature table. The default is 18, but since the interaction features greatly increase the number of features, the value of '-b' has been increased (to 19 in the command below).

-l rate adjusts the learning rate. Defaults to 0.5

--power_t p specifies the power on the learning rate decay, where p is in the range [0,1]. 0 means the learning rate does not decay, which can be helpful for state tracking, while 1 is very aggressive. Defaults to 0.5

```
!rm train.vw.cache
!rm mnist_train.model

!vw -d data/mnist_train.vw -b 19 --oaa 10 -f mnist_train.model -q ii --passes 30 -l 0.4 --early_terminate 3 --cache_file train.vw.cache --power_t 0.6
```

## Predict

-t is for the test file

-i specifies the model file created earlier

-p is where to store the class predictions [1,10]

```
!rm predict.txt

!vw -t data/mnist_test.vw -i mnist_train.model -p predict.txt
```

## Analyze

```
y_true=[]
with open("data/mnist_test.vw", 'rb') as f:
    for line in f:
        m = re.search('^\d+', line)
        if m:
            found = m.group()
            y_true.append(int(found))

y_pred = []
with open("predict.txt", 'rb') as f:
    for line in f:
        m = re.search('^\d+', line)
        if m:
            found = m.group()
            y_pred.append(int(found))

target_names = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10"] # NOTE: plus one

def plot_confusion_matrix(cm, target_names, title='Proportional Confusion matrix: VW on 784 pixels', cmap=plt.cm.Paired):
    """
    given a confusion matrix (cm), make a nice plot
    see the scikit-learn documentation for the original done for the iris dataset
    """
    plt.figure(figsize=(8, 6))
    # normalize each row (true class) by its total before plotting
    plt.imshow((cm/cm.sum(axis=1)[:, np.newaxis]), interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(target_names))
    plt.xticks(tick_marks, target_names, rotation=45)
    plt.yticks(tick_marks, target_names)
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

cm = confusion_matrix(y_true, y_pred)
print(cm)

model_accuracy = sum(cm.diagonal())/len(y_pred)
model_misclass = 1 - model_accuracy
print("\nModel accuracy: {0}, model misclass rate: {1}".format(model_accuracy, model_misclass))

plot_confusion_matrix(cm, target_names)
```
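Beyond overall accuracy and the confusion matrix, a per-class breakdown makes it clearer which digits the model confuses. The snippet below is an optional addition that reuses `y_true`, `y_pred`, and `target_names` from the cells above.

```
# Optional: per-class precision/recall/F1 for the VW predictions above.
from sklearn.metrics import classification_report

print(classification_report(y_true, y_pred, target_names=target_names))
```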
``` #Import libraries from __future__ import print_function import matplotlib.pyplot as plt import pandas as pd import numpy as np import seaborn as sns from sklearn.metrics import confusion_matrix, classification_report, mean_squared_error, mean_absolute_error, r2_score from matplotlib.colors import ListedColormap from sklearn.datasets import make_classification, make_moons, make_circles from sklearn.linear_model import LogisticRegression from keras.models import Sequential from keras.layers import Dense, Dropout, BatchNormalization, Activation from keras.optimizers import Adam from keras.utils.np_utils import to_categorical from sklearn.preprocessing import StandardScaler, LabelEncoder, OneHotEncoder, MinMaxScaler from sklearn.model_selection import train_test_split, cross_val_score, StratifiedKFold, KFold import keras.backend as K #Define helper functions def plot_decision_boundary(func, X, y, figsize=(9, 6)): amin, bmin = X.min(axis=0) - 0.1 amax, bmax = X.max(axis=0) + 0.1 hticks = np.linspace(amin, amax, 101) vticks = np.linspace(bmin, bmax, 101) aa, bb = np.meshgrid(hticks, vticks) ab = np.c_[aa.ravel(), bb.ravel()] c = func(ab) cc = c.reshape(aa.shape) cm = plt.cm.RdBu cm_bright = ListedColormap(['#FF0000', '#0000FF']) fig, ax = plt.subplots(figsize=figsize) contour = plt.contourf(aa, bb, cc, cmap=cm, alpha=0.8) ax_c = fig.colorbar(contour) ax_c.set_label("$P(y = 1)$") ax_c.set_ticks([0, 0.25, 0.5, 0.75, 1]) plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cm_bright) plt.xlim(amin, amax) plt.ylim(bmin, bmax) def plot_multiclass_decision_boundary(model, X, y): x_min, x_max = X[:, 0].min() - 0.1, X[:, 0].max() + 0.1 y_min, y_max = X[:, 1].min() - 0.1, X[:, 1].max() + 0.1 xx, yy = np.meshgrid(np.linspace(x_min, x_max, 101), np.linspace(y_min, y_max, 101)) cmap = ListedColormap(['#FF0000', '#00FF00', '#0000FF']) Z = model.predict_classes(np.c_[xx.ravel(), yy.ravel()], verbose=0) Z = Z.reshape(xx.shape) fig = plt.figure(figsize=(8, 8)) plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8) plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.RdYlBu) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) def plot_data(X, y, figsize=None): if not figsize: figsize = (8, 6) plt.figure(figsize=figsize) plt.plot(X[y==0, 0], X[y==0, 1], 'or', alpha=0.5, label=0) plt.plot(X[y==1, 0], X[y==1, 1], 'ob', alpha=0.5, label=1) plt.xlim((min(X[:, 0])-0.1, max(X[:, 0])+0.1)) plt.ylim((min(X[:, 1])-0.1, max(X[:, 1])+0.1)) plt.legend() def plot_loss_accuracy(history): historydf = pd.DataFrame(history.history, index=history.epoch) plt.figure(figsize=(8, 6)) historydf.plot(ylim=(0, max(1, historydf.values.max()))) loss = history.history['loss'][-1] acc = history.history['acc'][-1] plt.title('Loss: %.3f, Accuracy: %.3f' % (loss, acc)) def plot_loss(history): historydf = pd.DataFrame(history.history, index=history.epoch) plt.figure(figsize=(8, 6)) historydf.plot(ylim=(0, historydf.values.max())) plt.title('Loss: %.3f' % history.history['loss'][-1]) def plot_confusion_matrix(model, X, y): y_pred = model.predict_classes(X, verbose=0) plt.figure(figsize=(8, 6)) sns.heatmap(pd.DataFrame(confusion_matrix(y, y_pred)), annot=True, fmt='d', cmap='YlGnBu', alpha=0.8, vmin=0) def make_multiclass(N=500, D=2, K=3): """ N: number of points per class D: dimensionality K: number of classes """ np.random.seed(0) X = np.zeros((N*K, D)) y = np.zeros(N*K) for j in range(K): ix = range(N*j, N*(j+1)) # radius r = np.linspace(0.0,1,N) # theta t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 X[ix] = np.c_[r*np.sin(t), 
r*np.cos(t)] y[ix] = j fig = plt.figure(figsize=(6, 6)) plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.RdYlBu, alpha=0.8) plt.xlim([-1,1]) plt.ylim([-1,1]) return X, y def plot_compare_histories(history_list, name_list, plot_accuracy=True): dflist = [] for history in history_list: h = {key: val for key, val in history.history.items() if not key.startswith('val_')} dflist.append(pd.DataFrame(h, index=history.epoch)) historydf = pd.concat(dflist, axis=1) metrics = dflist[0].columns idx = pd.MultiIndex.from_product([name_list, metrics], names=['model', 'metric']) historydf.columns = idx plt.figure(figsize=(6, 8)) ax = plt.subplot(211) historydf.xs('loss', axis=1, level='metric').plot(ylim=(0,1), ax=ax) plt.title("Loss") if plot_accuracy: ax = plt.subplot(212) historydf.xs('acc', axis=1, level='metric').plot(ylim=(0,1), ax=ax) plt.title("Accuracy") plt.xlabel("Epochs") plt.tight_layout() #To keep things simple, I won’t perform the standard practices of separating out the data to training and test sets, or performing k-fold cross-validation #creating the graph X, y = make_classification(n_samples=1000, n_features=2, n_redundant=0, n_informative=2, random_state=7, n_clusters_per_class=1) plot_data(X, y) #splitting the graph with a boundary lr = LogisticRegression() lr.fit(X, y) print('LR coefficients:', lr.coef_) print('LR intercept:', lr.intercept_) plot_data(X, y) limits = np.array([-2, 2]) boundary = -(lr.coef_[0][0] * limits + lr.intercept_[0]) / lr.coef_[0][1] plt.plot(limits, boundary, "g-", linewidth=2) #coef_ --> Coefficient of the features in the decision function. #Using ANN #Define the model model = Sequential() #Add a layer model.add(Dense(units=1, input_shape=(2,), activation='sigmoid')) #Compile model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) #Fit history = model.fit(x=X, y=y, verbose=0, epochs=50) #Plot plot_loss_accuracy(history) #Plot of the decision boundary #The various shades of blue and red represent the probability of a hypothetical point in that area belonging to class 1 or 0. plot_decision_boundary(lambda x: model.predict(x), X, y) #Classification report showing the precision and recall of our model y_pred = model.predict_classes(X, verbose=0) print(classification_report(y, y_pred)) #confusion matrix plot_confusion_matrix(model, X, y) #This time I will create a more complex graph to show the power of ANN X, y = make_circles(n_samples=1000, noise=0.05, factor=0.3, random_state=0) plot_data(X, y) #Build a similar model as before model = Sequential() model.add(Dense(1, input_shape=(2,), activation='sigmoid')) model.compile('adam', 'binary_crossentropy', metrics=['accuracy']) history = model.fit(X, y, verbose=0, epochs=100) plot_loss_accuracy(history) #Plot of the decision boundary #The various shades of blue and red represent the probability of a hypothetical point in that area belonging to class 1 or 0. plot_decision_boundary(lambda x: model.predict(x), X, y) #Classification report showing the precision and recall of our model y_pred = model.predict_classes(X, verbose=0) print(classification_report(y, y_pred)) #confusion matrix plot_confusion_matrix(model, X, y) #We can see that clearly this model wasn't good enough. 
But we can do much better with a more complex model #Define the model model = Sequential() model.add(Dense(units=4, input_shape=(2,), activation='tanh')) model.add(Dense(2, activation='tanh')) model.add(Dense(1, activation='sigmoid')) #compile the model model.compile(Adam(lr=0.01), 'binary_crossentropy', metrics=['accuracy']) #fit history=model.fit(X, y, verbose=0, epochs=50) #plot plot_loss_accuracy(history) #Plot decision boundary plot_decision_boundary(lambda x: model.predict(x), X, y) #Predict y_pred = model.predict_classes(X, verbose=0) print(classification_report(y, y_pred)) plot_confusion_matrix(model, X, y) #Create a more complex plot #We will use softmax regression this time - which is a generalization of logistic regression to the case where we want to handle multiple classes X, y = make_multiclass(K=3) #Define the model model = Sequential() model.add(Dense(64, input_shape=(2,), activation='tanh')) model.add(Dense(32, activation='tanh')) model.add(Dense(16, activation='tanh')) model.add(Dense(3, activation='softmax')) #Compile model.compile('adam', 'categorical_crossentropy', metrics=['accuracy']) #Change to one-hot representation y_cat = to_categorical(y) #fit history = model.fit(X, y_cat, verbose=0, epochs=50) #plot plot_loss_accuracy(history) #Decision boundary plot_multiclass_decision_boundary(model, X, y) #Predict y_pred = model.predict_classes(X, verbose=0) print(classification_report(y, y_pred)) plot_confusion_matrix(model, X, y) #That was pretty neat wasn't it? #Let's see how we can use it on a real dataset. #I will use the Bill authenticator dataset- the goal is to predict the class of the bill (which indicate true bill or false one) #Load data dataset=pd.read_csv('C:\\Users\\sagi\\Desktop\\Learning\\ML\\Datasets\\bill_authentication.csv') #Explore the data print(dataset.shape) print(dataset.info) dataset.head() #check the correlation of the features with 'Skewness' plt.figure(figsize=(5, 5)) sns.heatmap(dataset.corr()[['Skewness']], annot=True, vmin=-1, vmax=1) #check the correlation of the features with each other plt.figure(figsize=(10, 8)) sns.heatmap(dataset.corr(), annot=True, vmin=-1, vmax=1) #Check the distribution of the feature values dataset.hist(figsize=(10,8)) plt.tight_layout() #Looks like we need to normalize some of them because they are not on the same scale. 
#Copy and Normalize
df=dataset.copy()
ss=StandardScaler()
scale_features=['Skewness','Curtosis','Entropy','Variance']
df[scale_features] = ss.fit_transform(df[scale_features])

#Create the training and test sets
x=df.iloc[:, :4].values #use the standardized features prepared above; .values converts the pandas dataframe to a numpy array
y=df.iloc[:,4].values   #the class labels are unaffected by the scaling
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)
print(x_train.shape, y_train.shape, x_test.shape, y_test.shape)

#Logistic Regression Model
#define the model
lr_model = Sequential()
lr_model.add(Dense(1, input_shape=(x_train.shape[1],), activation='sigmoid'))

#compile
lr_model.compile(Adam(lr=0.01), 'binary_crossentropy', metrics=['accuracy'])

#fit
lr_history = lr_model.fit(x_train, y_train, verbose=1, epochs=50)
plot_loss_accuracy(lr_history)

#predict
y_pred = lr_model.predict_classes(x_test, verbose=0)
print(classification_report(y_test, y_pred))

plot_confusion_matrix(lr_model, x_test, y_test)

#ANN Model
deep_model = Sequential()
deep_model.add(Dense(64, input_shape=(x_train.shape[1],), activation='tanh'))
deep_model.add(Dense(16, activation='tanh'))
deep_model.add(Dense(1, activation='sigmoid'))
deep_model.compile(Adam(lr=0.01), 'binary_crossentropy', metrics=['accuracy'])

deep_history = deep_model.fit(x_train, y_train, verbose=0, epochs=30)
plot_loss_accuracy(deep_history)

#Comparing results between LR and ANN
plot_compare_histories([lr_history, deep_history], ['Logistic Reg', 'Deep ANN'])

y_pred = deep_model.predict_classes(x_test, verbose=0)
print(classification_report(y_test, y_pred))

plot_confusion_matrix(deep_model, x_test, y_test)
```
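The comparison helper above only plots training curves. To watch for overfitting, one can also pass validation data to `fit`. This is an optional sketch, not part of the original notebook; it reuses the banknote split and the same old-style Keras API used throughout.

```
#Optional: train the same deep architecture with validation tracking (reuses
#the x_train/x_test split above; val_loss and val_acc are added to history)
deep_model_val = Sequential()
deep_model_val.add(Dense(64, input_shape=(x_train.shape[1],), activation='tanh'))
deep_model_val.add(Dense(16, activation='tanh'))
deep_model_val.add(Dense(1, activation='sigmoid'))
deep_model_val.compile(Adam(lr=0.01), 'binary_crossentropy', metrics=['accuracy'])

val_history = deep_model_val.fit(x_train, y_train,
                                 validation_data=(x_test, y_test),
                                 verbose=0, epochs=30)

#plot training and validation curves together
pd.DataFrame(val_history.history).plot(figsize=(8, 6))
```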
> <h1> MODELING </h1> This notebook is focused on making sure that my dataset is encoded properly so that it can be modeled with sklearn. Then I modeled my dataset and computed the associated accuracy scores. ``` import pandas as pd import numpy as np import seaborn as sns; sns.set() import matplotlib.pyplot as plt from sklearn.cross_validation import train_test_split from sklearn.metrics import accuracy_score from sklearn.cross_validation import cross_val_score from sklearn.cross_validation import LeaveOneOut from sklearn.metrics import confusion_matrix email_data_frame = pd.read_csv("/data/thenateandre/email_data_frame.csv") # Removing unnecessary columns from the dataframe. email_data_frame = email_data_frame.drop(email_data_frame.columns.values[0], axis=1) email_data_frame = email_data_frame.drop(['email'], axis=1) email_data_frame.head() # Saving the label data as a separate pandas series and then dropping that column. email_y = email_data_frame['label'] email_data_frame = email_data_frame.drop(['label'], axis=1) # Using pandas one hot encoding. email_data_frame = pd.get_dummies(email_data_frame, columns=['content', 'from'], drop_first=True) email_data_frame.head() # All of the columns after one hot encoding. email_data_frame.columns # Saving the y_values as a numpy array. email_y = np.array(email_y) # Saving the x_values as a matrix. email_x = email_data_frame.as_matrix() # Ensuring that no information was lost throughout encoding. assert len(email_x) == len(email_y) ``` > <h3>KNeighbors Classification</h3> ``` from sklearn.neighbors import KNeighborsClassifier KN_model = KNeighborsClassifier(n_neighbors=1) X_train, X_test, y_train, y_test = train_test_split(email_x, email_y, random_state=0, train_size=0.8) KN_model.fit(X_train, y_train) # This is the KNeighbors model. 
KN_model_predict = KN_model.predict(X_test) accuracy_score(y_test, KN_model_predict) KN_model_cv_scores = cross_val_score(KN_model, email_x, email_y, cv=30) # ensuring the testing of all of the data KN_model_cv_scores.mean() KN_model_one_out_scores = cross_val_score(KN_model, email_x, email_y, cv=LeaveOneOut(len(email_x))) KN_model_one_out_scores.mean() mat = confusion_matrix(y_test, KN_model_predict) sns.heatmap(mat.T, square=True, fmt='d', annot=True, cbar=False, xticklabels=['spam', 'ham'], yticklabels=['spam', 'ham']) plt.ylabel('Predicted values') plt.xlabel('Actual Values') plt.title('Confusion Matrix of Ham vs Spam') ``` > <h3>Gaussian Generative Model</h3> ``` from sklearn.naive_bayes import GaussianNB GNB_model = GaussianNB() X_train, X_test, y_train, y_test = train_test_split(email_x, email_y, random_state=0, train_size=0.8) GNB_model.fit(X_train, y_train) GNB_model_predict = GNB_model.predict(X_test) accuracy_score(y_test, GNB_model_predict) GNB_model_cv_scores = cross_val_score(GNB_model, email_x, email_y, cv=30) GNB_model_cv_scores.mean() GNB_model_one_out_scores = cross_val_score(GNB_model, email_x, email_y, cv=LeaveOneOut(len(email_x))) GNB_model_one_out_scores.mean() mat = confusion_matrix(y_test, GNB_model_predict) sns.heatmap(mat.T, square=True, fmt='d', annot=True, cbar=False, xticklabels=['spam', 'ham'], yticklabels=['spam', 'ham']) plt.ylabel('Predicted values') plt.xlabel('Actual Values') plt.title('Confusion Matrix of Ham vs Spam') ``` > <h3>Support Vector Machines</h3> ``` from sklearn.svm import SVC SVC_model = SVC(kernel='linear', C=1E10) X_train, X_test, y_train, y_test = train_test_split(email_x, email_y, random_state=0, train_size=0.8) SVC_model.fit(X_train, y_train) SVC_model_predict = SVC_model.predict(X_test) accuracy_score(y_test, SVC_model_predict) ``` > <h3>Random Forests</h3> ``` from sklearn.tree import DecisionTreeClassifier RF_model = DecisionTreeClassifier() X_train, X_test, y_train, y_test = train_test_split(email_x, email_y, random_state=0, train_size=0.8) RF_model.fit(X_train, y_train) RF_model_predict = RF_model.predict(X_test) accuracy_score(y_test, RF_model_predict) RF_model_cv_scores = cross_val_score(RF_model, email_x, email_y, cv=30) # ensuring the testing of all of the data RF_model_cv_scores.mean() RF_model_one_out_scores = cross_val_score(RF_model, email_x, email_y, cv=LeaveOneOut(len(email_x))) RF_model_one_out_scores.mean() mat = confusion_matrix(y_test, RF_model_predict) sns.heatmap(mat.T, square=True, fmt='d', annot=True, cbar=False, xticklabels=['spam', 'ham'], yticklabels=['spam', 'ham']) plt.ylabel('Predicted values') plt.xlabel('Actual Values') plt.title('Confusion Matrix of Ham vs Spam') ``` > <h3>MLP Classifier (Neural Network)</h3> ``` from sklearn.neural_network import MLPClassifier clf_model = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1) X_train, X_test, y_train, y_test = train_test_split(email_x, email_y, random_state=0, train_size=0.8) clf_model.fit(X_train, y_train) clf_model_predict = clf_model.predict(X_test) accuracy_score(y_test, clf_model_predict) clf_model_cv_scores = cross_val_score(clf_model, email_x, email_y, cv=30) # ensuring the testing of all of the data clf_model_cv_scores.mean() mat = confusion_matrix(y_test, clf_model_predict) sns.heatmap(mat.T, square=True, fmt='d', annot=True, cbar=False, xticklabels=['spam', 'ham'], yticklabels=['spam', 'ham']) plt.ylabel('Predicted values') plt.xlabel('Actual Values') plt.title('Confusion Matrix of Ham vs Spam') ``` > <h3> Overall 
Model Comparison</h3>

```
# Model comparisons, not including SVM because it is the least efficient model
# Accuracy scores (using cross val score with cv=30):
print("KNeighbors Classification: " + str(KN_model_cv_scores.mean()))
print("Gaussian Generative Model: " + str(GNB_model_cv_scores.mean()))
print("Random Forests: " + str(RF_model_cv_scores.mean()))
print("MLP Classifier (Neural Network): " + str(clf_model_cv_scores.mean()))
```

The best models for my dataset are the MLP Classifier (Neural Network) and Random Forests.
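One caveat on the comparison: the "Random Forests" section fits a single `DecisionTreeClassifier`, so the score labeled "Random Forests" is really a decision-tree score. As a sketch (not part of the original results), an actual random forest could be scored with the same 30-fold protocol:

```
# Sketch: score a true random forest ensemble with the same 30-fold cross-validation.
from sklearn.ensemble import RandomForestClassifier

true_rf_model = RandomForestClassifier(n_estimators=100, random_state=0)
true_rf_cv_scores = cross_val_score(true_rf_model, email_x, email_y, cv=30)
print("Random Forest (ensemble): " + str(true_rf_cv_scores.mean()))
```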
github_jupyter
# Argon Free Particles [Langevin with Euler Integrator]

```
from matplotlib import pyplot as plt
import numpy as np
from tqdm import tqdm
import pyunitwizard as puw

puw.configure.load_library(['pint'])

rng = np.random.default_rng()

dim = 3
n_particles = 10 #200
masses = np.full(n_particles, 39.948) * puw.unit('amu/mol')
masses[0]

damping = 20.0/puw.unit('ps')
temperature = 300.0*puw.unit('K')
delta_t = 0.01*puw.unit('ps')

Kb = 1.38064852e-23*puw.unit('J/K')
NA = 6.02214086e23*puw.unit('1/mol')
KB = Kb*NA
KT = KB*temperature
KT

def potential(positions):
    return 0.0*puw.unit('J/mol')

def potential_forces(positions):
    return np.zeros(positions.shape) * puw.unit('J/mol/nm')

def dissipation_forces(damping, velocities, masses):
    return -damping*masses[:, np.newaxis]*velocities

def fluctuation_forces(damping, KT, dim, masses):
    global rng
    n_particles = masses.shape[0]
    sigma = np.sqrt(2.0*damping*KT*masses/puw.unit('seconds'))
    return rng.normal(size=[n_particles, dim]) * sigma[:, np.newaxis]

def euler_integrator(positions, velocities, forces, masses, delta_t):
    new_positions = positions + velocities * delta_t
    new_velocities = velocities + (forces/masses[:, np.newaxis]) * delta_t
    return new_positions, new_velocities

simulation_t = 0.10*puw.unit('ns')
n_steps = int(simulation_t/delta_t)

positions = np.zeros([n_particles, dim])*puw.unit('nm')
velocities = np.zeros([n_particles, dim])*puw.unit('nm/ps')

trajectory_positions = np.zeros([n_steps, n_particles, dim])*puw.unit('nm')
trajectory_velocities = np.zeros([n_steps, n_particles, dim])*puw.unit('nm/ps')
trajectory_time = np.zeros([n_steps])*puw.unit('ps')

trajectory_positions.nbytes/1024/1024/1024

trajectory_positions[0,:,:] = positions
trajectory_velocities[0,:,:] = velocities
trajectory_time[0] = 0.0*puw.unit('ps')

aa = (potential_forces(positions)+dissipation_forces(damping, velocities, masses))/masses[:, np.newaxis]
puw.convert(aa, 'nm/(ps*ps)')

fluctuation_forces(damping, KT, dim, masses)

for step_index in tqdm(range(1, n_steps)):
    forces = potential_forces(positions)
    forces += dissipation_forces(damping, velocities, masses)
    forces += fluctuation_forces(damping, KT, dim, masses)
    positions, velocities = euler_integrator(positions, velocities, forces, masses, delta_t)
    trajectory_positions[step_index, :, :] = positions[:, :]
    trajectory_velocities[step_index, :, :] = velocities[:, :]
    trajectory_time[step_index] = step_index*delta_t

particle_index = 0
plt.plot(trajectory_time, trajectory_positions[:, particle_index, 0])
plt.plot(trajectory_time, trajectory_positions[:, particle_index, 1])
plt.plot(trajectory_time, trajectory_positions[:, particle_index, 2])
plt.show()

trajectory_positions[:, particle_index, 0]
```
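If the unit bookkeeping with `pyunitwizard` obscures the integration scheme, the following unit-free sketch shows one Euler(-Maruyama) step of the same underdamped Langevin update for free particles. It is not the notebook's code: `mass`, `gamma`, `kT`, and `dt` are plain floats chosen for illustration, and the thermal noise is scaled so that the variance of each velocity increment is 2·gamma·kT·dt/mass. Note that `fluctuation_forces` above normalises its noise by `puw.unit('seconds')` rather than by the integration time step, so its effective noise strength may differ from this standard choice; that may be intentional, but it is worth checking before comparing trajectories quantitatively.

```
# Minimal, unit-free sketch of an Euler(-Maruyama) step for underdamped
# Langevin dynamics of free particles; illustrative only, not the
# notebook's unit-aware code.
import numpy as np

rng = np.random.default_rng(0)

n_particles, dim = 10, 3
mass = 39.948   # illustrative mass (arbitrary units)
gamma = 20.0    # friction coefficient, 1/time
kT = 2.494      # thermal energy (arbitrary units)
dt = 0.01       # time step

x = np.zeros((n_particles, dim))
v = np.zeros((n_particles, dim))

def langevin_euler_step(x, v):
    # Free particles: no potential force, only friction + thermal noise.
    friction = -gamma * v
    noise = rng.normal(size=v.shape) * np.sqrt(2.0 * gamma * kT / (mass * dt))
    a = friction + noise          # acceleration (noise already mass-normalised)
    return x + v * dt, v + a * dt

for _ in range(1000):
    x, v = langevin_euler_step(x, v)
```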
``` import matplotlib import matplotlib.pyplot as plt import numpy as np import pydicom import nibabel as nib from ipywidgets import interact, interactive, fixed, interact_manual import ipywidgets as widgets from IPython.display import display, clear_output import os import os.path import scipy.ndimage import scipy.signal def plot_images(img_stack, *, labels=None, figdim=5, colormap=plt.cm.gray, **imshowkwarg): """ Funzione di utilità per mostrare una grigli di immagini, impostando la lista delle legende e, opzionalmente, la colormap che di default è impostata a livelli di grigio """ nimag = len(img_stack) # creiamo il layout nrows = 1 if nimag <= 3 else 2 if nimag <= 6 else 3 if nimag <=12 else 4 if nimag <= 16 else 5 ncols = nimag if nrows == 1 else 2 if nimag == 4 else 3 if nimag <=9 else 4 if nimag <= 16 else 5 # generiamo la figura con un canvas ce riserva 5x5 pollici a immagine # e i riferimenti alle singole immagini fig, axs = plt.subplots(nrows,ncols,squeeze=False,figsize=(figdim*ncols, figdim*nrows)) img = 0 # contatore delle immagini for r in range(nrows): for c in range(ncols): # eliminiamo lo stile del grafico axs[r, c].tick_params(axis='both',\ which='both',\ bottom=False,top=False,right=False,left=False,\ labelbottom=False,labelleft=False) plt.gca().spines['top'].set_visible(False) plt.gca().spines['bottom'].set_visible(False) plt.gca().spines['left'].set_visible(False) plt.gca().spines['right'].set_visible(False) # se ci troviamo nella posizione di una # immagine allora la mostriamo if r*ncols + c < nimag: if labels != None: axs[r, c].set_title(labels[img]) axs[r, c].imshow(img_stack[img], cmap=colormap, **imshowkwarg) img += 1 # immagine successiva return axs def load_dcm_volume(directory): """ Funzione che carica un volume di dati DICOM da una cartella assumendo che questa contenga solamente un insieme di file .dcm """ # leggiamo la lista dei file nella cartella files = os.listdir(directory) files.sort() # leggiamo il primo file per ottenere le dimensioni e il tipo della singola slice file = pydicom.dcmread(f'{directory}/{files[0]}') # creiamo il volume volume = np.full((len(files),file.pixel_array.shape[0],file.pixel_array.shape[1]),\ 1,dtype=file.pixel_array.dtype) # carichiamo la prima slice, già letta, e successivamente le altre volume[0,:,:] = file.pixel_array for i in range(1,len(files)): volume[i,:,:] = pydicom.dcmread(f'{directory}/{files[i]}').pixel_array vol = np.copy(volume) if file.SeriesDescription != '' and file.SeriesDescription.find('Sag') != -1: vol=np.reshape(vol,(volume.shape[1],volume.shape[2],volume.shape[0])) for i in range(volume[:,0,:].shape[1]): vol[i,:,:] = volume[:,i,:].T if file.SeriesDescription != '' and file.SeriesDescription.find('Cor') != -1: vol=np.reshape(vol,(volume.shape[1],volume.shape[0],volume.shape[2])) for i in range(volume[:,0,:].shape[0]): vol[i,:,:] = volume[:,i,:] return vol def window(image, lower_threshold, upper_threshold): """ Funzione che calcola la finestratura della dinamica di input per mapparla in [0 - 255] """ # Saturiamo le soglie se sono fuori range rispetto all'immagine di ingresso if lower_threshold < np.min(image): lower_threshold = np.min(image) if upper_threshold > np.max(image): upper_threshold = np.max(image) # Creiamo la nuova immagine riempita di 1 new_image = np.full_like(image, 1) for i in range(image.shape[0]): for j in range(image.shape[1]): # Applichiamo la trasformazione puntualmente new_image[i, j] = 0 if image[i, j] < lower_threshold \ else 255 if image[i, j] > upper_threshold \ else 256*(image[i, 
j] - lower_threshold)//(upper_threshold - lower_threshold) return new_image def mini_pacs_viewer(directory): """ Funzione di visualizzazione dei tra piani di un volume dicom caricato da una cartella che raccoglie una serie """ # Carichiamo il volume DICOM dcm_volume = load_dcm_volume(directory) # creiamo gli slider per scorrere sui diversi piani di proiezione axial = widgets.IntSlider(min=0,max=dcm_volume.shape[0]-1,\ value=dcm_volume.shape[0]//2,description='Ax',orientation='vertical') sagittal = widgets.IntSlider(min=0,max=dcm_volume.shape[2]-1,\ value=dcm_volume.shape[2]//2,description='Sag',orientation='vertical') coronal = widgets.IntSlider(min=0,max=dcm_volume.shape[1]-1,\ value=dcm_volume.shape[1]//2,description='Cor',orientation='vertical') # creiamo gli slider per modificare la finestratura della gamma dinamica top = np.max(dcm_volume) bottom = np.min(dcm_volume) lower = widgets.IntSlider(min=bottom,max=top,\ value=bottom,description='Minimo',orientation='horizontal') upper = widgets.IntSlider(min=bottom,max=top,\ value=top,description='Massimo',orientation='horizontal') # gestore degli eventi def draw_slices(a,s,c,l,u): # modifica del contrasto ax = window(dcm_volume[a,:,:],l,u) sag = window(dcm_volume[:,:,s],l,u) cor = window(dcm_volume[:,c,:],l,u) # display delle tre slice axes = plot_images([ax,sag,cor],\ labels=['Piano Assiale','Piano Sagittale','Piano Coronale']) # display della traccia del piano assiale axes[0, 1].plot([0,dcm_volume[:,:,0].shape[1]-1],[a, a],'r-') axes[0, 2].plot([0,dcm_volume[:,0,:].shape[1]-1],[a, a],'r-') # display della traccia del piano sagittale axes[0, 0].plot([s, s],[0,dcm_volume[0,:,:].shape[0]-1],'g-') axes[0, 2].plot([s, s],[0,dcm_volume[:,0,:].shape[0]-1],'g-') # display della traccia del piano coronale axes[0, 1].plot([c, c],[0,dcm_volume[:,:,0].shape[0]-1],'b-') axes[0, 0].plot([0,dcm_volume[0,:,:].shape[1]-1],[c, c],'b-') # Creazione del widget w = interactive(draw_slices,a=axial,s=sagittal,c=coronal,l=lower,u=upper) w.layout.flex_flow='row wrap' display(w) return dcm_volume #mini_pacs_viewer('Data/Sag_FLAIR+FATSAT_CUBE_3') #mini_pacs_viewer('Data/Ax_T2_FRFSE_8') volume = mini_pacs_viewer('Data/series-00000') lung = {'level': -550, 'window': 1500} soft = {'level': 40, 'window': 440} bone = {'level': 400, 'window': 1800} info = pydicom.dcmread('Data/series-00000/image-00000.dcm') intercept, slope = (int(info.RescaleIntercept), int(info.RescaleSlope)) def compute_window(win, intercept, slope): lower = (win['level'] - win['window']//2 - intercept)/slope upper = (win['level'] + win['window']//2 - intercept)/slope return (lower, upper) (ll, lu) = compute_window(lung, intercept, slope) (sl, su) = compute_window(soft, intercept, slope) (bl, bu) = compute_window(bone, intercept, slope) lung_slice = window(volume[:,256,:], ll, lu) soft_slice = window(volume[:,256,:], sl, su) bone_slice = window(volume[:,256,:], bl, bu) axes = plot_images([volume[:,256,:], lung_slice, soft_slice, bone_slice],\ labels=['Originale', 'Finestra Polmoni', 'Finestra tes. 
molli', 'Finestra ossa']) def collect_values(row, threshold=0.7): # Calcoliamo il gradiente del segnale row # e individuiamo i picchi superiori a threshlod del massimo # o inferiori a -threshold del minimo gg = np.gradient(row) peaks = scipy.signal.find_peaks(gg,height=0.7*np.max(gg)) igg = gg * -1 invpeaks = scipy.signal.find_peaks(igg,height=0.7*np.max(igg)) # Tenendo conto della forma del nostro segnale che è sempre positivo o nullo # scandiamo gli array degli indici dei picchi copiando in uscita i valori che # si trovano all'interno degli intervalli di tipo [picco positivo - picco negativo] out = np.zeros(row.shape) j = 0 start = 0 stop = 0 if len(peaks[0]) > 0 and len(invpeaks[0]) > 0: for i in range(len(peaks[0])): start = peaks[0][i] if start <= stop: continue while j < len(invpeaks[0]) - 1 and invpeaks[0][j] <= start: j = j+1 stop = invpeaks[0][j] out[start:stop+1] = row[start:stop+1] else: out = row return out def projection(volume, mode='mip', direction='coronal', alpha=None): if mode == 'composite': if alpha.all() == None: raise(ValueError('alpha must have the same shape as the input volume')) if np.max(alpha) > 1.0 or np.min(alpha) < 0.0: raise(ValueError('All alpha values must range in [0, 1]')) axis = 1 if direction == 'coronal' else 0 if direction == 'axial' else 2 shape = (volume.shape[0] if axis != 0 else volume.shape[1],\ volume.shape[1] if axis ==2 else volume.shape[2]) proj = np.zeros(shape=shape,dtype='float') if mode == 'mip': proj = np.max(volume,axis=axis) else: for i in range(shape[0]): for j in range(shape[1]): row = collect_values(volume[i,:,j] if axis ==1\ else volume[i,j,:] if axis == 2 else volume[:,i,j]) if mode == 'minip': if len(row[row > 0]) > 0: proj[i,j] = np.amin(row[row > 0]) else: proj[i,j] = 0 if mode == 'avg': if len(row[row > 0]) > 0: proj[i,j] = np.average(row[row > 0]) else: proj[i,j] = 0 if mode == 'composite': if len(row[row > 0]) == 0: proj[i, j] = 0 else: c = 0 for k in np.arange(alpha.shape[axis]-1,-1,-1): a = alpha[i, j, k] if axis == 2 else alpha[k, i, j]\ if axis == 0 else alpha[i, k, j] c += (1 - a)*c + a*row[k] proj[i, j] = c return proj p = projection(volume,mode='minip') axes = plot_images([p]) ```
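The `window` function above applies the level/window mapping one pixel at a time with nested Python loops, which becomes slow on full CT volumes. Below is a minimal vectorised sketch of the same idea; it is not a drop-in replacement for the notebook's function (it assumes a NumPy array input with `upper_threshold > lower_threshold` and returns `uint8`), but reusing the `ll`/`lu` bounds computed above it should give a visually equivalent lung window.

```
# Vectorised alternative to the per-pixel window() loop above; a sketch,
# not the notebook's code. Assumes upper_threshold > lower_threshold.
import numpy as np

def window_vectorized(image, lower_threshold, upper_threshold):
    img = image.astype(np.float64)
    # Clamp to the requested window, then rescale to the 8-bit display range.
    clipped = np.clip(img, lower_threshold, upper_threshold)
    scaled = (clipped - lower_threshold) / (upper_threshold - lower_threshold)
    return (scaled * 255).astype(np.uint8)

# Example (hypothetical), reusing names defined in the cells above:
# lung_slice = window_vectorized(volume[:, 256, :], ll, lu)
```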
``` from datetime import datetime,timedelta from collections import defaultdict,Counter from pprint import pprint from tqdm import tqdm import re import pymongo from pymongo import InsertOne, DeleteMany, ReplaceOne, UpdateOne from pymongo.errors import BulkWriteError from fuzzywuzzy import fuzz from nltk.corpus import stopwords from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer import itertools list_stopWords=list(set(stopwords.words('english'))) client = pymongo.MongoClient('localhost:27017') db = client.tweet #db.authenticate('admin','lixiepeng') import pandas as pd import spacy nlp = spacy.load('en_core_web_md') events = [e for e in db.current_event.find({'event.date':{'$gt':'2017-09-01','$lt':'2017-09-05'}},{'_id':1,'event.class':1,'event.date':1,'event.title':1,'event.description':1})] events[0] events = [{'id':e['_id'],'class':e['event']['class'],'date':e['event']['date'],'title':e['event']['title'],'description':e['event']['description']} for e in events] df_events = pd.DataFrame.from_records(events) df_events.head() df_events.iloc[0]['id'] tweets = [tweet for tweet in db.paper.find({'event_id':df_events.iloc[0]['id']},{'tweet.standard_text':1})] tweets def reference_similatity(reference): doc_reference = nlp(reference['description']) print(reference['description']) tweets = [] filter_dict = {'event_id':reference['id'],'tweet.lang':'en'} #filter_dict = {'event_id':reference['id'],'tweet.lang':'en','tweet.media.card_url':None} #filter_dict = {'event_id':reference['id'],'tweet.lang':'en','tweet.media.card_url':{'$ne':None}} #filter_dict = {'event_id':reference['id'],'tweet.lang':'en','f':'&f=news'} query_dict = {'tweet.standard_text':1} records = [i for i in db.pos.find(filter_dict,query_dict)]+[i for i in db.paper.find(filter_dict,query_dict)] print(len(records)) for tweet in records: tweet_id = tweet['_id'] tweet_text = tweet['tweet']['standard_text'] doc_tweet = nlp(tweet_text) coverage_num,jacard_similarity = entity_coverage(doc_reference,doc_tweet) char_match,token_match = fuzzy_string_matching(reference['description'],tweet_text) tweets.append((tweet_id,doc_reference.similarity(doc_tweet),tweet_text)) #tweets.append((tweet_id,doc_reference.similarity(doc_tweet),coverage_num,jacard_similarity,char_match,token_match,tweet_text)) #tweets = [(,doc.similarity(nlp(tweet['tweet']['standard_text'])),entity_coverage,tweet['tweet']['standard_text']) for tweet in db.pos.find({'event_id':record['id']},{'tweet.standard_text':1})] + \ #[(tweet['_id'],doc.similarity(nlp(tweet['tweet']['standard_text'])),entity_coverage,tweet['tweet']['standard_text']) for tweet in db.paper.find({'event_id':record['id']},{'tweet.standard_text':1})] tweets = sorted(tweets,key=lambda x:x[1],reverse=True) for i in tweets: print(i) reference_similatity(df_events.iloc[11]) def get_ents_set(doc): ents = [ent.text for ent in doc.ents] return set(ents) def entity_coverage(doc_reference,doc_tweet): ents_reference = get_ents_set(doc_reference) ents_tweet = get_ents_set(doc_tweet) coverage_num = len(ents_reference & ents_tweet) jacard_similarity = coverage_num/len(ents_reference | ents_tweet) return coverage_num,jacard_similarity def fuzzy_string_matching(text_reference,text_tweet): return fuzz.ratio(text_reference,text_tweet),fuzz.token_set_ratio(text_reference,text_tweet) ```
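`reference_similatity` (as spelled in the original) ranks tweets by mixing three signals: spaCy document similarity, named-entity overlap, and fuzzywuzzy string ratios. The self-contained sketch below is not the notebook's code and uses made-up example strings; it simply shows the three signals computed for one reference/tweet pair, with the same `en_core_web_md` model and the same Jaccard and ratio definitions as `entity_coverage` and `fuzzy_string_matching` above.

```
# Sketch of the three similarity signals for a single pair of texts.
# The reference and tweet strings are illustrative, not real data.
import spacy
from fuzzywuzzy import fuzz

nlp = spacy.load('en_core_web_md')   # same model the notebook loads

reference = "Hurricane Irma strengthens to a Category 5 storm in the Atlantic."
tweet = "Irma is now a Category 5 hurricane, the strongest in the Atlantic this year."

doc_ref, doc_tweet = nlp(reference), nlp(tweet)

# Vector-based similarity from the spaCy model.
vector_sim = doc_ref.similarity(doc_tweet)

# Named-entity overlap (Jaccard), as in entity_coverage() above,
# with a guard against an empty entity union.
ents_ref = {ent.text for ent in doc_ref.ents}
ents_tweet = {ent.text for ent in doc_tweet.ents}
union = ents_ref | ents_tweet
jaccard = len(ents_ref & ents_tweet) / len(union) if union else 0.0

# Fuzzy string matching, as in fuzzy_string_matching() above.
char_ratio = fuzz.ratio(reference, tweet)
token_ratio = fuzz.token_set_ratio(reference, tweet)

print(vector_sim, jaccard, char_ratio, token_ratio)
```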
``` from sobol import * from van_der_corput import * import numpy as np import matplotlib.pyplot as plt from scipy.stats import uniform import pandas as pd from halton import * from ctypes import * from hammersley import * import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline halton_seq_2d = halton_sequence(0,499,2).T np.savetxt(r'halton_2d.txt', halton_seq_2d) halton_seq_3d = halton_sequence(0,199,3).T np.savetxt(r'halton_3d.txt', halton_seq_3d) ham_seq_2d = hammersley_sequence(0, 499, 2, 2).T np.savetxt(r'hammersley_2d_500.txt', ham_seq_2d) ham_seq_2d = hammersley_sequence(0, 499, 2, 2).T np.savetxt(r'hammersley_2d.txt', ham_seq_2d) ham_seq_3d = hammersley_sequence(0, 99, 3, 2).T np.savetxt(r'hammersley_3d.txt', ham_seq_3d) ham_seq_3d = hammersley_sequence(0, 49, 3, 2).T np.savetxt(r'hammersley_3d_50.txt', ham_seq_3d) rs2 = [] ss2 = [] seed2 = tau_sobol(2) for _ in range(500): [r2, seed2] = i4_sobol(2, seed2) rs2.append(list(r2)) ss2.append(seed2) data2 = pd.DataFrame(rs2) np.savetxt(r'sobol_2d.txt', data2) rs = [] ss = [] seed = tau_sobol(3) for _ in range(200): [r, seed] = i4_sobol(3, seed) rs.append(list(r)) ss.append(seed) data = pd.DataFrame(rs) plt.plot(data[0], data[2], 'ro') np.savetxt(r'sobol_3d.txt', data) xs2 = [] ys2 = [] for _ in range(500): xs2.append(uniform.rvs(0)) ys2.append(uniform.rvs(0)) uniform_2d = pd.DataFrame((xs2,ys2)).iloc[:,:].T np.savetxt(r'uniform_2d.txt', uniform_2d.values) xs = [] ys = [] zs = [] for _ in range(200): xs.append(uniform.rvs(0)) ys.append(uniform.rvs(0)) zs.append(uniform.rvs(0)) uniform_3d = pd.DataFrame((xs,ys,zs)).T np.savetxt(r'uniform_3d.txt', uniform_3d.values) plt.plot(xs, ys, 'ro') data_uniform = pd.DataFrame((xs,ys)).T np.savetxt(r'uniform_numbers.txt', data_uniform.values) plt.plot(uniform.ppf(data[0]), uniform.ppf(data[1]), 'ro') data_ppf = pd.DataFrame((uniform.ppf(data[0]), uniform.ppf(data[1]))).T np.savetxt(r'ppf_numbers.txt', data_ppf.values) cov(uniform.ppf(data[0]), uniform.ppf(data[1])) norm_l = [0.152201,0.102201,0.0734617,0.076901,0.0625,0.0633167,0.0522844,0.0539463,0.0562613,0.0534959] norm_u = [0.152613,0.102613,0.0737219,0.0774836,0.0630476,0.0636339,0.0528569,0.0540716,0.0569904,0.0538465] norm_m = np.average(np.array([norm_l,norm_u]),axis=0) halton_l = [0.0866667,0.0502392,0.0397827,0.02826,0.0199059,0.0199414,0.0190287,0.0161716,0.0122184,0.0115593] halton_u = [0.0875075,0.050531,0.0406689,0.028751,0.0206342,0.0207306,0.0194098,0.0166377,0.0128433,0.0123616] halton_m = np.average(np.array([halton_l,halton_u]),axis=0) hammersley_l = [0.06875,0.0378125,0.0252083,0.0205469,0.0164375,0.0142448,0.0122098,0.0107617,0.00970486,0.00889062] hammersley_u = [0.0690406,0.0379603,0.0259349,0.0208379,0.0171046,0.0149597,0.0130978,0.0114187,0.0106009,0.00929946] hammersley_m = np.average(np.array([hammersley_l,hammersley_u]),axis=0) sobol_l=[0.0680859,0.0398437,0.0274105,0.0215149,0.0153127,0.0150315,0.0155077,0.0106149,0.0103954,0.00786038] sobol_u=[0.0688848,0.040244,0.028961,0.0223575,0.0162023,0.0158442,0.0160774,0.0111295,0.0109213,0.00869907] sobol_m=np.average(np.array([sobol_l,sobol_u]),axis=0) niederreiter_l=[0.0793066,0.0460742,0.0292813,0.0209969,0.0161554,0.0184124,0.0172153,0.0115395,0.0111083,0.0121728] niederreiter_u=[0.0796731,0.0465015,0.0300493,0.0213977,0.0170503,0.0191763,0.0177153,0.0124522,0.0117774,0.0126991] niederreiter_m=np.average(np.array([niederreiter_l,niederreiter_u]),axis=0) x = np.arange(50,550,50) data = pd.DataFrame({ 'number': np.tile(x,5), 'label': 
['MT19937']*10+['Halton']*10+['Hammersley']*10+['Sobol']*10+['Niederreiter']*10, 'star discrepancy': np.append(np.append(np.append(np.append(norm_m,halton_m),hammersley_m),sobol_m),niederreiter_m) }) plt.figure(figsize=(10,10)) sns_plot = sns.lineplot(x='number',y='star discrepancy',hue='label',data=data) sns_plot.set_yscale('symlog') sns_plot.set_xticks(x) norm_2d_l = [0.220478,0.177592,0.145534,0.112201,0.102201,0.0835842,0.0686998,0.0698417,0.0679062,0.076901] norm_2d_u = [0.220716,0.177916,0.145946,0.112613,0.102613,0.0842435,0.06896,0.0703523,0.0681664,0.0774836] norm_2d_m = np.average(np.array([norm_2d_l,norm_2d_u]),axis=0) halton_2d_l = [0.173843,0.0988426,0.0653549,0.05,0.0502392,0.0422711,0.0368616,0.0292631,0.0320023,0.02826] halton_2d_u = [0.174351,0.0997694,0.065673,0.050796,0.050531,0.0430051,0.0372239,0.0298632,0.0326345,0.028751] halton_2d_m = np.average(np.array([halton_2d_l,halton_2d_u]),axis=0) sobol_2d_l=[0.13125,0.0835938,0.0484375,0.0505859,0.0398437,0.0251302,0.027941,0.0284485,0.0229953,0.0215149] sobol_2d_u=[0.131915,0.0840564,0.0492664,0.0513848,0.040244,0.0255617,0.0284411,0.0290759,0.0236088,0.0223575] sobol_2d_m=np.average(np.array([sobol_2d_l,sobol_2d_u]),axis=0) niederreiter_2d_2_l=[0.209375,0.0835938,0.0644531,0.0451907,0.0460742,0.0372233,0.027822,0.0260742,0.0255918,0.0209969] niederreiter_2d_2_u=[0.209954,0.0840564,0.065252,0.0459392,0.0465015,0.038109,0.0287652,0.0265808,0.0263902,0.0213977] niederreiter_2d_2_m=np.average(np.array([niederreiter_2d_2_l,niederreiter_2d_2_u]),axis=0) niederreiter_2d_3_l=[0.116255,0.121442,0.0695631,0.0357535,0.0466431,0.0441579,0.0332615,0.0280169,0.0287416,0.0247486] niederreiter_2d_3_u=[0.116931,0.122164,0.0701622,0.0361236,0.0468892,0.0447588,0.0337961,0.0283526,0.0292454,0.0251581] niederreiter_2d_3_m=np.average(np.array([niederreiter_2d_3_l,niederreiter_2d_3_u]),axis=0) x = np.arange(20,220,20) data_2d = pd.DataFrame({ 'number': np.tile(x,5), 'sequence': ['MT19937']*10+['Halton']*10+['Sobol']*10+['Niederreiter(base2)']*10+['Niederreiter(base3)']*10, 'star discrepancy': np.append(np.append(np.append(np.append(norm_2d_m,halton_2d_m),sobol_2d_m),niederreiter_2d_2_m),niederreiter_2d_3_m), 'dimension': ['2D']*50 }) plt.figure(figsize=(10,10)) sns_plot = sns.lineplot(x='number',y='star discrepancy',hue='sequence',data=data_2d) sns_plot.set_yscale('symlog') sns_plot.set_xticks(x) norm_3d_l = [0.233831,0.229981,0.161374,0.211107,0.186068,0.196867,0.145398,0.135788,0.120125,0.109395] norm_3d_u = [0.234559,0.23069,0.162209,0.211589,0.186741,0.19757,0.146,0.136434,0.120728,0.109849] norm_3d_m = np.average(np.array([norm_3d_l,norm_3d_u]),axis=0) halton_3d_l = [0.20787,0.147481,0.108046,0.0825,0.0675,0.0584905,0.0639947,0.0495864,0.044429,0.0408333] halton_3d_u = [0.208644,0.148149,0.108798,0.0830124,0.0680389,0.0590979,0.0644056,0.0501241,0.0448818,0.0414037] halton_3d_m = np.average(np.array([halton_3d_l,halton_3d_u]),axis=0) sobol_3d_l = [0.177417,0.111667,0.0805227,0.0786194,0.060575,0.0440732,0.0404645,0.0406897,0.0361581,0.0365189] sobol_3d_u = [0.17805,0.112065,0.0809946,0.0791973,0.0609938,0.0447793,0.0410534,0.041348,0.0370894,0.037214] sobol_3d_m = np.average(np.array([sobol_3d_l,sobol_3d_u]),axis=0) niederreiter_3d_2_l = [0.258203,0.110364,0.0846781,0.0731812,0.0689217,0.0488424,0.0431422,0.0363159,0.0394795,0.03681] niederreiter_3d_2_u = [0.258691,0.111097,0.0849967,0.0737839,0.0695926,0.0491874,0.0437039,0.0368911,0.0400258,0.0375931] niederreiter_3d_2_m = 
np.average(np.array([niederreiter_3d_2_l,niederreiter_3d_2_u]),axis=0) niederreiter_3d_3_l = [0.179492,0.183609,0.110682,0.0640263,0.0794885,0.071369,0.0521653,0.0500738,0.0556956,0.0391532] niederreiter_3d_3_u = [0.18022,0.184066,0.111209,0.0646516,0.0801546,0.0721232,0.0530213,0.0508522,0.0561501,0.0396149] niederreiter_3d_3_m = np.average(np.array([niederreiter_3d_3_l,niederreiter_3d_3_u]),axis=0) x = np.arange(20,220,20) data_3d = pd.DataFrame({ 'number': np.tile(x,5), 'sequence': ['MT19937']*10+['Halton']*10+['Sobol']*10+['Niederreiter(base2)']*10+['Niederreiter(base3)']*10, 'star discrepancy': np.append(np.append(np.append(np.append(norm_3d_m,halton_3d_m),sobol_3d_m),niederreiter_3d_2_m),niederreiter_3d_3_m), 'dimension': ['3D']*50 }) plt.figure(figsize=(10,10)) sns_plot = sns.lineplot(x='number',y='star discrepancy',hue='sequence',data=data_3d) sns_plot.set_yscale('symlog') sns_plot.set_xticks(x) data = pd.concat([data_2d, data_3d]) sns.set(font_scale = 1.3) g = sns.FacetGrid(data, hue="sequence", col="dimension", height=5) g.map(sns.lineplot, "number", "star discrepancy") g.set(yscale='symlog',xticks=x) g.add_legend() # plt.savefig('star_discrepancy.png') g.fig.savefig('star_discrepancy.png',bbox_inches='tight') ```
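Two side notes on the cell above: the bare `cov(...)` call will raise a `NameError` unless a `cov` function is defined elsewhere (`np.cov` appears to be intended), and the point sets come from local `sobol`/`halton`/`hammersley` modules that are not shown here. If those modules are unavailable, recent SciPy releases (1.7+) ship comparable generators in `scipy.stats.qmc`; the sketch below is an assumption about available tooling, not what was used to produce the tabulated star-discrepancy values.

```
# Sketch (not the notebook's code) of generating similar low-discrepancy
# point sets with scipy.stats.qmc instead of the local modules above.
import numpy as np
from scipy.stats import qmc

n, d = 512, 2   # 512 is a power of two, which suits the Sobol generator

sobol_pts = qmc.Sobol(d=d, scramble=False).random(n)
halton_pts = qmc.Halton(d=d, scramble=False).random(n)
uniform_pts = np.random.default_rng(0).random((n, d))

# qmc.discrepancy reports an L2-type discrepancy, not the star discrepancy
# bounds tabulated above, but it typically orders the sequences the same way.
for name, pts in [("Sobol", sobol_pts), ("Halton", halton_pts), ("MT19937 uniform", uniform_pts)]:
    print(name, qmc.discrepancy(pts))
```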
Welcome to HoloViews! This tutorial explains the basics of how to use HoloViews to explore your data. If this is your first contact with HoloViews, you may want to start by looking at our [showcase](Showcase.ipynb) to get a good idea of what can be achieved with HoloViews. If this introduction does not cover the type of visualizations you need, you should check out our [Elements](Elements.ipynb) and [Containers](Containers.ipynb) components to see what else is available.

## What is HoloViews?

HoloViews allows you to collect and annotate your data in a way that reveals it naturally, with a minimum of effort needed for you to see your data as it actually *is*. HoloViews is not a plotting library -- it connects your data to plotting code implemented in other packages, such as [matplotlib](http://matplotlib.org/) or [Bokeh](http://bokeh.pydata.org). HoloViews is also not primarily a mass storage or archival data format like [HDF5](http://www.h5py.org/) -- it is designed to package your data to make it maximally visualizable and viewable interactively.

If you supply just enough additional information to the data of interest, HoloViews allows you to store, index, slice, analyze, reduce, compose, display, and animate your data as naturally as possible. HoloViews makes your numerical data come alive, revealing itself easily and without extensive coding. Here are a few of the things HoloViews allows you to associate with your data:

* ***The [Element](Elements) type***. This encapsulates your data and is the most fundamental indicator of how your data can be analyzed and displayed. For instance, if you wrap a 2D numpy array in an ``Image`` it will be displayed as an image with a colormap by default, a ``Curve`` will be presented as a line plot on an axis, and so on. Once your data has been encapsulated in an ``Element`` object, other ``Elements`` can easily be created from it, such as obtaining a ``Curve`` by taking a cross-section of an ``Image``.

* ***Dimensions of your data***. The key dimensions (``kdims``) describe how your data can be indexed. The value dimensions (``vdims``) describe what the resulting indexed data represents. A numerical ``Dimension`` can have a name, type, range, and unit. This information allows HoloViews to rescale and label axes and allows HoloViews to be smart in how it processes your data.

* ***The multi-dimensional space in which your data resides***. This may be space as we normally think of it (in *x*, *y*, and *z* coordinates). It may be the spatial position of one component relative to another. Or it may be an entirely abstract space, such as a parameter space or a list of experiments done on different days. Whatever multi-dimensional space characterizes how one chunk of your data relates to another chunk, you can embed your data in that space easily, sparsely populating whatever region of that space you want to analyze.

* ***How your data should be grouped for display***. In short, how you want your data to be organized for visualization. If you have a collection of points that was computed from an image, you can easily overlay your points over the image. As a result you have something that both displays sensibly, and is grouped together in a semantically meaningful way.

HoloViews can display your data even if it knows only the [Element](Elements.ipynb) type, which lets HoloViews stay out of your way when initially exploring your data, offering immediate feedback with reasonable default visualizations.
As your analysis becomes more complex and your research progresses, you may offer more of the useful metadata above so that HoloViews will automatically improve your displayed figures accordingly. Throughout, all you need to supply is this metadata plus optional and separate plotting hints (such as choosing specific colors if you like), rather than having to write cumbersome code to put figures together or having to paste bits together manually in an external drawing or plotting program.

Note that the HoloViews data components have only minimal required dependencies (Numpy and Param, both with no required dependencies of their own). This data format can thus be integrated directly into your research or development code, for maximum convenience and flexibility (see e.g. the [ImaGen](http://ioam.github.io/imagen) library for an example). Plotting implementations are currently provided for matplotlib and Bokeh, and other plotting packages could be used for the same data in the future if needed. Similarly, HoloViews provides strong support for the [IPython/Jupyter notebook](http://ipython.org/notebook.html) interface, and we recommend using the notebook for building [reproducible yet interactive workflows](Exporting.ipynb), but none of the components require IPython either. Thus HoloViews is designed to fit into your existing workflow, without adding complicated dependencies.

## Getting Started

To enable IPython integration, you need to load the IPython extension as follows:

```
import holoviews as hv
hv.notebook_extension()
```

We'll also need Numpy for some of our examples:

```
import numpy as np
```

## Interactive Documentation <a id='ParamDoc'></a>

HoloViews has very well-documented and error-checked constructors for every class (provided by the [Param](https://ioam.github.io/param/) library). There are a number of convenient ways to access this information interactively. E.g. if you have imported ``holoviews`` and ``Element`` and have instantiated an object of that type:

```python
import holoviews as hv
hv.Element(None, group='Value', label='Label')
```

You can now access the online documentation in the following ways:

* ``hv.help(Element)`` or ``hv.help(e)``: description and parameter documentation for an object or type
* ``hv.help(Element,pattern="group")``: only show help items that match the specified regular expression (the string "group" in this example)
* ``Element(<Shift+TAB>`` in IPython: Repeatedly pressing ``<Shift+TAB>`` after opening an object constructor will get you more information on each press, eventually showing the full output of ``hv.help(Element)``
* ``hv.help(Element, visualization=True)`` or ``hv.help(e, visualization=True)``: description of options for *visualizing* an ``Element`` or the specific object ``e``, not the options for the object itself
* ``%%output info=True`` on an IPython/Jupyter notebook cell or ``%output info=True`` on the whole notebook: show ``hv.help(o, visualization=True)`` for every HoloViews object ``o`` as it is returned in a cell.

Lastly, you can tab-complete nearly all arguments to HoloViews classes, so if you try ``Element(vd<TAB>``, you will see the available keyword arguments (``vdims`` in this case). All of these forms of help are described in detail in the [options](Options.ipynb) tutorial.

## A simple visualization

To begin, let's see how HoloViews stays out of your way when initially exploring some data. Let's view an image, selecting the appropriate [RGB Element](Elements.ipynb#RGB).
Now, although we could immediately load our image into the ``RGB`` object, we will first load it into a raw Numpy array (by specifying ``array=True``):

```
parrot = hv.RGB.load_image('../assets/macaw.png', array=True)
print("%s with shape %s" % (type(parrot),parrot.shape))
```

As we can see, this 400&times;400 image data array has four channels (the fourth being an unused alpha channel). Now let us make an ``RGB`` element to wrap up this Numpy array with its associated label:

```
rgb_parrot = hv.RGB(parrot, label='Macaw')
rgb_parrot
```

Here ``rgb_parrot`` is an ``RGB`` HoloViews element, which requires 3 or 4 dimensional data and can store an associated label. ``rgb_parrot`` is *not* a plot -- it is just a data structure with some metadata. The ``holoviews.ipython`` extension, in turn, makes sure that any ``RGB`` element is displayed appropriately, i.e. as a color image with an associated optional title, plotted using matplotlib. But the ``RGB`` object itself does not have any connection to the plotting library, and stores no data about the plot, just its own data, which is sufficient for the external plotting routines to visualize the data usefully and meaningfully. And the same plot can be [generated outside of IPython](Options.ipynb) just as easily, e.g. to save as a ``.png`` or ``.svg`` file.

Because ``rgb_parrot`` is just our actual data, it can be composed with other objects, pickled, and analyzed as-is. For instance, we can still access the underlying Numpy array easily via the ``.data`` attribute, and can verify that it is indeed our actual data:

```
rgb_parrot.data is parrot
```

Note that this is generally true throughout HoloViews; if you pass a HoloViews element a Numpy array of the right shape, the ``.data`` attribute will simply be a reference to the data you supplied. If you use an alternative data format when constructing an element, such as a Python list, a Numpy array of the appropriate type will be created and made available through the ``.data`` attribute. You can always use the identity check demonstrated above if you want to make absolutely sure your raw data is being used directly. As you compose these objects together, you will see that a complex visualization is not simply a visual display, but a rich data structure containing all the raw data or analyzed data ready for further manipulation and analysis.

### Viewing individual color channels <a id="channels">

For some image analysis purposes, working in RGB colour space is too limiting. It is often more flexible to work with a single N&times;M array at a time and visualize the data in each channel using a colormap. To do this we need the [Image](Elements.ipynb#Image) ``Element`` instead of the ``RGB`` ``Element``. To illustrate, let's start by visualizing the total luminance across all the channels of the parrot image, choosing a specific colormap using the HoloViews ``%%opts`` IPython [cell magic](http://ipython.org/ipython-doc/dev/interactive/tutorial.html#magic-functions). ``%%opts Image`` allows us to pass plotting hints to the underlying visualization code for ``Image`` objects:

```
%%opts Image style(cmap='coolwarm')
luminance = hv.Image(parrot.sum(axis=2), label='Summed Luminance')
luminance
```

As described in the [Options](Options.ipynb) tutorial, the same options can also be specified in pure Python, though not quite as succinctly. The resulting plot is what we would expect: dark areas are shown in blue and bright areas are shown in red.
Notice how the plotting hints (your desired colormap in this case) are kept entirely separate from your actual data, so that the Image data structure contains only your actual data and the metadata that describes it, not incidental information like matplotlib or Bokeh options.

We will now set the default colormap to grayscale for all subsequent cells using the ``%opts`` command, and look at a single color channel by building an appropriate ``Image`` element:

```
%opts Image style(cmap='gray')
red = hv.Image(parrot[:,:,0], label='Red')
red
```

Here we created the red ``Image`` directly from the NumPy array ``parrot``. You can also make a lower-dimensional HoloViews component by slicing a higher-dimensional one. For instance, now we will combine this manually constructed red channel with green and blue channels constructed by slicing the ``rgb_parrot`` ``RGB`` HoloViews object to get the appropriate ``Image`` objects:

```
channels = red + rgb_parrot[:,:,'G'].relabel('Green') + rgb_parrot[:,:,'B'].relabel('Blue')
channels
```

Here we have combined these three HoloViews objects using the compositional operator ``+`` to create a new object named ``channels``. When ``channels`` is displayed by the IPython/Jupyter notebook, each ``Image`` is shown side by side, with appropriate labels. In this format, you can see that the parrot looks quite distinctly different in the red channel than in the green and blue channels.

Note that the ``channels`` object isn't merely a *display* of three elements side by side, but a new composite object of type ``Layout`` containing our three ``Image`` objects:

```
print(channels)
```

Here the syntax ``:Layout`` means that this is an object of Python type ``Layout``. The entries below it show that this ``Layout`` contains three items accessible via different attributes of the ``channels`` object, i.e. ``channels.Image.Red``, ``channels.Image.Green``, and ``channels.Image.Blue``. The string ``:Image`` indicates that each object is of type ``Image``. The attributes provide an easy way to get to the objects inside ``channels``. In this case, the attributes were set by the ``+`` operator when it created the ``Layout``, with the first level from the optional ``group`` of each object (discussed below, here inheriting from the type of the object), and the second from the ``label`` we defined for each object above. As for other HoloViews objects, tab completion is provided, so you can type ``channels.I<TAB>.B<TAB>`` to get the Blue channel:

```
channels.Image.Blue
```

You can see that for each item in the ``Layout``, using ``print()`` shows us its Python type (``Image``) and the attributes needed to access it (``.Image.Blue`` in this case). The remaining items in brackets and parentheses in the printed representation will be discussed below.

Since we can access the channels easily, we can recompose our data how we like, e.g. to compare the Red and Blue channels side by side:

```
channels.Image.Red + channels.Image.Blue
```

Notice how the labels we have set are useful both for the titles and for the indexing, and are thus not simply plotting-specific details -- they are semantically meaningful metadata describing this data. There are also sublabels **A** and **B** generated automatically for each subfigure; these can be [changed or disabled](Containers.ipynb#subfigure-labels) if you prefer.
```
%%opts Layout [sublabel_format="{alpha}"]
channels.Image.Red + channels.Image.Green + channels.Image.Blue
```

## Grouping into ``Layouts`` <a id='Layouts'></a> <a id='value'></a>

You may wonder what the "``.Image``" is doing in the middle of the indexing above. This is the **``group``** name which, even though we haven't set it directly in this case, is as important a concept as the label. All HoloViews objects come with both a ``group`` and a ``label``, which allows you to specify both what *kind* of thing the object is (its ``group``), and which specific one it is (the ``label``). These values will be used to construct subfigure titles, to allow you to access the object by name in containers, and to allow you to set [options](Options.ipynb) for specific objects or for groups of them.

``Group``s and ``label``s can both be set to any Python string, including strings containing spaces and special characters. The ``label`` is an arbitrary name you can use for this data item. The ``group`` is meant to describe the category or the semantic type of the data. By default, the group is the same as the name of the HoloViews element type, in this case ``Image``:

```
channels.Image.Blue.group
```

The group is an extremely useful mechanism that allows you to easily structure your data in ways that are semantically meaningful for you. As we noted above, the red channel is the most clearly different from the other two, and we can separate it from the other two channels if we wish by giving them two different groups:

```
chans = (hv.Image(parrot[:,:,0], group='RedChannel', label='Macaw') +
         hv.Image(parrot[:,:,1], group='Channel', label='Green') +
         hv.Image(parrot[:,:,2], group='Channel', label='Blue'))
```

The red channel is given its own special group ``'RedChannel'`` while the other two channels are grouped under the generic ``Channel``. The non-Red channels can now be accessed as a group:

```
print(chans.Channel)
chans.Channel
```

And now we can access the interesting red channel via its own group:

```
chans.RedChannel
```

Of course, you can still access the other two channels individually using ``chans.Channel.Green`` and ``chans.Channel.Blue`` respectively; with enough attributes provided you can always get down to the individual, ungrouped objects.

In any case, the reason that there are two levels of indexing here is simply because that's what the ``+`` operator does by default, i.e. it looks up the ``group`` and ``label`` information that every HoloViews object has, and uses those to name the attributes that let you access the object. But the ``Layout`` object is actually a tree, not a fixed two-level structure, and it allows you to store your objects in any tree shape that you prefer. This structure can be used to set up any grouping arrangement that reflects how you want to use your data, such as to make it easy to select certain subsets for plotting or for special operations like setting their plot options or running analyses on them. For instance, if you really wanted to, you could insert a new item arbitrarily deeply nested into your ``Layout``:

```
chans.RedChannel.OriginalData.StoredForSafeKeeping = rgb_parrot
print(chans)
chans
```

You can see that ``rgb_parrot`` is still titled with its internal label "Macaw", even though it's been inserted into a custom location in the ``Layout`` tree; the attributes are only used for grouping and accessing the objects from Python, not for plotting purposes, and so you can use them however you see fit to organize your own data.
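If you later decide that the red channel belongs with the others after all, you do not need to rebuild the data; as a minimal sketch (assuming the ``relabel`` method, which accepts new ``group`` and ``label`` values in recent HoloViews versions), you could file a copy of it under the generic ``Channel`` group:

```python
# Make a copy of the red channel that lives under the generic 'Channel' group,
# then lay all three 'Channel' images out side by side.
red_as_channel = chans.RedChannel.Macaw.relabel(group='Channel', label='Red')
regrouped = red_as_channel + chans.Channel.Green + chans.Channel.Blue
print(regrouped)
```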
The two-level structure provided by the ``+`` operator is just a useful default, and should be all that many HoloViews users will need. Note that the attributes constructed using ``+`` are sanitized -- spaces are converted to ``_``, special characters are represented specially, and so on -- just use ``[TAB]`` completion (or ``print()``) to find out the correct attribute name in unusual cases. So you can use any characters you like in the label and group, and there will still be usable names for attribute selection and options processing.

## Accessing your data

Apart from the attribute access and types, the other items in the printed representation show the dimensions of the ``Image`` objects:

```
print(chans)
```

The key dimensions are shown in brackets ``[x,y]``. Each of these objects can be indexed by the 2-dimensional (x,y) location in the image:

```
chans.Channel.Blue[0.0,0.0]
```

The value dimensions, shown in parentheses in the ``print()`` output, tell you what data will be returned by applying the above indexing over the key dimensions, in this case the single value for the luminance of the blue pixel at that location. The generic default name for that dimension of an ``Image`` is ``z``, but you can use another name for it if you prefer:

```
chans.Channel.Blue = chans.Channel.Blue.clone(vdims=["Luminance"])
print(chans.Channel.Blue)
```

And of course, if there is more than one value dimension, as there is for the ``RGB`` object ``rgb_parrot`` that we stored in there, you will get all of them back when you index, and then you can further index to select a single dimension:

```
print(rgb_parrot)
print(rgb_parrot[0,0])
print(rgb_parrot[0,0].iloc[0, 0])
```

To summarize, a ``Layout`` constructed by ``+`` will be organized into two levels by default, i.e. ``group`` and ``label``, which is sufficient in many cases. But if you have a more complicated hierarchical collection of different types of data, you can combine it all into a custom-organized ``Layout`` tree structure that respects the semantic categories that characterize your particular set of data. Once these categories have been set up, you can then very easily select appropriate sets of your data for display or further analysis. As you can see, a ``Layout`` is an incredibly convenient and versatile way of collecting even a huge and complex collection of data together, ready to explore easily.

## Grouping into ``Overlays``

Putting two components ([Elements](Elements.ipynb) or [Containers](Containers.ipynb)) side by side into a ``Layout`` using ``+`` is one of the most common operations in HoloViews, and works with any possible component type. But there is another compositional operator ``*`` that is also very useful for creating complex visualizations, by overlaying components on top of each other. Nearly all components can be overlaid as well, except for a ``Layout``; a ``Layout`` can contain ``Overlays``, but never the other way around.

### Pointing to our parrot

One type of element designed specifically for overlaying is the [annotation](Elements.ipynb#Annotations). Here we use the [Arrow Element](Elements.ipynb#Arrow) to label our parrot, overlaying it on the original ``RGB`` object with the overlay (``*``) operator:

```
extents = (-0.5, -0.5, 0.5, 0.5)  # Image spatial extents
o = rgb_parrot * hv.Arrow(-0.1,0.2, 'Polly', '>', extents=extents)
o
```

An overlay is a compositional data structure, just like ``Layout`` (it is in fact a subclass!). This means the same attribute-access and grouping semantics apply.
To illustrate, we can index our overlay to pull it apart and lay the two components side by side:

```
o + o.RGB.Macaw + o.Arrow.I
```

Note that when there is no label available for an object in a ``Layout`` or ``Overlay``, HoloViews will generate an appropriate Roman numeral identifier for indexing. In this case we index our arrow using ``Arrow.I``. Naturally, ``Overlays`` may themselves be elements of a ``Layout``, as at left above.

### Overlaying contours

Overlays may be simple annotations as demonstrated above, but often they can contain significant volumes of important data. To demonstrate, we will introduce the concept of **operations** and the [``Contours``](Elements.ipynb#Contours) ``Element``:

```
from holoviews.operation import contours
```

This operation takes an ``Image`` as input and generates an overlay for us, where our original input is returned with contour lines overlaid on top. Let's have a look at the 10% (darkest) and 80% (brightest) levels of the red channel:

```
contours(chans.RedChannel.Macaw, levels=[0.10,0.80])
```

## Animations and slider bars

The final topic for the introduction is animations. Animation relies on a powerful multidimensional data container called a ``HoloMap``, which is described in detail in the [Exploring Data](Exploring_Data.ipynb) tutorial. Here, as a brief illustration, we show how to build three ``HoloMap``s from sets of ``Images`` overlaid with ``Contours``, computed for a list of different threshold levels applied to the above image.

As you can see in the plot above, a large number of threshold levels would be very difficult to include in a single plot. In such a case, one could lay them all out side by side, but here we show how to combine them into three ``HoloMap`` objects that support animation, whether viewed separately or as part of the same ``Layout``:

```
%%opts Contours.Red (color=Palette('Reds')) Contours.Green (color=Palette('Greens')) Contours.Blue (color=Palette('Blues'))
data = {lvl:(contours(chans.RedChannel.Macaw, levels=[lvl], group='Red') +\
             contours(chans.Channel.Green, levels=[lvl], group='Green') +\
             contours(chans.Channel.Blue, levels=[lvl], group='Blue'))
        for lvl in np.linspace(0.1,0.9,9)}
levels = hv.HoloMap(data, kdims=['Levels']).collate()
levels
```

The ``levels`` object here is a ``Layout``, just as in the other examples above, but it is displayed as an animation because it happens to contain three ``HoloMaps`` that have an additional dimension ``Levels`` beyond what has been laid out spatially in each image. There's no other special implementation necessary to get animations; they appear automatically whenever there are these additional dimensions in a ``HoloMap`` that haven't been sliced, sampled, or reduced down enough to fit into a single plot.

Your data, as always, remains available within the object, if you later want to pull out portions of it to display without an animation:

```
green05 = levels.Overlay.Green[0.5]
green05 + green05.Channel + green05.Channel.Green.sample(y=0.0)
```

Hopefully you now understand the basic concepts of HoloViews. From here, it's worth checking out the full features of the [HoloMap](Exploring_Data.ipynb) component, as well as all the other types of [elements](Elements.ipynb) and [containers](Containers.ipynb). Have fun!
# 4-Flatmapping

This tutorial demonstrates how to split PDB structures into subcomponents or create biological assemblies. In Spark, a flatMap transformation splits each data record into zero or more records.

### Import pyspark and mmtfPyspark

```
from pyspark.sql import SparkSession
from mmtfPyspark.io import mmtfReader
from mmtfPyspark.filters import ContainsDnaChain
from mmtfPyspark.mappers import StructureToBioassembly, StructureToPolymerChains, StructureToPolymerSequences
from mmtfPyspark.structureViewer import view_structure
from mmtfPyspark.utils import traverseStructureHierarchy
import py3Dmol
```

### Configure Spark

```
spark = SparkSession.builder.master("local[4]").appName("4-Flatmapping").getOrCreate()
```

## Read PDB structures

In this example we download the hemoglobin structure 4HHB, consisting of two alpha subunits and two beta subunits.

```
quaternary = mmtfReader.download_reduced_mmtf_files(["4HHB"])
view_structure(quaternary.keys().collect());
```

## Flatmap by protein sequence

Here we extract the polymer sequences using a flatMap transformation. Chains A and C (alpha subunits) and chains B and D (beta subunits) have identical sequences, respectively.

```
sequences = quaternary.flatMap(StructureToPolymerSequences())
sequences.take(4)
```

## Flatmap structures

A flatMap operation splits data records into zero or more records. Here, we use the StructureToPolymerChains class to flatMap a PDB entry (quaternary structure) to its polymer chains (tertiary structure). Note that the chain Id is appended to the PDB Id. The two alpha subunits are 4HHB.A and 4HHB.C, and the beta subunits are 4HHB.B and 4HHB.D.

```
tertiary = quaternary.flatMap(StructureToPolymerChains())
tertiary.keys().collect()
view_structure(tertiary.keys().collect());
```

For some analyses we may only need one copy of each unique subunit (identical polymer sequence). This can be done by setting excludeDuplicates = True.

```
tertiary = quaternary.flatMap(StructureToPolymerChains(excludeDuplicates=True))
tertiary.keys().collect()
```

### Combine FlatMap with Filter

The filter operations we used previously for whole structures can also be applied to single polymer chains. Here we flatMap PDB structures into polymer chains and then select DNA chains.

```
path = "../resources/mmtf_reduced_sample"

dna_chains = mmtfReader \
    .read_sequence_file(path) \
    .flatMap(StructureToPolymerChains(excludeDuplicates=True)) \
    .filter(ContainsDnaChain())

view_structure(dna_chains.keys().collect());
```

## FlatMap PDB structures to Biological Assemblies

### Read the asymmetric unit

In this example we read the asymmetric unit of 1STP (Complex of Biotin with Streptavidin).

```
asymmetric_unit = mmtfReader.download_full_mmtf_files(["1STP"])
```

Print some summary data about this structure:

```
traverseStructureHierarchy.print_structure_data(asymmetric_unit.first())
```

### Create the biological assembly from the asymmetric unit

Now, we use a flatMap operation to map an asymmetric unit to one or more biological assemblies. In the case of 1STP, there is only one biological assembly, which represents a tetramer.

```
bio_assembly = asymmetric_unit.flatMap(StructureToBioassembly())
bio_assembly.first()[0]
```

As you can see, the biological assembly contains 4 copies of the asymmetric unit.

```
traverseStructureHierarchy.print_structure_data(bio_assembly.first())
```

### Shown below is the bioassembly for 1STP (tetramer)

```
view_structure(["1STP"], bioAssembly=True);
spark.stop()
```
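As an aside, if the flatMap idea itself is new to you, the following minimal sketch (independent of mmtfPyspark, using a throwaway local Spark session) shows how each input record can yield zero, one, or several output records:

```python
from pyspark.sql import SparkSession

# A separate, short-lived session just for this illustration
demo = SparkSession.builder.master("local[1]").appName("flatmap-demo").getOrCreate()

records = demo.sparkContext.parallelize(["4HHB.A 4HHB.B", "1STP.A", ""])

# flatMap splits each record into zero or more records:
# the first record yields two ids, the second yields one, and the empty string yields none.
print(records.flatMap(lambda line: line.split()).collect())
# ['4HHB.A', '4HHB.B', '1STP.A']

demo.stop()
```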
```
import pandas as pd
import numpy as np

pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', 8)
pd.set_option('display.max_rows', 8)
```

# Basics of Pandas

The ```pandas``` package provides a comprehensive set of data structures for working with and manipulating data and performing various statistical and financial analyses. The two primary data structures we will use are ```Series``` and ```DataFrame```.

## The Series

The ```Series``` is the primary building block of pandas and represents a one-dimensional labeled array based on the ```NumPy ndarray```. A ```Series``` has index labeling, which makes it more usable than a plain ```NumPy ndarray```. A ```Series``` can hold zero or more instances of any single data type. However, a ```Series``` can only associate a single value with any given index label; this limitation is addressed by the ```DataFrame```.

## The DataFrame

A ```DataFrame``` can be thought of as a dictionary-like container of one or more ```Series``` objects, as a spreadsheet, or, probably the best description for those new to pandas, as something comparable to a relational database table. A ```DataFrame``` holds multiple ```Series``` as its columns, automatically aligning the values in each column along the index labels of the ```DataFrame```. A ```DataFrame``` also introduces the concept of an axis, which you will often see in the pandas documentation and in many of its methods. A ```DataFrame``` has two axes, vertical (the rows) and horizontal (the columns).

```
# Creating Series using Lists
lst = list("Enes Kemal")
s = pd.Series(lst)
s

# Creating Series using Dictionary
dic = {'a':1, 'b':2, 'c':3, 'd':4, 'e':5}
s = pd.Series(dic)
s

# Creating Series using NumPy random
s = pd.Series(np.random.randn(100))
s

# Access elements using [] operator:
s[2]
s[[2, 5, 20]]  # Access specific locations by passing a list of indices

# Slicing is possible like we are doing with lists
s[3:8]

# Examining the series data with .head(), .tail()
s.head()
s.tail()

# Index of a Series can be retrieved using .index
print(list(s.index))

# Values are retrieved with .values
s.values

# Creating a Series with both index and values passed
s2 = pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])
s2

# Returning the length of a Series
len(s)

# Return the dimensionality of a Series
s.shape

# .count() also returns the number of elements in a Series, but NaN values are not counted
s = pd.Series([10, 0, 1, 1, 2, 3, 4, 5, 6, np.nan])
print(len(s))
print(s.count())

# Returning all the unique values using .unique()
s.unique()

# Count of each value:
s.value_counts()
```

### Creating a DataFrame

```
# Create a DataFrame by passing Series
df1 = pd.DataFrame([pd.Series(np.arange(10, 15)), pd.Series(np.arange(15, 20))])
df1

df1.shape

# Another way to create a DataFrame is by using a NumPy array
df = pd.DataFrame(np.array([[10, 11], [20, 21]]), columns=['a', 'b'])
df

df.columns

# We can rename the columns after the DataFrame is created
df.columns = ['c1', 'c2']
df

# We can also specify the index while creating
df = pd.DataFrame(np.array([[0, 1], [2, 3]]), columns=['c1', 'c2'], index=['r1', 'r2'])
df

# We'll show the index
df.index

df.values

# Pandas will fill the gaps with NaN
s1 = pd.Series(np.arange(1, 6, 1))
s2 = pd.Series(np.arange(6, 11, 1))
s3 = pd.Series(np.arange(12, 14), index=[1, 2])
pd.DataFrame({'c1': s1, 'c2': s2, 'c3': s3})
```
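The axis concept mentioned above is easiest to see with a small example; this sketch simply reuses the 2&times;2 frame from above and sums it along each axis:

```python
# axis=0 operates down the rows (one result per column),
# axis=1 operates across the columns (one result per row).
df = pd.DataFrame(np.array([[0, 1], [2, 3]]), columns=['c1', 'c2'], index=['r1', 'r2'])
print(df.sum(axis=0))  # column sums: c1 -> 2, c2 -> 4
print(df.sum(axis=1))  # row sums:    r1 -> 1, r2 -> 5
```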
# Creating and grading assignments

This guide walks an instructor through the workflow for generating an assignment and preparing it for release to students.

## Accessing the formgrader extension

The formgrader extension provides the core access to nbgrader's instructor tools. After the extension has been installed, you can access it through the tab in the notebook list:

![](images/formgrader_tab.png)

## Creating a new assignment

### From the formgrader

To create a new assignment, open the formgrader extension and click the "Add new assignment..." button at the bottom of the page. This will ask you to provide some information such as the name of the assignment and its due date. Then, you can add files to the assignment and edit them by clicking the name of the assignment:

![](images/manage_assignments1.png)

### From the command line

To simplify this example, two notebooks of the assignment have already been stored in the `source/ps1` folder:

* [source/ps1/problem1.ipynb](source/ps1/problem1.ipynb)
* [source/ps1/problem2.ipynb](source/ps1/problem2.ipynb)

## Developing assignments with the assignment toolbar

**Note**: As you are developing your assignments, you should save them into the `source/{assignment_id}/` folder of the nbgrader hierarchy, where `assignment_id` is the name of the assignment you are creating (e.g. "ps1").

Once the toolbar has been installed, you should see it in the drop down "View -> Cell Toolbar" menu:

![](images/assignment_toolbar.png)

Selecting the "Create Assignment" toolbar will create a separate toolbar for each cell, which by default will be a dropdown menu with the "-" item selected. For markdown cells, there are two additional options to choose from, either "Manually graded answer" or "Read-only":

![](images/markdown_cell.png)

For code cells, there are four options to choose from, including "Manually graded answer", "Autograded answer", "Autograder tests", and "Read-only":

![](images/code_cell.png)

The following sections go into detail about the different cell types, and show cells that are taken from a complete example of an assignment generated with the nbgrader toolbar extension:

- [source/ps1/problem1.ipynb](source/ps1/problem1.html)
- [source/ps1/problem2.ipynb](source/ps1/problem2.html)

### "Manually graded answer" cells

If you select the "Manually graded answer" option (available for both markdown and code cells), the nbgrader extension will mark that cell as a cell that contains an answer that must be manually graded by a human grader. Here is an example of a manually graded answer cell:

![](images/manually_graded_answer.png)

The most common use case for this type of cell is for written free-response answers (for example, answers that interpret the results of code that may have been written and/or executed above).

*Note: the blue border only shows up when the nbgrader extension toolbar is active; it will not be visible to students.*

### “Manually graded task” cells

If you select the “Manually graded task” option (available for markdown cells), the nbgrader extension will mark that cell as a cell that contains the description of a task that students have to perform. These cells must be manually graded by a human grader. Here is an example of a manually graded task cell:

![](images/task-cell-source.png)

The difference with a manually graded answer is that the manually graded task cells are not edited by the student. A manually or automatically graded cell asks students to perform a task *in* one cell.
A manually graded task asks students to perform a task *with* cells. The common use case for this type of cell is for tasks that require the student to create several cells such as "Process the data and create a plot to illustrate your results." or to contain notebook-wide tasks such as "adhere to the PEP8 style convention." *Note: the blue border only shows up when the nbgrader extension toolbar is active; it will not be visible to students.* ### “Manually graded task” cells with mark scheme A mark scheme can be created through the use of a special syntax such as ``=== BEGIN MARK SCHEME ===`` and ``=== END MARK SCHEME ===``. The section of text between the two markers will be removed from the student version, but will be visible at the grading stage and in the feedback. ### "Autograded answer" cells If you select the "Autograded answer" option (available only for code cells), the nbgrader extension will mark that cell as a cell that contains an answer which will be autograded. Here is an example of an autograded graded answer cell: ![](images/autograded_answer.png) Unlike manually graded answers, autograded answers aren't worth any points: instead, the points for autograded answers are specified for the particular tests that grade those answers. See the next section for further details. *Note: the blue border only shows up when the nbgrader extension toolbar is active; it will not be visible to students.* ### "Autograder tests" cells If you select the "Autograder tests" option (available only for code cells), the nbgrader extension will mark that cell as a cell that contains tests to be run during autograding. Here is an example of two test cells: ![](images/autograder_tests.png) The lock icon on the left side of the cell toolbar indicates that the tests are "read-only". See the next section for further details on what this means. *Note: the blue border only shows up when the nbgrader extension toolbar is active; it will not be visible to students.* ### "Autograder tests" cells with hidden tests Tests in "Autograder tests" cells can be hidden through the use of a special syntax such as ``### BEGIN HIDDEN TESTS`` and ``### END HIDDEN TESTS``, for example: ![](images/autograder_tests_hidden_tests.png) ### "Read-only" cells If you select the "Read-only" option (available for both code and markdown cells), the nbgrader extension will mark that cell as one that cannot be modified. This is indicated by a lock icon on the left side of the cell toolbar: ![](images/read_only.png) This functionality is particularly important for test cells, which are always marked as read-only. Because the mechanism for autograding is that students receive full credit if the tests pass, an easy way to get around this would be to simply delete or comment out the tests. This read-only functionality will reverse any such changes made by the student. ## Validating the instructor version ### From the validate extension Ideally, the solutions in the instructor version should be correct and pass all the test cases to ensure that you are giving your students tests that they can actually pass. 
To verify this is the case, you can use the validate extension:

![](images/validate_extension.png)

If your assignment passes all the tests, you'll get a success pop-up:

![](images/validate_success.png)

If it doesn't pass all the tests, you'll get a message telling you which cells failed:

![](images/validate_failed.png)

### From the command line

You can also validate assignments on the command line using the `nbgrader validate` command:

```
%%bash
nbgrader validate source/ps1/*.ipynb
```

## Generate and release an assignment

### From the formgrader

After an assignment has been created with the assignment toolbar, you will want to generate the version that students will receive. You can do this from the formgrader by clicking the "generate" button:

![](images/manage_assignments2.png)

This should succeed with a pop-up window containing log output:

![](images/generate_assignment.png)

### From the command line

When generating the assignment from the command line, the source version of each notebook is read from:

```
{course_directory}/source/{assignment_id}/{notebook_id}.ipynb
```

Note: The `student_id` is not included here because the source and release versions of the assignment are the same for all students. After running `nbgrader generate_assignment`, the release version of the notebooks will be:

```
{course_directory}/release/{assignment_id}/{notebook_id}.ipynb
```

As a reminder, the instructor is responsible for distributing this release version to their students using their institution's existing student communication and document distribution infrastructure.

When running `nbgrader generate_assignment`, the assignment name (which is "ps1") is passed. We also specify a *header* notebook (`source/header.ipynb`) to prepend at the beginning of each notebook in the assignment. By default, this command should be run from the root of the course directory:

```
%%bash
nbgrader generate_assignment "ps1" --IncludeHeaderFooter.header=source/header.ipynb --force
```

## Preview the student version

After generating the student version of the assignment, you should preview it to make sure that it looks correct. You can do this from the formgrader extension by clicking the "preview" button:

![](images/manage_assignments3.png)

Under the hood, there will be a new folder called `release` with the same structure as `source`. The `release` folder contains the actual release version of the assignment files:

* [release/ps1/problem1.ipynb](release/ps1/problem1.ipynb)
* [release/ps1/problem2.ipynb](release/ps1/problem2.ipynb)

If you are working on the command line, you may want to formally verify the student version as well. Ideally, all the tests should fail in the student version if the student hasn't implemented anything. To verify that this is in fact the case, we can use the `nbgrader validate --invert` command:

```
%%bash
nbgrader validate --invert release/ps1/*.ipynb
```

If the notebook fails all the test cases, you should see the message "Success! The notebook does not pass any tests."

## Releasing files to students and collecting submissions

After students have worked on the released assignment, their submissions must be collected into the following directory structure before they can be autograded:

```
submitted/{student_id}/{assignment_id}/{notebook_id}.ipynb
```

**Please note**: Students must use version 3 or greater of the IPython/Jupyter notebook for nbgrader to work properly. If they are not using version 3 or greater, it is possible for them to delete cells that contain important metadata for nbgrader. With version 3 or greater, there is a feature in the notebook that prevents cells from being deleted. See [this issue](https://github.com/jupyter/nbgrader/issues/424) for more details.
To ensure that students have a recent enough version of the notebook, you can include a cell such as the following in each notebook of the assignment:

```python
import IPython
assert IPython.version_info[0] >= 3, "Your version of IPython is too old, please update it."
```

## Autograde assignments

In the following example, we have an assignment with two notebooks. There are two submissions of the assignment:

Submission 1:

* [submitted/bitdiddle/ps1/problem1.ipynb](submitted/bitdiddle/ps1/problem1.ipynb)
* [submitted/bitdiddle/ps1/problem2.ipynb](submitted/bitdiddle/ps1/problem2.ipynb)

Submission 2:

* [submitted/hacker/ps1/problem1.ipynb](submitted/hacker/ps1/problem1.ipynb)
* [submitted/hacker/ps1/problem2.ipynb](submitted/hacker/ps1/problem2.ipynb)

### From the formgrader

You can autograde individual submissions from the formgrader directly. To do so, click on the number of submissions in the "Manage Assignments" view:

![](images/manage_assignments4.png)

This will take you to a new page where you can see all the submissions. For a particular submission, click the "autograde" button to autograde it:

![](images/manage_submissions1.png)

After autograding completes, you will see a pop-up window with log output:

![](images/autograde_assignment.png)

And back on the submissions screen, you will see that the status of the submission has changed to "needs manual grading" and there is now a reported score as well:

![](images/manage_submissions2.png)

### From the command line

We can run the autograder for all students at once from the command line:

```
%%bash
nbgrader autograde "ps1" --force
```

When grading the submission for `Bitdiddle`, you'll see some warnings that look like "Checksum for grade cell correct_squares has changed!". What's happening here is that nbgrader recorded the *original* contents of the grade cell `correct_squares` (when `nbgrader generate_assignment` was run), and is checking the submitted version against this original version. It has found that the submitted version changed (perhaps this student tried to cheat by commenting out the failing tests), and has therefore overwritten the submitted version of the tests with the original version of the tests.

You may also notice that there is a note saying "ps1 for Bitdiddle is 21503.948203 seconds late". What is happening here is that nbgrader is detecting a file in Bitdiddle's submission called `timestamp.txt`, reading in that timestamp, and saving it into the database. From there, it can compare the timestamp to the due date of the problem set, and compute whether the submission is at all late.

Once the autograding is complete, there will be new directories for the autograded versions of the submissions:

```
autograded/{student_id}/{assignment_id}/{notebook_id}.ipynb
```

Autograded submission 1:

* [autograded/bitdiddle/ps1/problem1.ipynb](autograded/bitdiddle/ps1/problem1.ipynb)
* [autograded/bitdiddle/ps1/problem2.ipynb](autograded/bitdiddle/ps1/problem2.ipynb)

Autograded submission 2:

* [autograded/hacker/ps1/problem1.ipynb](autograded/hacker/ps1/problem1.ipynb)
* [autograded/hacker/ps1/problem2.ipynb](autograded/hacker/ps1/problem2.ipynb)

## Manual grading

After running `nbgrader autograde`, the autograded version of the notebooks will be:

    autograded/{student_id}/{assignment_id}/{notebook_id}.ipynb

We can manually grade assignments through the formgrader as well, by clicking on the "Manual Grading" navigation button.
This will provide you with an interface for hand grading assignments that it finds in the directory listed above. Note that this applies to *all* assignments as well -- as long as the autograder has been run on the assignment, it will be available for manual grading via the formgrader.

## Generate feedback on assignments

Feedback is generated from the graded notebooks located at:

```
autograded/{student_id}/{assignment_id}/{notebook_id}.ipynb
```

Creating feedback for students is divided into two parts:

* generate feedback
* release feedback

Generating feedback will create HTML files in the local instructor directory. Releasing feedback will copy those HTML files to the nbgrader exchange.

We can generate feedback based on the graded notebooks by running the `nbgrader generate_feedback` command, which will produce HTML versions of these notebooks at the following location:

```
feedback/{student_id}/{assignment_id}/{notebook_id}.html
```

The `nbgrader generate_feedback` functionality is also available by clicking the Generate Feedback button on either the Manage Assignments view (to generate feedback for all graded submissions), or on the individual student's Manage Submission page (to generate feedback for that specific individual).

We can release the generated feedback by running the `nbgrader release_feedback` command, which will send the generated HTML files to the nbgrader exchange. The `nbgrader release_feedback` functionality is also available by clicking the Release Feedback button on either the Manage Assignments view (to release all generated feedback), or on the individual student's Manage Submission page (to release feedback for that specific individual).

### Workflow example: Instructor returning feedback to students

In some scenarios, you may not want to (or be able to) use the exchange to deliver student feedback. This section describes a workflow for manually returning generated feedback.

In the following example, we have an assignment with two notebooks. There are two submissions of the assignment that have been graded:

Autograded submission 1:

* [autograded/bitdiddle/ps1/problem1.ipynb](autograded/bitdiddle/ps1/problem1.ipynb)
* [autograded/bitdiddle/ps1/problem2.ipynb](autograded/bitdiddle/ps1/problem2.ipynb)

Autograded submission 2:

* [autograded/hacker/ps1/problem1.ipynb](autograded/hacker/ps1/problem1.ipynb)
* [autograded/hacker/ps1/problem2.ipynb](autograded/hacker/ps1/problem2.ipynb)

Generating feedback is fairly straightforward (and as with the other nbgrader commands for instructors, this must be run from the root of the course directory):

```
%%bash
nbgrader generate_feedback "ps1"
```

Once the feedback has been generated, there will be new directories and HTML files corresponding to each notebook in each submission:

Feedback for submission 1:

* [feedback/bitdiddle/ps1/problem1.html](feedback/bitdiddle/ps1/problem1.html)
* [feedback/bitdiddle/ps1/problem2.html](feedback/bitdiddle/ps1/problem2.html)

Feedback for submission 2:

* [feedback/hacker/ps1/problem1.html](feedback/hacker/ps1/problem1.html)
* [feedback/hacker/ps1/problem2.html](feedback/hacker/ps1/problem2.html)

If the exchange is available, one would of course use `nbgrader release_feedback`. However, if it is not available, you can now deliver these generated HTML feedback files via whatever mechanism you wish.

## Getting grades from the database

In addition to creating feedback for the students, you may need to upload grades to whatever learning management system your school uses (e.g. Canvas, Blackboard, etc.).
nbgrader provides a way to export grades to CSV out of the box, with the `nbgrader export` command: ``` %%bash nbgrader export ``` After running `nbgrader export`, you will see the grades in a CSV file called `grades.csv`: ``` %%bash cat grades.csv ```
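If your learning management system expects a different layout than the one `nbgrader export` produces, the CSV can be reshaped with pandas before uploading. The sketch below is only illustrative: the column names `student_id`, `assignment`, and `score` are assumptions, not necessarily the columns nbgrader actually writes, so inspect your `grades.csv` and adjust accordingly.

```python
import pandas as pd

grades = pd.read_csv("grades.csv")

# Hypothetical column names -- check grades.csv and rename as needed.
table = grades.pivot_table(index="student_id", columns="assignment", values="score")
table.to_csv("grades_for_lms.csv")
```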
```
import numpy as np
import math
```

Microscope: [Zeiss z.1](https://applications.zeiss.com/C125792900358A3F/0/4D1D8D177F06CDF4C1257A940041002D/$FILE/EN_41_011_005_LightsheetZ1_rel2-3.pdf)

```
# Sample
sample_size = [5, 5, 10]  # z, y, x (mm)

# Camera
sensor_size = [1920, 1920]  # pixels
ps = 6.5  # camera pixel size (um)

# Motion stage
stage_axial_max_velocity = 2  # mm/s
stage_axial_max_acceleration = 1e3  # mm/s^2
stage_axial_resolution = 200e-6  # mm
stage_axial_settle_time = 0.2  # s

# Illumination
detection_wavelength = 0.53  # um

# Objective
system_mag = 1.25  # unitless (0.36× – 2.5×, continuous)
objective_name = '5x'
objectives = {'5x' : {'na' : 0.16, 'mag' : 5},
              '10x': {'na' : 0.5, 'mag' : 20},
              '20x': {'na' : 1.0, 'mag' : 20},
              '40x': {'na' : 1.0, 'mag' : 40},
              '63x': {'na' : 1.0, 'mag' : 64}}

# Get objective parameters
mag = objectives[objective_name]['mag']
na = objectives[objective_name]['na']
effective_pixel_size = ps / (mag * system_mag)

# Axial Scan Parameters
axial_scan_overlap_factor = 0.6  # Amount (0 to 1) of overlap between frames, relative to PSF size
axial_scan_axis = 0

# Lateral Scan Parameters
lateral_scan_axes = [1, 2]  # y, x
lateral_scan_overlap_factor = 0.2

camera_readout_time = 0.017  # s
camera_exposure_time = 0.1  # s
illumination_min_update_speed = 10e-6  # s

# Check camera sampling
k_max_optical = na / detection_wavelength
k_max_sensor = 1 / (2 * effective_pixel_size)
assert k_max_sensor > k_max_optical, "Maximum optical spatial frequency (%.3fum^{-1}) is greater than the system bandwidth (%.3fum^{-1})" % (k_max_optical,k_max_sensor)
```

## Calculate Volumetric Parameters

```
# Calculate pixel size and resolution
lateral_resolution = detection_wavelength / na
axial_resolution = detection_wavelength / (na ** 2)
print('Lateral resolution is %.4fum, axial is %.4fum' % (lateral_resolution, axial_resolution))

# Calculate number of planes in axial scan
axial_scan_increment = axial_resolution * 1e-3 * (1 - axial_scan_overlap_factor)
axial_plane_count = sample_size[axial_scan_axis] / axial_scan_increment
print('Axial scan will require %d planes' % axial_plane_count)

# Calculate number of lateral positions to scan
sample_size_lateral = [sample_size[i] for i in lateral_scan_axes]
fov = np.asarray(sensor_size) * effective_pixel_size * 1e-3
n_frames_lateral = np.ceil(np.asarray(sample_size_lateral) / (fov * (1 - lateral_scan_overlap_factor)))
print('Lateral scan will require %d x %d positions' % tuple(n_frames_lateral))
```

## Calculate Scan Time with Stop and Stare

A useful parameter is the ratio of motion time to readout time: if the readout time is longer than the motion time, using motion deblur won't improve our results much.

The total time per frame is determined by:

$$ t_{frame} = t_{exposure} + \max(t_{readout}, t_{motion}) $$

In the case of continuous motion (strobed or coded illumination), $\max(t_{readout}, t_{motion}) = t_{readout}$, meaning that the acquisition time is limited by the readout time.

Assuming $t_{motion} > t_{readout}$, the improvement ratio of continuous scanning over conventional imaging is therefore:

$$ f = \frac{t_{exposure} + t_{motion}}{t_{exposure} + t_{readout}} $$

Using this analysis, it is clear that increasing $t_{readout}$ will decrease $f$, meaning that readout times are the enemy of motion deblur. Obviously, a higher $t_{motion}$ (slower acceleration, longer settle times) will lead to a larger $f$.
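As a quick numeric illustration of $f$ (a sketch using the exposure and readout times defined above, together with a purely hypothetical 0.5 s motion time):

```python
# Hypothetical motion time, for illustration only; the real value is computed below.
t_motion_example = 0.5  # s

f = (camera_exposure_time + t_motion_example) / (camera_exposure_time + camera_readout_time)
print('Improvement ratio f = %.1f' % f)  # (0.1 + 0.5) / (0.1 + 0.017) ~ 5.1
```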
From this we can conclude that our method is best applied in situations where:

- The readout time is very short (the shorter the better)
- The system has low acceleration or a long settle time (e.g. lead-screw based systems)
- The exposure time is less than the motion time (or the readout time)

```
# Ensure the scan increment is not less than the resolution of the stage
assert stage_axial_resolution < axial_scan_increment, "Axial scan increment is less than the resolution of the stage!"

# Limit motion stage velocity to the maximum update speed of the light source
stage_axial_velocity = min(axial_scan_increment / illumination_min_update_speed, stage_axial_max_velocity)
print('Stage will be moving at up to %.4f mm/s' % stage_axial_velocity)

# Previous imaging time (single frame)
axial_accel_time = stage_axial_max_velocity / stage_axial_max_acceleration
axial_accel_distance = stage_axial_max_acceleration * axial_accel_time ** 2
if axial_accel_distance < axial_scan_increment:
    motion_time = axial_scan_increment / stage_axial_max_velocity + stage_axial_max_velocity / stage_axial_max_acceleration + stage_axial_settle_time
else:
    print('WARNING: not reaching maximum velocity')
    # Triangular velocity profile (accelerate, then decelerate): t = 2 * sqrt(d / a)
    motion_time = 2 * math.sqrt(axial_scan_increment / stage_axial_max_acceleration) + stage_axial_settle_time

frame_time_stop_and_stare = max(camera_readout_time, motion_time) + camera_exposure_time  # motion_time includes the mechanical settle time
print('Motion time to readout ratio is %.4f' % (motion_time / camera_readout_time))

# New imaging time (single frame)
# Basically we still need to capture enough frames to reconstruct the same amount of data
frame_time_continuous_scan = max(axial_scan_increment / stage_axial_velocity, camera_exposure_time + camera_readout_time)
if axial_scan_increment / stage_axial_velocity < camera_exposure_time + camera_readout_time:
    print('Continuous motion is camera-limited')

print('Frame scan time with existing method (stop and stare) is %.4fs' % frame_time_stop_and_stare)
print('Frame scan time with proposed method (continuous motion) is %.4fs' % frame_time_continuous_scan)
print('Old Acquisition time is %.4f hours, new acquisition time is %.4f hours' %
      (axial_plane_count * frame_time_stop_and_stare / 3600, axial_plane_count * frame_time_continuous_scan / 3600))
print('Improvement factor (full acquisition) is %.4f' % (frame_time_stop_and_stare / frame_time_continuous_scan))
```

## Compressed-Sensing Acquisition

```
# This sets the ratio of how much data we acquire / how much data we reconstruct
compression_factor = 0.1

# Time to sweep the full axial range at maximum velocity
volume_scan_time = sample_size[axial_scan_axis] / stage_axial_velocity

# Determine total acquisition time
frame_time_full_scan = volume_scan_time * compression_factor * axial_plane_count

# Determine the ratio of compressed sensing acquisition
t_acquire_prev = axial_plane_count * frame_time_stop_and_stare

print('Old Acquisition time is %.4f hours, new acquisition time is %.4f hours' % (t_acquire_prev / 3600, frame_time_full_scan / 3600))
print('Improvement factor is %.4f' % (t_acquire_prev / frame_time_full_scan))
```
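Because the compressed acquisition time scales linearly with `compression_factor`, it can be handy to tabulate a few values; this short sketch reuses `volume_scan_time`, `axial_plane_count`, and `t_acquire_prev` from the cells above:

```python
# Compare a few compression factors against the stop-and-stare baseline.
for cf in [0.05, 0.1, 0.2, 0.5]:
    t_cs = volume_scan_time * cf * axial_plane_count
    print('compression factor %.2f: %6.2f hours (%.1fx faster than stop-and-stare)'
          % (cf, t_cs / 3600, t_acquire_prev / t_cs))
```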
```
import os

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import scipy.io as sio
import scipy.stats as ss

import elfi
import elfi.examples

%matplotlib inline
```

# Using other than Python operations with ELFI

If your simulator or other operations are implemented in a programming language other than Python, you can still use ELFI. This notebook briefly demonstrates how to do this in 3 common scenarios:

* External executable (written e.g. in C++ or a shell script)
* R function
* MATLAB function

**Note:** to run some parts of this notebook you need to either compile the simulator, or have R or MATLAB installed along with their respective wrapper libraries.

## External executables

ELFI supports using external simulators and other operations that can be called from the command line. ELFI provides some tools to easily incorporate such operations into ELFI models. This functionality is briefly introduced in this notebook. For an introductory tutorial on ELFI, please see the ELFI_tutorial notebook.

### Birth-Death-Mutation process

We will consider here the Birth-Death-Mutation process simulator introduced in *Tanaka et al 2006 [1]* for the spread of Tuberculosis. The simulator outputs a count vector where each of its elements represents a "mutation" of the disease and the count describes how many are currently infected by that mutation. There are three rates and the population size:

- $\alpha$ - (birth rate) the rate at which any infectious host transmits the disease.
- $\delta$ - (death rate) the rate at which any existing infectious host either recovers or dies.
- $\tau$ - (mutation rate) the rate at which any infectious host develops a new unseen mutation of the disease within themselves.
- $N$ - (population size) the size of the simulated infectious population

It is assumed that the susceptible population is infinite, and that the hosts carry only one mutation of the disease and transmit that mutation onward. A more accurate description of the model can be found in the original paper or e.g. [*Lintusaari et al. 2016*](https://doi.org/10.1093/sysbio/syw077) *[2]*.

<img src="resources/bdm.png" alt="BDM model illustration from Lintusaari et al. 2016" style="width: 400px;"/>

This simulator cannot be implemented effectively with vectorized operations, so we have implemented it in C++, which handles loops efficiently. We will now reproduce Figure 6(a) in [*Lintusaari et al. 2016*](https://doi.org/10.1093/sysbio/syw077) *[2]* with ELFI. Let's start by defining some constants:

```
# Fixed model parameters
delta = 0
tau = 0.198
N = 20

# The zeros are to make the observed population vector have length N
y_obs = np.array([6, 3, 2, 2, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype='int16')
```

Let's build the beginning of a new model, with the birth rate $\alpha$ as the only unknown:

```
m = elfi.ElfiModel(name='bdm')
elfi.Prior('uniform', .005, 2, model=m, name='alpha')
```

### Wrapping External executables

We now need to wrap the executable as an ELFI node for the model. We can use the `elfi.tools.external_operation` tool to wrap any executable as a Python callable (a function). Let's first investigate how it works with a simple shell `echo` command:

```
# Make an external command as an elfi operation. {0} {1} are positional arguments and {seed} a keyword argument `seed`.
command = 'echo {0} {1} {seed}' echo_sim = elfi.tools.external_operation(command) # Test that `echo_sim` can now be called as a regular python function echo_sim(3, 1, seed=123) ``` The placeholders for arguments in the command string are just Python's [`format strings`](https://docs.python.org/3/library/string.html#formatstrings). Currently `echo_sim` only accepts scalar arguments. In order to work in ELFI, `echo_sim` needs to be vectorized so that we can pass to it a vector of arguments. ELFI provides a handy tool for this as well: ``` # Vectorize it with elfi tools echo_sim_vec = elfi.tools.vectorize(echo_sim) # Add it to the model elfi.Simulator(echo_sim_vec, m['alpha'], 0, name='sim') # Test to generate 3 simulations from it m['sim'].generate(3) ``` So above, the first column draws from our uniform prior for $\alpha$, the second column has constant zeros, and the last one lists the seeds provided to the command by ELFI. ### More complex wrapping of external operations $-$ case BDM Lets now wrap the actual BDM simulator in place of the echo simulator. We assume the executable `bdm` is located at the same directory where this notebook is run from. **Note**: The source code for the BDM simulator comes with ELFI. You can get the directory with `elfi.examples.bdm.get_source_directory()`. Under unix-like systems it can be compiled with just typing `make` to console in the source directory. For windows systems, you need to have some C++ compiler available to compile it. ``` # Get the BDM source directory sources_path = elfi.examples.bdm.get_sources_path() # Copy to resources folder and compile (unix-like systems) !cp -r $sources_path resources !make -C resources/cpp # Move the file in to the working directory !mv ./resources/cpp/bdm . # Test the executable (assuming we have the executable `bdm` in the working directory) sim = elfi.tools.external_operation('./bdm {0} {1} {2} {3} --seed {seed} --mode 1') sim(1, delta, tau, N, seed=123) ``` The BDM simulator is actually already internally vectorized if you provide it an input file with parameters on the rows. This is more efficient than looping in Python (`elfi.tools.vectorize`), because one simulation takes very little time and we wish to generate tens of thousands of simulations. We will also here redirect the output to a file and then read the file into a numpy array. This is just one possibility among the many to implement this. The most efficient would be to write a native Python module with C++ but it's beyond the scope of this article. So let's work through files which is a fairly common situation especially with existing software. ``` # Assuming we have the executable `bdm` in the working directory command = './bdm {filename} --seed {seed} --mode 1 > {output_filename}' # Function to prepare the inputs for the simulator. We will create filenames and write an input file. def prepare_inputs(*inputs, **kwinputs): alpha, delta, tau, N = inputs meta = kwinputs['meta'] # Organize the parameters to an array. The broadcasting works nicely with constant arguments here. 
param_array = np.row_stack(np.broadcast(alpha, delta, tau, N)) # Prepare a unique filename for parallel settings filename = '{model_name}_{batch_index}_{submission_index}.txt'.format(**meta) np.savetxt(filename, param_array, fmt='%.4f %.4f %.4f %d') # Add the filenames to kwinputs kwinputs['filename'] = filename kwinputs['output_filename'] = filename[:-4] + '_out.txt' # Return new inputs that the command will receive return inputs, kwinputs # Function to process the result of the simulation def process_result(completed_process, *inputs, **kwinputs): output_filename = kwinputs['output_filename'] # Read the simulations from the file. simulations = np.loadtxt(output_filename, dtype='int16') # Clean up the files after reading the data in os.remove(kwinputs['filename']) os.remove(output_filename) # This will be passed to ELFI as the result of the command return simulations # Create the python function (do not read stdout since we will work through files) bdm = elfi.tools.external_operation(command, prepare_inputs=prepare_inputs, process_result=process_result, stdout=False) ``` Now let's replace the echo simulator with this. To create unique but informative filenames, we ask ELFI to provide the operation some meta information. That will be available under the `meta` keyword (see the `prepare_inputs` function above): ``` # Create the simulator bdm_node = elfi.Simulator(bdm, m['alpha'], delta, tau, N, observed=y_obs) m['sim'].become(bdm_node) # Ask ELFI to provide the meta dict bdm_node.uses_meta = True # Draw the model elfi.draw(m) # Test it data = bdm_node.generate(3) print(data) ``` ### Completing the BDM model We are now ready to finish up the BDM model. To reproduce Figure 6(a) in [*Lintusaari at al 2016*](https://doi.org/10.1093/sysbio/syw077) *[2]*, let's add different summaries and discrepancies to the model and run the inference for each of them: ``` def T1(clusters): clusters = np.atleast_2d(clusters) return np.sum(clusters > 0, 1)/np.sum(clusters, 1) def T2(clusters, n=20): clusters = np.atleast_2d(clusters) return 1 - np.sum((clusters/n)**2, axis=1) # Add the different distances to the model elfi.Summary(T1, bdm_node, name='T1') elfi.Distance('minkowski', m['T1'], p=1, name='d_T1') elfi.Summary(T2, bdm_node, name='T2') elfi.Distance('minkowski', m['T2'], p=1, name='d_T2') elfi.Distance('minkowski', m['sim'], p=1, name='d_sim') elfi.draw(m) # Save parameter and simulation results in memory to speed up the later inference pool = elfi.OutputPool(['alpha', 'sim']) # Fix a seed seed = 20170511 rej = elfi.Rejection(m, 'd_T1', batch_size=10000, pool=pool, seed=seed) %time T1_res = rej.sample(5000, n_sim=int(1e5)) rej = elfi.Rejection(m, 'd_T2', batch_size=10000, pool=pool, seed=seed) %time T2_res = rej.sample(5000, n_sim=int(1e5)) rej = elfi.Rejection(m, 'd_sim', batch_size=10000, pool=pool, seed=seed) %time sim_res = rej.sample(5000, n_sim=int(1e5)) # Load a precomputed posterior based on an analytic solution (see Lintusaari et al 2016) matdata = sio.loadmat('./resources/bdm.mat') x = matdata['likgrid'].reshape(-1) posterior_at_x = matdata['post'].reshape(-1) # Plot the reference plt.figure() plt.plot(x, posterior_at_x, c='k') # Plot the different curves for res, d_node, c in ([sim_res, 'd_sim', 'b'], [T1_res, 'd_T1', 'g'], [T2_res, 'd_T2', 'r']): alphas = res.outputs['alpha'] dists = res.outputs[d_node] # Use gaussian kde to make the curves look nice. 
Note that this tends to benefit algorithm 1
# a lot as it usually has only very few accepted samples with 100000 simulations
    kde = ss.gaussian_kde(alphas[dists<=0])
    plt.plot(x, kde(x), c=c)

plt.legend(['reference', 'algorithm 1', 'algorithm 2, T1\n(eps=0)', 'algorithm 2, T2\n(eps=0)'])
plt.xlim([-.2, 1.2]);
print('Results after 100000 simulations. Compare to figure 6(a) in Lintusaari et al. 2016.')
```

## Interfacing with R

It is possible to run R scripts from the command line, for example with [Rscript](http://stat.ethz.ch/R-manual/R-devel/library/utils/html/Rscript.html). However, in Python it may be more convenient to use [rpy2](http://rpy2.readthedocs.io), which allows convenient access to the functionality of R from within Python. You can install it with `pip install rpy2`.

Here we demonstrate how to calculate the summary statistics used in the ELFI tutorial (autocovariances) using R's `acf` function for the MA2 model.

**Note:** See this [issue](https://github.com/ContinuumIO/anaconda-issues/issues/152) if you get an `undefined symbol: PC` error in the import after installing rpy2.

```
import rpy2.robjects as robj
from rpy2.robjects import numpy2ri as np2ri

# Converts numpy arrays automatically
np2ri.activate()
```

Let's create a Python function that wraps the R commands (please see the documentation of [rpy2](http://rpy2.readthedocs.io) for details):

```
robj.r('''
    # create a function `f`
    f <- function(x, lag=1) {
        ac = acf(x, plot=FALSE, type="covariance", lag.max=lag, demean=FALSE)
        ac[['acf']][lag+1]
    }
    ''')

f = robj.globalenv['f']

def autocovR(x, lag=1):
    x = np.atleast_2d(x)
    apply = robj.r['apply']
    ans = apply(x, 1, f, lag=lag)
    return np.atleast_1d(ans)

# Test it
autocovR(np.array([[1,2,3,4], [4,5,6,7]]), 1)
```

Load a ready-made MA2 model:

```
ma2 = elfi.examples.ma2.get_model(seed_obs=4)
elfi.draw(ma2)
```

Replace the summaries S1 and S2 with our R autocovariance function and run the inference:

```
# Replace with R autocov
S1 = elfi.Summary(autocovR, ma2['MA2'], 1)
S2 = elfi.Summary(autocovR, ma2['MA2'], 2)
ma2['S1'].become(S1)
ma2['S2'].become(S2)

# Run the inference
rej = elfi.Rejection(ma2, 'd', batch_size=1000, seed=seed)
rej.sample(100)
```

## Interfacing with MATLAB

There are a number of options for running MATLAB (or Octave) scripts from within Python. Here, evaluating the distance is demonstrated with a MATLAB function using the official [MATLAB Engine API for Python](http://www.mathworks.com/help/matlab/matlab-engine-for-python.html). (Tested with MATLAB 2016b.)

```
import matlab.engine
```

A MATLAB session needs to be started (and stopped) separately:

```
eng = matlab.engine.start_matlab()  # takes a while...
```

As with R, we have to write a piece of code to interface between MATLAB and Python:

```
def euclidean_M(x, y):
    # MATLAB array initialized with Python's list
    ddM = matlab.double((x-y).tolist())
    # euclidean distance
    dM = eng.sqrt(eng.sum(eng.power(ddM, 2.0), 2))

    # Convert back to numpy array
    d = np.atleast_1d(dM).reshape(-1)
    return d

# Test it
euclidean_M(np.array([[1,2,3], [6,7,8], [2,2,3]]), np.array([2,2,2]))
```

Load a ready-made MA2 model:

```
ma2M = elfi.examples.ma2.get_model(seed_obs=4)
elfi.draw(ma2M)
```

Replace the default Euclidean distance node `d` with our MATLAB implementation.
```
# Replace with Matlab distance implementation
d = elfi.Distance(euclidean_M, ma2M['S1'], ma2M['S2'])
ma2M['d'].become(d)

# Run the inference
rej = elfi.Rejection(ma2M, 'd', batch_size=1000, seed=seed)
rej.sample(100)
```

Finally, don't forget to quit the MATLAB session:

```
eng.quit()
```

## Verdict

We have shown a few examples of how to incorporate non-Python operations into ELFI models. There are multiple other ways to achieve the same results, and even to make the wrapping more efficient. Wrapping often introduces some overhead to the evaluation of the generative model. In many cases, however, this is not an issue, since the operations are usually expensive by themselves, making the added overhead insignificant.

### References

- [1] Tanaka, Mark M., et al. "Using approximate Bayesian computation to estimate tuberculosis transmission parameters from genotype data." Genetics 173.3 (2006): 1511-1520.
- [2] Jarno Lintusaari, Michael U. Gutmann, Ritabrata Dutta, Samuel Kaski, Jukka Corander. "Fundamentals and Recent Developments in Approximate Bayesian Computation." Syst Biol 2017; 66 (1): e66-e82. doi: 10.1093/sysbio/syw077
## Nearest Neighbor item based Collaborative Filtering ![image.png](attachment:image.png) Source: https://towardsdatascience.com ``` ##Dataset url: https://grouplens.org/datasets/movielens/latest/ import pandas as pd import numpy as np r_cols = ['user_id','movie_id','rating'] movies_df = pd.read_csv('u.item.csv', names=['movieId','title'],sep='|',usecols=range(2)) m_cols = ['movie_id','title'] rating_df=pd.read_csv('u.data.csv', names=['userId', 'movieId', 'rating'],usecols=range(3)) movies_df.head() rating_df.head() df = pd.merge(rating_df,movies_df,on='movieId') df.head() combine_movie_rating = df.dropna(axis = 0, subset = ['title']) # combine_movie_rating.shape movie_ratingCount = (combine_movie_rating. groupby(by = ['title'])['rating']. count(). reset_index(). rename(columns = {'rating': 'totalRatingCount'}) [['title', 'totalRatingCount']] ) movie_ratingCount.head() rating_with_totalRatingCount = combine_movie_rating.merge(movie_ratingCount, left_on = 'title', right_on = 'title', how = 'left') rating_with_totalRatingCount.head() pd.set_option('display.float_format', lambda x: '%.3f' % x) print(movie_ratingCount['totalRatingCount'].describe()) popularity_threshold = 50 rating_popular_movie= rating_with_totalRatingCount.query('totalRatingCount >= @popularity_threshold') rating_popular_movie.head() rating_popular_movie.shape ## First lets create a Pivot matrix movie_features_df=rating_popular_movie.pivot_table(index='title',columns='userId',values='rating').fillna(0) movie_features_df.head() from scipy.sparse import csr_matrix movie_features_df_matrix = csr_matrix(movie_features_df.values) # print(movie_features_df_matrix) from sklearn.neighbors import NearestNeighbors model_knn = NearestNeighbors(metric = 'cosine', algorithm = 'brute') model_knn.fit(movie_features_df_matrix) movie_features_df.shape # query_index = np.random.choice(movie_features_df.shape[0]) # print(query_index) query_index = movie_features_df.index.get_loc('Star Wars (1977)') distances, indices = model_knn.kneighbors(movie_features_df.iloc[query_index,:].values.reshape(1, -1), n_neighbors = 6) movie_features_df.head() distances indices for i in range(0, len(distances.flatten())): if i == 0: print('Recommendations for {0}:\n'.format(movie_features_df.index[query_index])) else: print('{0}: {1}, with distance of {2}:'.format(i, movie_features_df.index[indices.flatten()[i]], distances.flatten()[i])) ``` ## Cosine Similarity ![image.png](attachment:image.png) ``` my_ratings = movie_features_df[0] my_ratings = my_ratings.loc[my_ratings!=0] my_ratings simCandidates = pd.Series() for i in range(0,len(my_ratings.index)): print("Adding sims for ",my_ratings.index[i],"...") query_index = movie_features_df.index.get_loc(my_ratings.index[i]) # print(query_index) distances, indices = model_knn.kneighbors(movie_features_df.iloc[query_index,:].values.reshape(1, -1), n_neighbors = 6) distances = (1/(1+distances)) * my_ratings[i] # print(distances) sims = pd.Series(distances.flatten(), name="ratings", index=movie_features_df.index[indices.flatten()]) # sims = distances.map(lambda x: (1/x)*myRatings[i]) print(sims) simCandidates = simCandidates.append(sims) print('\nsorting..\n') simCandidates.sort_values(inplace=True,ascending=False) print(simCandidates.head(20)) simCandidates = simCandidates.groupby(simCandidates.index).sum() simCandidates.sort_values(inplace=True,ascending=False) simCandidates.head(10) filteredSims = simCandidates.drop(my_ratings.index) filteredSims.head(10) ``` This is the final Recommendation of movies of similar 
taste to the ones I liked earlier, such as `Empire Strikes Back, The (1980)`, `Gone with the Wind (1939)`, and `Star Wars (1977)`.
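The cosine-similarity figure above is an image attachment that may not render outside the original notebook, so here is a minimal numpy sketch of the formula behind the `cosine` metric used by `NearestNeighbors` above. The two item rating vectors are made up for illustration and are not from the MovieLens data.

```
import numpy as np

def cosine_similarity(a, b):
    """cos(theta) = (a . b) / (||a|| * ||b||)"""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Two hypothetical item rating vectors (one entry per user)
item_a = np.array([5.0, 3.0, 0.0, 4.0])
item_b = np.array([4.0, 0.0, 0.0, 5.0])

sim = cosine_similarity(item_a, item_b)
print('similarity:', sim)           # close to 1 means the items are rated alike
print('cosine distance:', 1 - sim)  # cosine distance is what kneighbors() returns as `distances`
```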
# Numpy/Matplotlib Hints

Below are some quick hints in numpy/matplotlib that should help you with the kmeans assignment, should you choose to take advantage of them.

```
import numpy as np
import matplotlib.pyplot as plt
```

## Selection

First, let's create a point cloud. As is the usual convention (except for NMF, which uses the transpose), we'll express our point cloud as a data matrix, where each point is along a row and the dimensions are along the columns. This means that for $N$ points in 2 dimensions, we'll have an $N \times 2$ matrix. Let's generate such a matrix below, where each coordinate is chosen independently according to a unit Gaussian distribution.

```
N = 1000
X = np.random.randn(N, 2)
```

We can plot it by using the first column as the x coordinate and the second column as the y coordinate. We can pull a particular column out with slice notation, where we say take all of the rows (`:`) but only a particular column.

```
plt.scatter(X[:, 0], X[:, 1])
plt.axis("equal")
```

Next, let's consider how we pull certain points out of this point cloud. Let's say we wanted to select all points that have a distance of at most 1 from the origin. We could write a good old Python loop to filter the elements that meet our criterion into some list Y, and then plot it.

```
Y = []
for i in range(N):
    x = X[i, 0]
    y = X[i, 1]
    dist = np.sqrt(x**2 + y**2)
    if dist < 1:
        Y.append([x, y])
# Convert this 2D list to a 2D numpy array so we can
# do 2D slicing for plotting
Y = np.array(Y)

plt.scatter(X[:, 0], X[:, 1])
plt.scatter(Y[:, 0], Y[:, 1])
plt.axis("equal");
```

But numpy also has a very nice feature known as "boolean selection" which allows us to do this without a Python loop. First, we create a parallel array with $N$ elements, each of which holds the corresponding point's distance from the origin.

```
d = np.sqrt(X[:, 0]**2 + X[:, 1]**2)  # one distance per point (no np.sum here, or we would collapse everything to a single number)
```

Notice how I'm actually doing element-wise operations here; when I say `X[:, 0]**2`, I'm raising every element of the first column of X to the second power and creating a new array with that result. I can then add that array element-wise to the squared second column. Anyway, since this array is parallel to the rows in X, we can use a boolean expression of it in place of a slice to take elements out.

```
Y = X[d < 1, :]  # This is it!
Y = np.array(Y)

plt.scatter(X[:, 0], X[:, 1])
plt.scatter(Y[:, 0], Y[:, 1])
plt.axis("equal");
```

One more quick note: another way to compute the distances is by using the np.sum method over axis 1, and this will generalize to higher dimensions.

```
d = np.sqrt(np.sum(X**2, axis=1))
Y = X[d < 1, :]
Y = np.array(Y)

plt.scatter(X[:, 0], X[:, 1])
plt.scatter(Y[:, 0], Y[:, 1])
plt.axis("equal");
```

## Taking Means

The mean of a point cloud is obtained by taking the mean of each coordinate individually. Let's compute the mean of X in a more tedious way using loops.

```
mean = np.zeros(2)
for i in range(N):
    mean += X[i, :]
mean = mean/X.shape[0]

plt.scatter(X[:, 0], X[:, 1])
plt.scatter([mean[0]], [mean[1]], 200)  # Draw the mean in orange
plt.axis("equal");
```

But actually, there's a really nice function in numpy called np.mean. If we pass it an "axis" parameter, it tells numpy the axis along which to vary the loop when taking the mean. Since each point is in a different row, we want to vary the rows (axis 0) while taking the mean here, so we could do this simply as:

```
mean = np.mean(X, axis=0)  # That's it!

plt.scatter(X[:, 0], X[:, 1])
plt.scatter([mean[0]], [mean[1]], 200)  # Draw the mean in orange
plt.axis("equal");
```
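Since these hints are aimed at the k-means assignment, here is a short sketch (not the assignment solution) showing how boolean selection and `np.mean(..., axis=0)` combine to recompute a cluster center from the points currently assigned to it. The two starting centers below are made up for illustration.

```
import numpy as np

np.random.seed(0)
X = np.random.randn(1000, 2)           # point cloud, one point per row
centers = np.array([[1.0, 1.0],
                    [-1.0, -1.0]])     # two made-up cluster centers

# Assignment step: index of the nearest center for every point
dists = np.sqrt(np.sum((X[:, np.newaxis, :] - centers[np.newaxis, :, :])**2, axis=2))
labels = np.argmin(dists, axis=1)      # shape (1000,)

# Update step: boolean selection pulls out one cluster, np.mean recenters it
for k in range(centers.shape[0]):
    members = X[labels == k, :]        # all points assigned to cluster k
    if members.shape[0] > 0:
        centers[k, :] = np.mean(members, axis=0)

print(centers)
```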
# Your structured data into TensorFlow

ML training often expects _flat_ data, like a line in a CSV. [tf.Example](https://www.tensorflow.org/api_docs/python/tf/train/Example) was designed to represent flat data. But the data you care about and want to make predictions about usually starts out _structured_.

Over and over again you have to write transform code that turns your structured data into Tensors. This repetitive transform code must be rewritten for all your ML pipelines, both for training _and_ serving, and it lets bugs slip into your ML pipeline.

`struct2tensor` lets you take advantage of structured data _within_ your ML pipelines. It is:

* **for**: ML Engineers
* **who**: train models on data that starts out structured
* **it is**: a Python library
* **that**: transforms your structured data into model-friendly (Sparse, Ragged, Dense, ...) tensors hermetically _within_ your model
* **unlike**: writing custom transforms over and over for training and serving.

---

![struct2tensor diagram showing the transform happens in the model](https://imgur.com/aqOX7nS.png)

# Demo example

Suppose we have this _structured_ data we want to train on. The source example data format is a [protobuf](https://developers.google.com/protocol-buffers). `struct2tensor` was built internally and currently works on protocol buffers. It can be extended to Parquet, JSON, etc. in the future.

```
# e.g. a web session
message Session {
  message SessionInfo {
    string session_feature = 1;
    double session_duration_sec = 2;
  }
  SessionInfo session_info = 1;

  message Event {
    string query = 1;
    message Action {
      int64 number_of_views = 1;
    }
    repeated Action action = 2;
  }
  repeated Event event = 2;
}
```

In 3 steps we'll extract the fields we want with `struct2tensor`. We'll end up with batch-aligned `SparseTensor`s:

1. Tell our model what examples we care about, e.g. **`event`** (submessage `Session::Event`).
2. Pick the proto fields that we think are good features, say:
   * `session_info.session_feature`
   * `event.query`
3. Identify the label to predict, say **`event.action.number_of_views`** (the actual label could be sum(action.number_of_views for action in event)).

Then we can build a struct2tensor query that:

* parses instances of this protocol buffer
* transforms the fields we care about
* creates the necessary `SparseTensor`s

Don't worry about some of these terms yet. We'll show you an example and then explain the terms later.
## Install required packages (internal colab users: skip) ``` #@test {"skip": true} # install struct2tensor !pip install struct2tensor # graphviz for pretty output !pip install graphviz ``` ## Some Pretty Printing and Imports (not the "real" work yet) ``` import base64 import numpy as np import pprint import os import tensorflow from graphviz import Source import tensorflow as tf from IPython.display import Image from IPython.lib import pretty import struct2tensor as s2t from struct2tensor.test import test_pb2 from google.protobuf import text_format def _display(graph): """Renders a graphviz digraph.""" s = Source(graph) s.format='svg' return s def _create_query_from_text_sessions(text_sessions): """Creates a struct2tensor query from a list of pbtxt of struct2tensor.test.Session.""" sessions = tf.constant([ text_format.Merge( text_session, test_pb2.Session() ).SerializeToString() for text_session in text_sessions ]) return s2t.create_expression_from_proto( sessions, test_pb2.Session.DESCRIPTOR) def _prensor_pretty_printer(prensor, p, cycle): """Pretty printing function for struct2tensor.prensor.Prensor""" pretty.pprint(prensor.get_sparse_tensors()) def _sp_pretty_printer(sp, p, cycle): """Pretty printing function for SparseTensor.""" del cycle p.begin_group(4, "SparseTensor(") p.text("values={}, ".format(sp.values.numpy().tolist())) p.text("dense_shape={}, ".format(sp.dense_shape.numpy().tolist())) p.break_() p.text("indices={}".format(sp.indices.numpy().tolist())) p.end_group(4, ")") pretty.for_type(tf.SparseTensor, _sp_pretty_printer) pretty.for_type(s2t.Prensor, _prensor_pretty_printer) _pretty_print = pretty.pprint print("type-specific pretty printing ready to go") ``` ## The real work: A function that parses our structured data (protobuffers) into tensors: ``` @tf.function(input_signature=[tf.TensorSpec(shape=(None), dtype=tf.string)], autograph=False) def parse_session(serialized_sessions): """A TF function parsing a batch of serialized Session protos into tensors. It is a TF graph that takes one 1-D tensor as input, and outputs a Dict[str, tf.SparseTensor] """ query = s2t.create_expression_from_proto( serialized_sessions, test_pb2.Session.DESCRIPTOR) # Move all the fields of our interest to under "event". query = query.promote_and_broadcast({ "session_feature": "session_info.session_feature", "action_number_of_views": "event.action.number_of_views" }, "event") # Specify "event" to be examples. query = query.reroot("event") # Extract all the fields of our interest. projection = query.project(["session_feature", "query", "action_number_of_views"]) prensors = s2t.calculate_prensors([projection]) output_sparse_tensors = {} for prensor in prensors: path_to_tensor = prensor.get_sparse_tensors() output_sparse_tensors.update({str(k): v for k, v in path_to_tensor.items()}) return output_sparse_tensors print("Defined the workhorse func: (structured data at rest) -> (tensors)") ``` ## Lets see it in action: ``` serialized_sessions = tf.constant([ text_format.Merge( """ session_info { session_duration_sec: 1.0 session_feature: "foo" } event { query: "Hello" action { number_of_views: 1 } action { } } event { query: "world" action { number_of_views: 2 } action { number_of_views: 3 } } """, test_pb2.Session() ).SerializeToString() ]) _pretty_print(parse_session(serialized_sessions)) ``` See how we went from our pre-pipeline data (the Protobuffer) all the way to the structured data, packed into `SparseTensor`s? # Digging Far Deeper Interested and want to learn more? Read on... 
Let's define several terms we mentioned before: ### Prensor A Prensor (protobuffer + tensor) is a data structure storing the data we work on. We use protobuffers a lot at Google. `struct2tensor` can support other structured formats, too. For example, throughout this colab we will be using proto [`struct2tensor.test.Session`](http://cs/symbol:struct2tensor.test.Session). A schematic visualization of a selected part of the prensor from that proto looks like: ``` #@title { display-mode: "form" } #@test {"skip": true} _display(""" digraph { root -> session [label="*"]; session -> event [label="*"]; session -> session_id [label="?"]; event -> action [label="*"]; event -> query_token [label="*"] action -> number_of_views [label="?"]; } """) ``` We will be using visualizations like this to demostrate struct2tensor queries later. Note: * The "*" on the edge means the pointed node has repeated values; while the "?" means it has an optional value. * There is always a "root" node whose only child is the root of the structure. Note that it's "repeated" because one struct2tensorTree can represent multiple instances of a structure. ### struct2tensor Query A struct2tensor query transforms a Prensor into another Prensor. For example, `broadcast` is a query that replicates a node as a child of one of its siblings. Applying ``` broadcast( source_path="session.session_id", sibling="event", new_field_name="session_session_id") ``` on the previous tree gives: ``` #@title { display-mode: "form" } #@test {"skip": true} _display(""" digraph { session_session_id [color="red"]; root -> session [label="*"]; session -> event [label="*"]; session -> session_id [label="?"]; event -> action [label="*"]; event -> session_session_id [label="?"]; event -> query_token [label="*"]; action -> number_of_views [label="?"]; } """) ``` We will talk about common struct2tensor queries in later sections. ### Projection A projection of paths in a Prensor produces another Prensor with just the selected paths. #### Logical representation of a projection The structure of the projected path can be represented losslessly as nested lists. For example, the projection of `event.action.number_of_views` from the struct2tensorTree formed by the following two instances of `struct2tensor.test.Session`: ``` { event { action { number_of_views: 1} action { number_of_views: 2} action {} } event {} }, { event { action { number_of_views: 3} } } ``` is: ``` [ # the outer list has two elements b/c there are two Session protos. [ # the first proto has two events [[1],[2],[]], # 3 actions, the last one does not have a number_of_views. [], # the second event does not have action ], [ # the second proto has one event [[3]], ], ] ``` #### Representing nested lists with `tf.SparseTensor` struct2tensor uses `tf.SparseTensor` to represent the above nested list in the projection results. Note that `tf.SparseTensor` essentially enforces that the lists nested at the same level to have the same length (because the there is a certain size for each dimension), therefore this representation is lossy. The above nested lists, when written as a SparseTensor will look like: ``` tf.SparseTensor( dense_shape=[2, 2, 3, 1], # each is the maximum length of lists at the same nesting level. values = [1, 2, 3], indices = [[0, 0, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0]] ) ``` Note that the last dimension is useless: the index of that dimension will always be 0 for any present value because number_of_views is an optional field. 
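To check that the nested-list picture and the `tf.SparseTensor` written out above describe the same data, here is a small self-contained sketch. It uses `tf.ragged.constant` purely as a convenient way to build the nested lists; it is not part of the struct2tensor API.

```
import tensorflow as tf

# The projected event.action.number_of_views values from the two Sessions above
nested = [
    [[[1], [2], []], []],   # first Session: two events, the second with no actions
    [[[3]]],                # second Session: one event with one action
]

rt = tf.ragged.constant(nested, dtype=tf.int64)
sp = rt.to_sparse()

print(sp.dense_shape.numpy())  # [2 2 3 1] -- the bounding box of the nested lists
print(sp.values.numpy())       # [1 2 3]
print(sp.indices.numpy())      # [[0 0 0 0] [0 0 1 0] [1 0 0 0]]
```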
So struct2tensors library will actually "squeeze" all the optional dimensions. The actual result would be: ``` query = _create_query_from_text_sessions([''' event { action { number_of_views: 1} action { number_of_views: 2} action {} } event {} ''', ''' event { action { number_of_views: 3} } '''] ).project(["event.action.number_of_views"]) prensor = s2t.calculate_prensors([query]) pretty.pprint(prensor) ``` struct2tensor's internal data model is closer to the above "nested lists" abstraction and sometimes it's easier to reason with "nested lists" than with `SparseTensor`s. Recently, [`tf.RaggedTensor`](https://www.tensorflow.org/guide/ragged_tensors) was introduced to represent nested lists exactly. We are working on adding support for projecting into ragged tensors. ## Common struct2tensor Queries ### `promote` Promotes a node to become a sibling of its parent. If the node is repeated, then all its values are concatenated (the order is preserved). ``` #@title { display-mode: "form" } #@test {"skip": true} _display(''' digraph { root -> session [label="*"]; session -> event [label="*"]; event -> query_token [label="*"]; } ''') ``` `promote(source_path="event.query_token", new_field_name="event_query_token")` ``` #@title { display-mode: "form" } #@test {"skip": true} _display(''' digraph { event_query_token [color="red"]; root -> session [label="*"]; session -> event [label="*"]; session -> event_query_token [label="*"]; event -> query_token [label="*"]; } ''') query = (_create_query_from_text_sessions([ """ event { query_token: "abc" query_token: "def" } event { query_token: "ghi" } """]) .promote(source_path="event.query_token", new_field_name="event_query_token") .project(["event_query_token"])) prensor = s2t.calculate_prensors([query]) _pretty_print(prensor) ``` The projected structure is like: ``` { # this is under Session. event_query_token: "abc" event_query_token: "def" event_query_token: "ghi" } ``` ### `broadcast` Broadcasts the value of a node to one of its sibling. The value will be replicated if the sibling is repeated. This is similar to TensorFlow and Numpy's [broadcasting semantics](https://docs.scipy.org/doc/numpy-1.13.0/user/basics.broadcasting.html). ``` #@title { display-mode: "form" } #@test {"skip": true} _display(''' digraph { root -> session [label="*"]; session -> session_id [label="?"]; session -> event [label="*"]; } ''') ``` `broadcast(source_path="session_id", sibling_field="event", new_field_name="session_session_id")` ``` #@title { display-mode: "form" } #@test {"skip": true} _display(''' digraph { session_session_id [color="red"]; root -> session [label="*"]; session -> session_id [label="?"]; session -> event [label="*"]; event -> session_session_id [label="?"]; } ''') query = (_create_query_from_text_sessions([ """ session_id: 8 event { } event { } """]) .broadcast(source_path="session_id", sibling_field="event", new_field_name="session_session_id") .project(["event.session_session_id"])) prensor = s2t.calculate_prensors([query]) _pretty_print(prensor) ``` The projected structure is like: ``` { event { session_session_id: 8 } event { session_session_id: 8 } } ``` ### `promote_and_broadcast` The query accepts multiple source fields and a destination field. For each source field, it first promotes it to the least common ancestor with the destination field (if necessary), then broadcasts it to the destination field (if necessary). Usually for the purpose of machine learning, this gives a reasonable flattened representation of nested structures. 
``` promote_and_broadcast( path_dictionary={ 'session_info_duration_sec': 'session_info.session_duration_sec'}, dest_path_parent='event.action') ``` is equivalent to: ``` promote(source_path='session_info.session_duration_sec', new_field_name='anonymous_field1') broadcast(source_path='anonymous_field1', sibling_field='event.action', new_field_name='session_info_duration_sec') ``` ### `map_field_values` Creates a new node that is a sibling of a leaf node. The values of the new node are results of applying the given function to the values of the source node. Note that the function provided takes 1-D tensor that contains all the values of the source node as input and should also output a 1-D tensor of the same size, and it should build TF ops. ``` query = (_create_query_from_text_sessions([ """ session_id: 8 """, """ session_id: 9 """]) .map_field_values("session_id", lambda x: tf.add(x, 1), dtype=tf.int64, new_field_name="session_id_plus_one") .project(["session_id_plus_one"])) prensor = s2t.calculate_prensors([query]) _pretty_print(prensor) ``` ### `reroot` Makes the given node the new root of the struct2tensorTree. This has two effects: * restricts the scope of the struct2tensorTree + The field paths in all the following queries are relative to the new root + There's no way to refer to nodes that are outside the subtree rooted at the new root. * changes the batch dimension. ``` #@title { display-mode: "form" } #@test {"skip": true} _display(''' digraph { root -> session [label="*"]; session -> session_id [label="?"]; session -> event [label="*"]; event -> event_id [label="?"]; } ''') ``` `reroot("event")` ``` #@title { display-mode: "form" } #@test {"skip": true} _display(''' digraph { root -> event [label="*"]; event -> event_id [label="?"]; } ''') #@title { display-mode: "form" } text_protos = [""" session_id: 1 event { event_id: "a" } event { event_id: "b" } """, """ session_id: 2 """, """ session_id: 3 event { event_id: "c" } """ ] print("""Assume the following Sessions: """) print([text_format.Merge(p, s2t.test.test_pb2.Session()) for p in text_protos]) print("\n") reroot_example_query = _create_query_from_text_sessions(text_protos) print("""project(["event.event_id"]) before reroot() (the batch dimension is the index to sessions):""") _pretty_print(s2t.calculate_prensors([reroot_example_query.project(["event.event_id"])])) print("\n") print("""project(["event_id"]) after reroot() (the batch dimension becomes the index to events):""") _pretty_print(s2t.calculate_prensors([reroot_example_query.reroot("event").project(["event_id"])])) ``` ## Apache Parquet Support `struct2tensor` offers an [Apache Parquet](https://parquet.apache.org/) [tf.DataSet](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) that allows reading from a Parquet file and apply queries to manipulate the structure of the data. Because of the powerful struct2tensor library, the dataset will only read the Parquet columns that are required. This reduces I/O cost if we only need a select few columns. ### Preparation Please run the code cell at [Some Pretty Printing and Imports](#scrollTo=dIxHSM3VQfUu&line=1&uniqifier=1) to ensure that all required modules are imported, and that pretty print works properly. #### Prepare the input data ``` # Download our sample data file from the struct2tensor repository. The desciption of the data is below. 
#@test {"skip": true} !curl -o dremel_example.parquet 'https://raw.githubusercontent.com/google/struct2tensor/master/struct2tensor/testdata/parquet_testdata/dremel_example.parquet' ``` ### Example We will use a sample Parquet data file (*dremel_example.parquet*), which contains data based on the example used in this paper: https://storage.googleapis.com/pub-tools-public-publication-data/pdf/36632.pdf The file *dremel_example.parquet* has the following schema: ``` message Document { required int64 DocId; optional group Links { repeated int64 Backward; repeated int64 Forward; } repeated group Name { repeated group Language { required string Code; optional string Country; } optional string Url; }} ``` and contains the following data: ``` Document DocId: 10 Links Forward: 20 Forward: 40 Forward: 60 Name Language Code: 'en-us' Country: 'us' Language Code: 'en' Url: 'http://A' Name Url: 'http://B' Name Language Code: 'en-gb' Country: 'gb' Document DocId: 20 Links Backward: 10 Backward: 30 Forward: 80 Name Url: 'http://C' ``` In this example, we will promote and broadcast the field `Links.Forward` and project it. batch_size will be the number of records (`Document`) per prensor. This works with optional and repeated fields, and will be able to batch the entire record. Feel free to try `batch_size = 2` in the below code. (Note this parquet file only has 2 records (`Document`) total). ``` #@test {"skip": true} from struct2tensor import expression_impl filenames = ["dremel_example.parquet"] batch_size = 1 exp = s2t.expression_impl.parquet.create_expression_from_parquet_file(filenames) new_exp = exp.promote_and_broadcast({"new_field": "Links.Forward"}, "Name") proj_exp = new_exp.project(["Name.new_field"]) proj_exp_needed = exp.project(["Name.Url"]) # Please note that currently, proj_exp_needed needs to be passed into calculate. # This is due to the way data is stored in parquet (values and repetition & # definition levels). To construct the node for "Name", we need to read the # values of a column containing "Name". pqds = s2t.expression_impl.parquet.calculate_parquet_values([proj_exp, proj_exp_needed], exp, filenames, batch_size) for prensors in pqds: new_field_prensor = prensors[0] print("============================") print("Schema of new_field prensor: ") print(new_field_prensor) print("\nSparse tensor representation: ") pretty.pprint(new_field_prensor) print("============================") ```
github_jupyter
# e.g. a web session message Session{ message SessionInfo { string session_feature = 1; double session_duration_sec = 2; } SessionInfo session_info = 1; message Event { string query = 1; message Action { int number_of_views = 1; } repeated Action action = 2; } repeated Event event = 2; } #@test {"skip": true} # install struct2tensor !pip install struct2tensor # graphviz for pretty output !pip install graphviz import base64 import numpy as np import pprint import os import tensorflow from graphviz import Source import tensorflow as tf from IPython.display import Image from IPython.lib import pretty import struct2tensor as s2t from struct2tensor.test import test_pb2 from google.protobuf import text_format def _display(graph): """Renders a graphviz digraph.""" s = Source(graph) s.format='svg' return s def _create_query_from_text_sessions(text_sessions): """Creates a struct2tensor query from a list of pbtxt of struct2tensor.test.Session.""" sessions = tf.constant([ text_format.Merge( text_session, test_pb2.Session() ).SerializeToString() for text_session in text_sessions ]) return s2t.create_expression_from_proto( sessions, test_pb2.Session.DESCRIPTOR) def _prensor_pretty_printer(prensor, p, cycle): """Pretty printing function for struct2tensor.prensor.Prensor""" pretty.pprint(prensor.get_sparse_tensors()) def _sp_pretty_printer(sp, p, cycle): """Pretty printing function for SparseTensor.""" del cycle p.begin_group(4, "SparseTensor(") p.text("values={}, ".format(sp.values.numpy().tolist())) p.text("dense_shape={}, ".format(sp.dense_shape.numpy().tolist())) p.break_() p.text("indices={}".format(sp.indices.numpy().tolist())) p.end_group(4, ")") pretty.for_type(tf.SparseTensor, _sp_pretty_printer) pretty.for_type(s2t.Prensor, _prensor_pretty_printer) _pretty_print = pretty.pprint print("type-specific pretty printing ready to go") @tf.function(input_signature=[tf.TensorSpec(shape=(None), dtype=tf.string)], autograph=False) def parse_session(serialized_sessions): """A TF function parsing a batch of serialized Session protos into tensors. It is a TF graph that takes one 1-D tensor as input, and outputs a Dict[str, tf.SparseTensor] """ query = s2t.create_expression_from_proto( serialized_sessions, test_pb2.Session.DESCRIPTOR) # Move all the fields of our interest to under "event". query = query.promote_and_broadcast({ "session_feature": "session_info.session_feature", "action_number_of_views": "event.action.number_of_views" }, "event") # Specify "event" to be examples. query = query.reroot("event") # Extract all the fields of our interest. 
projection = query.project(["session_feature", "query", "action_number_of_views"]) prensors = s2t.calculate_prensors([projection]) output_sparse_tensors = {} for prensor in prensors: path_to_tensor = prensor.get_sparse_tensors() output_sparse_tensors.update({str(k): v for k, v in path_to_tensor.items()}) return output_sparse_tensors print("Defined the workhorse func: (structured data at rest) -> (tensors)") serialized_sessions = tf.constant([ text_format.Merge( """ session_info { session_duration_sec: 1.0 session_feature: "foo" } event { query: "Hello" action { number_of_views: 1 } action { } } event { query: "world" action { number_of_views: 2 } action { number_of_views: 3 } } """, test_pb2.Session() ).SerializeToString() ]) _pretty_print(parse_session(serialized_sessions)) #@title { display-mode: "form" } #@test {"skip": true} _display(""" digraph { root -> session [label="*"]; session -> event [label="*"]; session -> session_id [label="?"]; event -> action [label="*"]; event -> query_token [label="*"] action -> number_of_views [label="?"]; } """) broadcast( source_path="session.session_id", sibling="event", new_field_name="session_session_id") #@title { display-mode: "form" } #@test {"skip": true} _display(""" digraph { session_session_id [color="red"]; root -> session [label="*"]; session -> event [label="*"]; session -> session_id [label="?"]; event -> action [label="*"]; event -> session_session_id [label="?"]; event -> query_token [label="*"]; action -> number_of_views [label="?"]; } """) { event { action { number_of_views: 1} action { number_of_views: 2} action {} } event {} }, { event { action { number_of_views: 3} } } [ # the outer list has two elements b/c there are two Session protos. [ # the first proto has two events [[1],[2],[]], # 3 actions, the last one does not have a number_of_views. [], # the second event does not have action ], [ # the second proto has one event [[3]], ], ] tf.SparseTensor( dense_shape=[2, 2, 3, 1], # each is the maximum length of lists at the same nesting level. values = [1, 2, 3], indices = [[0, 0, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0]] ) query = _create_query_from_text_sessions([''' event { action { number_of_views: 1} action { number_of_views: 2} action {} } event {} ''', ''' event { action { number_of_views: 3} } '''] ).project(["event.action.number_of_views"]) prensor = s2t.calculate_prensors([query]) pretty.pprint(prensor) #@title { display-mode: "form" } #@test {"skip": true} _display(''' digraph { root -> session [label="*"]; session -> event [label="*"]; event -> query_token [label="*"]; } ''') #@title { display-mode: "form" } #@test {"skip": true} _display(''' digraph { event_query_token [color="red"]; root -> session [label="*"]; session -> event [label="*"]; session -> event_query_token [label="*"]; event -> query_token [label="*"]; } ''') query = (_create_query_from_text_sessions([ """ event { query_token: "abc" query_token: "def" } event { query_token: "ghi" } """]) .promote(source_path="event.query_token", new_field_name="event_query_token") .project(["event_query_token"])) prensor = s2t.calculate_prensors([query]) _pretty_print(prensor) { # this is under Session. 
event_query_token: "abc" event_query_token: "def" event_query_token: "ghi" } #@title { display-mode: "form" } #@test {"skip": true} _display(''' digraph { root -> session [label="*"]; session -> session_id [label="?"]; session -> event [label="*"]; } ''') #@title { display-mode: "form" } #@test {"skip": true} _display(''' digraph { session_session_id [color="red"]; root -> session [label="*"]; session -> session_id [label="?"]; session -> event [label="*"]; event -> session_session_id [label="?"]; } ''') query = (_create_query_from_text_sessions([ """ session_id: 8 event { } event { } """]) .broadcast(source_path="session_id", sibling_field="event", new_field_name="session_session_id") .project(["event.session_session_id"])) prensor = s2t.calculate_prensors([query]) _pretty_print(prensor) { event { session_session_id: 8 } event { session_session_id: 8 } } promote_and_broadcast( path_dictionary={ 'session_info_duration_sec': 'session_info.session_duration_sec'}, dest_path_parent='event.action') promote(source_path='session_info.session_duration_sec', new_field_name='anonymous_field1') broadcast(source_path='anonymous_field1', sibling_field='event.action', new_field_name='session_info_duration_sec') query = (_create_query_from_text_sessions([ """ session_id: 8 """, """ session_id: 9 """]) .map_field_values("session_id", lambda x: tf.add(x, 1), dtype=tf.int64, new_field_name="session_id_plus_one") .project(["session_id_plus_one"])) prensor = s2t.calculate_prensors([query]) _pretty_print(prensor) #@title { display-mode: "form" } #@test {"skip": true} _display(''' digraph { root -> session [label="*"]; session -> session_id [label="?"]; session -> event [label="*"]; event -> event_id [label="?"]; } ''') #@title { display-mode: "form" } #@test {"skip": true} _display(''' digraph { root -> event [label="*"]; event -> event_id [label="?"]; } ''') #@title { display-mode: "form" } text_protos = [""" session_id: 1 event { event_id: "a" } event { event_id: "b" } """, """ session_id: 2 """, """ session_id: 3 event { event_id: "c" } """ ] print("""Assume the following Sessions: """) print([text_format.Merge(p, s2t.test.test_pb2.Session()) for p in text_protos]) print("\n") reroot_example_query = _create_query_from_text_sessions(text_protos) print("""project(["event.event_id"]) before reroot() (the batch dimension is the index to sessions):""") _pretty_print(s2t.calculate_prensors([reroot_example_query.project(["event.event_id"])])) print("\n") print("""project(["event_id"]) after reroot() (the batch dimension becomes the index to events):""") _pretty_print(s2t.calculate_prensors([reroot_example_query.reroot("event").project(["event_id"])])) # Download our sample data file from the struct2tensor repository. The desciption of the data is below. #@test {"skip": true} !curl -o dremel_example.parquet 'https://raw.githubusercontent.com/google/struct2tensor/master/struct2tensor/testdata/parquet_testdata/dremel_example.parquet' message Document { required int64 DocId; optional group Links { repeated int64 Backward; repeated int64 Forward; } repeated group Name { repeated group Language { required string Code; optional string Country; } optional string Url; }} ``` and contains the following data: In this example, we will promote and broadcast the field `Links.Forward` and project it. batch_size will be the number of records (`Document`) per prensor. This works with optional and repeated fields, and will be able to batch the entire record. Feel free to try `batch_size = 2` in the below code. 
(Note this parquet file only has 2 records (`Document`) total).
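Whether the prensors come from serialized `Session` protos or from this Parquet file, the projected leaf values ultimately surface as `tf.SparseTensor`s like the `number_of_views` example shown earlier. As a small illustration, here is a sketch in plain TensorFlow (assuming TF 2.x eager execution; no struct2tensor API is needed for it) that rebuilds that tensor and densifies it so the padding becomes explicit:

```python
import tensorflow as tf

# The ragged action.number_of_views values from the two example Sessions,
# encoded exactly as shown in the prensor walkthrough above.
st = tf.sparse.SparseTensor(
    dense_shape=[2, 2, 3, 1],  # max list length at each nesting level
    values=[1, 2, 3],
    indices=[[0, 0, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0]],
)

# Densifying fills every missing number_of_views with the default value 0.
dense = tf.sparse.to_dense(st, default_value=0)
print(dense.shape)  # (2, 2, 3, 1)
print(dense[0, 0])  # first session, first event -> [[1], [2], [0]]
```

The zeros in the dense view are padding only; the sparse `indices` are what record which actions actually carried a `number_of_views`.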
# Introduction --- #### About me #### About you #### About the course #### What to expect from the course #### About prerequisites #### Literature and materials #### GitHub ## Course Outline --- #### Data science #### Data analytics #### Machine learning #### Deep learning #### Why Python for data analysis? #### Important libraries #### Introducing the syllabus #### Introducing the working environment ## Data Science --- Data science is an interdisciplinary scientific field that uses various scientific methods, processes and algorithms to extract knowledge from structured and unstructured data. **(Wikipedia definition)** In other words: **data science is the science of data, through which we can discover information hidden in the data.** Being an interdisciplinary field, data science uses techniques and algorithms from many other scientific disciplines. ![alt text](images/convey_diagram.jpg "Title") [Drew Conway Diagram](https://en.wikipedia.org/wiki/Drew_Conway) > The ability to take data—to be able to understand it, to process it, to extract value from it, to visualize it, to communicate it—that’s going to be a hugely important skill in the next decades[...] [Hal Varian, chief economist at Google and UC Berkeley professor of information sciences, business, and economics.](https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/hal-varian-on-how-the-web-challenges-managers#) ![alt text](images/data_science_lifecycle.png "Title") [source](https://ischoolonline.berkeley.edu/data-science/what-is-data-science/#fn3b) ## Data Analytics --- **Data analytics** is the process of examining and diagnosing data in order to extract useful information from it and make decisions based on that information. More concretely, it answers the following two questions: * **What happened?** * **Why did it happen?** ![alt text](images/data.png "Title") ## Data Science VS Data Analytics --- ![alt text](images/data_science_vs_data_analytics.png "Title") **A data scientist creates questions, while a data analyst finds answers to the existing set of questions.** <blockquote class="twitter-tweet"><p lang="en" dir="ltr">Data Scientist (n.): Person who is better at statistics than any software engineer and better at software engineering than any statistician.</p>&mdash; Josh Wills (@josh_wills) <a href="https://twitter.com/josh_wills/status/198093512149958656?ref_src=twsrc%5Etfw">May 3, 2012</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> ## The Whole World --- ![alt text](images/AI-vs-ML-vs-Deep-Learning.png "Title") ## Machine Learning --- **The science of algorithms that can improve automatically through learning and experience.** * In data science, an algorithm is a sequence of statistical processing steps. * In machine learning, an algorithm is trainable: it can discover patterns in large volumes of data and, after learning, make decisions on new data. ## Machine Learning Methods --- * **Supervised Learning** - Supervised machine learning trains itself on a labeled data set. Supervised machine learning requires less training data than other machine learning methods and makes training easier because the results of the model can be compared to actual labeled results. 
But properly labeled data is expensive to prepare, and there is the danger of over-fitting. * **Unsupervised Learning** - Unsupervised machine learning ingests unlabeled data, lots and lots of it, and uses algorithms to extract meaningful features needed to label, sort, and classify the data in real time, without human intervention. Unsupervised learning is less about automating decisions and predictions, and more about identifying patterns and relationships in data that humans would miss. * **Semi-Supervised Learning** - Semi-supervised learning offers a happy medium between supervised and unsupervised learning. During training, it uses a smaller labeled data set to guide classification and feature extraction from a larger, unlabeled data set. Semi-supervised learning can solve the problem of not having enough labeled data to train a supervised learning algorithm. * **Reinforcement Learning** - Reinforcement machine learning is a behavioral machine learning model that is similar to supervised learning, but the algorithm isn't trained using sample data. This model learns as it goes by using trial and error. A sequence of successful outcomes is reinforced to develop the best recommendation or policy for a given problem. ## Deep Learning --- * **Deep learning** is built on artificial neural networks whose design was inspired by the human brain, so deep learning algorithms try to learn the way a person, and the human brain, learns. * Consequently, for deep learning to be effective and efficient, it needs a genuinely large amount of data and the computing resources to match. * All deep learning is machine learning, but not all machine learning is deep learning > Everyone talks about why we should use deep learning, but nobody mentions **why we should not use it** ## Why Should We Not Use Deep Learning? --- * Most companies do not have enough data * It requires large computing resources, which many companies also lack * In most cases it is simply unnecessary ![alt text](images/why_dl.png "Title") ## Why Python? --- * **Simple** - Python is a fairly easy language for beginners because it emphasizes code readability. That does not mean, however, that its simplicity is limiting. * **Flexible** - Python is a general-purpose programming language, which lets us build both simple programs and large, complex ones. * **Growing popularity** - over the last 10 years it has become the de-facto language for data analysis * **Libraries** - a rich collection of libraries * **Community** - a large and active community * **Job market** - demand has grown severalfold ![alt text](images/comparison.png "Title") ## Important Libraries --- #### [NumPy](https://numpy.org/) - Numeric Python (1995, 2006) #### [Pandas](https://pandas.pydata.org/) - Data Analysis (2008) #### [SciPy](https://scipy.org/scipylib/) - Scientific Python (2001) #### [Scikit-Learn](https://scikit-learn.org/stable/) - Machine Learning (2007) #### [statsmodels](https://www.statsmodels.org/stable/index.html) - Statistical Modeling (2012) #### [matplotlib](https://matplotlib.org/) - Static Data Visualization (2003) #### [Plotly](https://plotly.com/python/) - Interactive Data Visualization (2012) ## Syllabus ## Working Environment --- #### [miniconda](https://docs.conda.io/en/latest/miniconda.html) - For Python and Working Environment #### [Project Jupyter](https://jupyter.org/) - Interactive Computing #### [GitHub](https://github.com/) - For lectures and other materials
# Data exploration and preprocessing - In this notebook, we load in the MRI scans and their segmentations, build a Dataset object for the train and test set. - Then we check some basic stats of the datasets and visualise a few scans. - Finally, we carry out our preprocessing steps and save the train and test datasets. ``` %load_ext autoreload %autoreload 2 import os import copy import logging from glob import glob import warnings warnings.filterwarnings('ignore') import pandas as pd import numpy as np import matplotlib.pyplot as plt import sys sys.path.append('../src/') from data_utils import Dataset # get TF logger - set it to info for more tracking process log = logging.getLogger('pytorch') log.setLevel(logging.WARNING) ``` ## Create train and test datasets Find all the raw data files and build train and test Dataset objects from the patients' scans. ``` # get file paths to DICOM MRI scans and segmentation images # note that leaderboard samples can be used for training train_scan_files = glob('../data/raw/train/**/*.dcm', recursive=True) train_scan_files += glob('../data/raw/leaderboard/**/*.dcm', recursive=True) test_scan_files = glob('../data/raw/test/**/*.dcm', recursive=True) # ProstateDx-01-0006_corrected_label.nrrd was renamed to ProstateDx-01-0006.nrrd # In the leaderboard and test folders the _truth postfix have been removed from all nrrd files train_seg_files = glob('../data/raw/train/**/*.nrrd', recursive=True) train_seg_files += glob('../data/raw/leaderboard/**/*.nrrd', recursive=True) test_seg_files = glob('../data/raw/test/**/*.nrrd', recursive=True) # build datasets from file paths train_dataset = Dataset(scan_files=train_scan_files, seg_files=train_seg_files) test_dataset = Dataset(scan_files=test_scan_files, seg_files=test_seg_files) ``` ## Check basic stats Check number of patients in train and test datasets. ``` train_n = len(train_dataset.patient_ids) test_n = len(test_dataset.patient_ids) train_scan_nums = [p.scans.shape[0] for p in train_dataset.patients.values()] test_scan_nums = [p.scans.shape[0] for p in test_dataset.patients.values()] print('Number of patients in train dataset: %d' % train_n) print('Number of patients in test dataset: %d' % test_n) print('Number of scans in train dataset: %d' % sum(train_scan_nums)) print('Number of scans in test dataset: %d' % sum(test_scan_nums)) ``` Check distribution of number of scans in train and test datasets. - They both seem bi-modal with roughly the same peaks. ``` fig, ax = plt.subplots(1, 2) ax[0].set_title('# scans in train dataset') ax[0].hist(train_scan_nums, bins=10) ax[1].set_title('# scans in test dataset') ax[1].hist(test_scan_nums, bins=10) ``` Make sure that none of the patients have scans from mixed manufacturers and with mixed slice thickness. ``` # extract manufacturer and thickness sets from each patient train_manufacturers = [p.manufacturers for p in train_dataset.patients.values()] train_thicknesses = [p.thicknesses for p in train_dataset.patients.values()] test_manufacturers = [p.manufacturers for p in test_dataset.patients.values()] test_thicknesses = [p.thicknesses for p in test_dataset.patients.values()] # check if any patient has slices from two different manufacturers or thicknesses - NO for m in train_manufacturers + test_manufacturers: assert len(m) == 1 for t in train_thicknesses + test_thicknesses: assert len(t) == 1 ``` ### Create summary table Collate all information into a pandas DataFrame from the datasets so we can analyse it easily later. 
``` # collapse all list of sets to simple list train_manufacturers = [list(i)[0] for i in train_manufacturers] train_thicknesses = [list(i)[0] for i in train_thicknesses] test_manufacturers = [list(i)[0] for i in test_manufacturers] test_thicknesses = [list(i)[0] for i in test_thicknesses] # extract scan width, height and max value train_widths = [p.scans.shape[1] for p in train_dataset.patients.values()] train_heights = [p.scans.shape[2] for p in train_dataset.patients.values()] train_max = [p.scans.max() for p in train_dataset.patients.values()] test_widths = [p.scans.shape[1] for p in test_dataset.patients.values()] test_heights = [p.scans.shape[2] for p in test_dataset.patients.values()] test_max = [p.scans.max() for p in test_dataset.patients.values()] # calculate contingency table from them df_summary = pd.DataFrame( list( zip( train_dataset.patient_ids + test_dataset.patient_ids, train_manufacturers + test_manufacturers, train_thicknesses + test_thicknesses, train_widths + test_widths, train_heights + test_heights, train_max + test_max, train_scan_nums + test_scan_nums, ['train'] * train_n + ['test'] * test_n ) ), columns = ['patient_id', 'manufacturer', 'thickness', 'width', 'heigth', 'max_val', 'scan_num', 'dataset'] ) ``` - Looks like the test and train datasets have been properly stratified with respect to manufacturer, i.e. half of the sample are from Siemens and half of them are from Philips. - However, the test dataset doesn't have slices of 4mm thickness. ``` df_summary.drop( ['width', 'heigth', 'scan_num', 'max_val'], axis=1 ).groupby( ['dataset', 'manufacturer', 'thickness'] ).count() ``` - Philips is the higher resolution machine, all scans are rectangular. ``` df_summary.drop( ['thickness', 'max_val', 'scan_num'], axis=1 ).groupby( ['dataset', 'manufacturer', 'width', 'heigth'] ).count() ``` - There's a large variation in the scans' maximum values and there are some clear outliers too (Siemens scan with max=65283, coming from the 18th scan of patient Prostate3T-01-0018) ``` df_summary.drop( ['thickness', 'patient_id', 'scan_num'], axis=1 ).groupby( ['dataset', 'manufacturer', 'width', 'heigth'] ).agg(['min', 'max', 'mean']) ``` - Here's the reason for those bi-modal histograms above, Philips mave higher number of scans on average. ``` df_summary.drop( ['thickness', 'max_val', 'width', 'heigth', 'patient_id'], axis=1 ).groupby( ['dataset', 'manufacturer'] ).agg(['min', 'max', 'median']) ``` ## Visualise scans Each patient's scans can be viewed as an animation or as a tiled figure. Let's have a look at some of these. **Note** you'll need to re-execute the cell to watch the animation. ``` # animation patient_id = train_dataset.patient_ids[23] train_dataset.patients[patient_id].patient_anim_scans() # tiled figure patient_id = train_dataset.patient_ids[2] train_dataset.patients[patient_id].patient_tile_scans() ``` ## Calculate class frequency The 3 classes are imbalanced, calculate their frequency in the data. This will inform our weighting scheme that we use with the loss function at training time. 
``` class_freq = np.zeros(3) for i in range(len(train_dataset.patients.keys())): patient_id = train_dataset.patient_ids[i] seg = train_dataset.patients[patient_id].seg class0 = np.count_nonzero(seg == 0) class1 = np.count_nonzero(seg == 1) class2 = np.count_nonzero(seg == 2) class_freq += np.array([class0, class1, class2]) class_freq = class_freq / class_freq.sum() inv_class_freq = 1/class_freq norm_inv_class_freq = inv_class_freq / inv_class_freq.sum() norm_inv_class_freq ``` ## Preprocess data As we've seen from the summary stats above, the scans are non-normalised and span a wide range of maximal values, resolution and number of scans. The `preprocess_dataset` method: - normalises the scans to be between zero and one - resizes (and downsamples) each scan and target segmentation image to the same width and height (128, 128) - since 3D U-Net is fully convolutional, this isn't strictly necessary, but it reduces the required memory at training time - caps the depth of scans, i.e. the number of scans across the patients: <32 - this ensures that with a 4 layer deep 3D U-Net we'll get matching dimensions when we concatenate the shortcuts in the network - we discard extra scans (i.e. more than 32) and patients with fewer will be padded with zeros by TensorFlow ### First, save preprocessed but non-resized test dataset The performance evaluation has to be done on the original (i.e. non-rescaled) images. ``` test_dataset_non_resized = copy.deepcopy(test_dataset) test_dataset_non_resized.preprocess_dataset(resize=False, width=_, height=_, max_scans=32) test_dataset_non_resized.save_dataset('../data/processed/test_dataset.pckl') ``` Now let's resize and preprocess both train and test. ``` train_dataset.preprocess_dataset(resize=True, width=128, height=128, max_scans=32) test_dataset.preprocess_dataset(resize=True, width=128, height=128, max_scans=32) ``` Let's check if the scans and targets still look reasonable on the previous tiled example. ``` # note the target is now a one-hot tensor, so we only show the 2nd class patient_id = train_dataset.patient_ids[2] train_dataset.patients[patient_id].patient_tile_scans() ``` Finally, let's check that the preprocessing worked for all images. Specifically we should find that: - the number of scans is 32 or less for all patients - the resolution is 128 by 128 - the maximum value of the scans is less than or equal to 1 - the corresponding target tensor has an extra dimension, corresponding to the one hot encoding of the 3 classes ``` datasets = [train_dataset, test_dataset] for dataset in datasets: for i in range(len(dataset.patients.keys())): patient_id = dataset.patient_ids[i] scans = dataset.patients[patient_id].scans seg = dataset.patients[patient_id].seg assert(scans.shape[1:] == (128, 128)) assert(scans.shape[0] <= 32) assert(scans.max() <= 1) assert(seg.shape[1:3] == (128, 128)) assert(seg.shape[0] <= 32) assert(seg.shape[3] == 3) ``` ## Save resized datasets We save them as pickled objects so we can use them later for training and model evaluation. ``` train_dataset.save_dataset('../data/processed/train_dataset_resized.pckl') test_dataset.save_dataset('../data/processed/test_dataset_resized.pckl') ```
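Nothing in this notebook applies the class weights yet; only `norm_inv_class_freq` is computed. As an illustration of how those weights could enter the loss at training time, here is a minimal sketch (plain NumPy, with made-up weight and prediction values; the helper name `weighted_cross_entropy` is hypothetical and not part of the project code):

```python
import numpy as np

# Stand-in for the normalised inverse class frequencies computed above
# (norm_inv_class_freq); the real values come from the training data.
class_weights = np.array([0.05, 0.45, 0.50])

def weighted_cross_entropy(y_true_onehot, y_pred_prob, weights, eps=1e-7):
    """Mean cross-entropy where each voxel's contribution is scaled by the
    weight of its true class. Both inputs have shape (..., n_classes)."""
    y_pred_prob = np.clip(y_pred_prob, eps, 1.0)
    per_voxel = -np.sum(weights * y_true_onehot * np.log(y_pred_prob), axis=-1)
    return per_voxel.mean()

# Two example voxels, three classes, one-hot targets.
y_true = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
y_pred = np.array([[0.8, 0.1, 0.1], [0.2, 0.2, 0.6]])
print(weighted_cross_entropy(y_true, y_pred, class_weights))
```

Rare classes get larger weights, so mistakes on them cost more than mistakes on the abundant background voxels.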
# Data passing tutorial Data passing is the most important aspect of Pipelines. In Kubeflow Pipelines, pipeline authors compose pipelines by creating component instances (tasks) and connecting them together. Components have inputs and outputs. They can consume and produce arbitrary data. Pipeline authors establish connections between component tasks by connecting their data inputs and outputs - by passing the output of one task as an argument to another task's input. The system takes care of storing the data produced by components and later passing that data to other components for consumption as instructed by the pipeline. This tutorial shows how to create Python components that produce, consume and transform data. It shows how to create data passing pipelines by instantiating components and connecting them together. ``` # Install Kubeflow Pipelines SDK !PIP_DISABLE_PIP_VERSION_CHECK=1 pip3 install 'kfp>=0.1.31.1' --quiet from typing import NamedTuple import kfp from kfp.components import InputPath, InputTextFile, InputBinaryFile, OutputPath, OutputTextFile, OutputBinaryFile from kfp.components import func_to_container_op ``` ## Small data Small data is data that you'd be comfortable passing as a program's command-line argument. Small data should not exceed a few kilobytes in size. Some examples of typical types of small data are: a number, a URL, a small string (e.g. a column name). Small lists, dictionaries and JSON structures are fine, but keep an eye on the size and consider switching to the file-based data passing methods that are more suitable for bigger data. All small data outputs will be at some point serialized to strings and all small data input values will be at some point deserialized from strings (passed as command-line arguments). There are built-in serializers and deserializers for several common types (e.g. `str`, `int`, `float`, `bool`, `list`, `dict`). All other types of data need to be serialized manually before returning the data. Make sure to properly specify type annotations, otherwise there will be no automatic deserialization and the component function will receive strings instead of deserialized objects. ### Consuming small data ``` @func_to_container_op def consume_one_argument(text: str): '''Print small text''' print(text) def constant_to_consumer_pipeline(): '''Pipeline that passes small constant string to consumer''' consume_task = consume_one_argument('Hello world') # Passing constant as argument to consumer kfp.Client().create_run_from_pipeline_func(constant_to_consumer_pipeline, arguments={}) def pipeline_parameter_to_consumer_pipeline(text: str): '''Pipeline that passes small pipeline parameter string to consumer''' consume_task = consume_one_argument(text) # Passing pipeline parameter as argument to consumer kfp.Client().create_run_from_pipeline_func( pipeline_parameter_to_consumer_pipeline, arguments={'text': 'Hello world'} ) ``` ### Producing small data ``` @func_to_container_op def produce_one_small_output() -> str: return 'Hello world' def task_output_to_consumer_pipeline(): '''Pipeline that passes small data from producer to consumer''' produce_task = produce_one_small_output() # Passing producer task output as argument to consumer consume_task1 = consume_one_argument(produce_task.output) # task.output only works for single-output components consume_task2 = consume_one_argument(produce_task.outputs['output']) # task.outputs[...] 
always works kfp.Client().create_run_from_pipeline_func(task_output_to_consumer_pipeline, arguments={}) ``` ### Producing and consuming multiple arguments ``` @func_to_container_op def produce_two_small_outputs() -> NamedTuple('Outputs', [('text', str), ('number', int)]): return ("data 1", 42) @func_to_container_op def consume_two_arguments(text: str, number: int): print('Text={}'.format(text)) print('Number={}'.format(str(number))) def producers_to_consumers_pipeline(text: str = "Hello world"): '''Pipeline that passes data from producer to consumer''' produce1_task = produce_one_small_output() produce2_task = produce_two_small_outputs() consume_task1 = consume_two_arguments(produce1_task.output, 42) consume_task2 = consume_two_arguments(text, produce2_task.outputs['number']) consume_task3 = consume_two_arguments(produce2_task.outputs['text'], produce2_task.outputs['number']) kfp.Client().create_run_from_pipeline_func(producers_to_consumers_pipeline, arguments={}) ``` ### Consuming and producing data at the same time ``` @func_to_container_op def get_item_from_list(list: list, index: int) -> str: return list[index] @func_to_container_op def truncate_text(text: str, max_length: int) -> str: return text[0:max_length] def processing_pipeline(text: str = "Hello world"): truncate_task = truncate_text(text, max_length=5) get_item_task = get_item_from_list(list=[3, 1, truncate_task.output, 1, 5, 9, 2, 6, 7], index=2) kfp.Client().create_run_from_pipeline_func(processing_pipeline, arguments={}) ``` ## Bigger data (files) Bigger data should be read from files and written to files. The paths for the input and output files are chosen by the system and are passed into the function (as strings). Use the `InputPath` parameter annotation to tell the system that the function wants to consume the corresponding input data as a file. The system will download the data, write it to a local file and then pass the **path** of that file to the function. Use the `OutputPath` parameter annotation to tell the system that the function wants to produce the corresponding output data as a file. The system will prepare and pass the **path** of a file where the function should write the output data. After the function exits, the system will upload the data to the storage system so that it can be passed to downstream components. You can specify the type of the consumed/produced data by specifying the type argument to `InputPath` and `OutputPath`. The type can be a python type or an arbitrary type name string. `OutputPath('TFModel')` means that the function states that the data it has written to a file has type 'TFModel'. `InputPath('TFModel')` means that the function states that it expect the data it reads from a file to have type 'TFModel'. When the pipeline author connects inputs to outputs the system checks whether the types match. Note on input/output names: When the function is converted to component, the input and output names generally follow the parameter names, but the "\_path" and "\_file" suffixes are stripped from file/path inputs and outputs. E.g. the `number_file_path: InputPath(int)` parameter becomes the `number: int` input. This makes the argument passing look more natural: `number=42` instead of `number_file_path=42`. 
### Writing and reading bigger data ``` # Writing bigger data @func_to_container_op def repeat_line(line: str, output_text_path: OutputPath(str), count: int = 10): '''Repeat the line specified number of times''' with open(output_text_path, 'w') as writer: for i in range(count): writer.write(line + '\n') # Reading bigger data @func_to_container_op def print_text(text_path: InputPath(str)): '''Print text''' with open(text_path, 'r') as reader: for line in reader: print(line, end = '') def print_repeating_lines_pipeline(): print_text(repeat_line(line='Hello', count=5).output) # Don't forget .output ! kfp.Client().create_run_from_pipeline_func(print_repeating_lines_pipeline, arguments={}) ``` ### Processing bigger data ``` @func_to_container_op def split_text_lines(source_path: InputPath(str), odd_lines_path: OutputPath(str), even_lines_path: OutputPath(str)): with open(source_path, 'r') as reader: with open(odd_lines_path, 'w') as odd_writer: with open(even_lines_path, 'w') as even_writer: while True: line = reader.readline() print(line) if line == "": break odd_writer.write(line) line = reader.readline() print(line) if line == "": break even_writer.write(line) def text_splitting_pipeline(): text = '\n'.join(['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'ten']) split_text_task = split_text_lines(text) print_text(split_text_task.outputs['odd_lines']) print_text(split_text_task.outputs['even_lines']) kfp.Client().create_run_from_pipeline_func(text_splitting_pipeline, arguments={}) ``` ### Processing bigger data with pre-opened files ``` @func_to_container_op def split_text_lines2(source_file: InputTextFile(str), odd_lines_file: OutputTextFile(str), even_lines_file: OutputTextFile(str)): while True: line = source_file.readline() print(line) if line == "": break odd_lines_file.write(line) line = source_file.readline() print(line) if line == "": break even_lines_file.write(line) def text_splitting_pipeline2(): text = '\n'.join(['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'ten']) split_text_task = split_text_lines2(text) print_text(split_text_task.outputs['odd_lines']).set_display_name('Odd lines') print_text(split_text_task.outputs['even_lines']).set_display_name('Even lines') kfp.Client().create_run_from_pipeline_func(text_splitting_pipeline2, arguments={}) ```
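One practical note not covered above: every example here submits runs directly with `kfp.Client().create_run_from_pipeline_func`. A pipeline function can usually also be compiled into a standalone package and uploaded through the Kubeflow Pipelines UI instead. The exact compiler behaviour depends on the kfp SDK version, so treat the following as a sketch to verify against your installation rather than as part of the original tutorial:

```python
import kfp

# Compile one of the pipelines defined above into a package file that can
# then be uploaded via the Kubeflow Pipelines UI (or kfp.Client().upload_pipeline).
kfp.compiler.Compiler().compile(
    pipeline_func=text_splitting_pipeline2,
    package_path='text_splitting_pipeline2.zip',
)
```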
# Data The original dataset is made from training (15000 samples), dev (6431) and testing (6500) files. Only the 400 *i-vector* features were used. A process to whiten the entire dataset was applied before using the feature set $x_i$, and only the dev set was used to train the whitening parameters (see code supplied with the data by the competition organizers). Each sample is either unlabeled (all dev and testing samples), in which case we label it $y_i=0$, or belongs to one of $n=50$ different categories $y_i \in \{1 \ldots n \}$ (all training samples). ## Cross validation In order to select hyperparameters of the model, a modified [cross validation dataset was built](./fuel.ipynb) from the training dataset. In the modified dataset, the $n$ known original training labels are considered to be the entire label space of the modified dataset and from them a subset is assumed to be known. The other labels are assumed to be out-of-set for the purpose of the modified dataset. The number of assumed known labels is such that the ratio of known and unknown labels in the modified set is: $Q = \lfloor \left( 1 - P_{\text{oos}} \right) * n \rfloor = 38 \quad P_{\text{oos}} = 0.23$ The labels of the modified dataset are re-indexed such that the labels assumed to be known are $y_i \in \{1 \ldots Q \}$ A part, $1-r$, of the training data with labels assumed to be known is used for training as labeled data. The rest, $r$, of the samples with labels assumed to be known are mixed with $r$ of the rest of the training data, which has labels that are assumed to be out-of-set. The mix is used for training as unlabeled data. To make the number of unlabeled samples a fraction $u=0.5$ of the number of labeled samples (the ratio between `dev` and `training` sizes): $r = Q*u/(50+Q*u)$ The remaining $1-r$ samples with labels assumed to be unknown are dropped. Each of the steps above, in building the modified dataset, uses a random selection process. The process of creating a modified dataset can be repeated many times, giving each label an opportunity to be out-of-set. # Model Training When training a model all samples are used, labeled and unlabeled. For cross validation, this is the modified dataset and for submission this is the training and dev datasets; the test dataset is only used to make the final predictions for submission. The model generates a probability for each sample, $x_i$, to be out-of-set or in one of the categories. When doing cross validation the model will generate $Q+1=39$ categories and when training on the entire available data the model will generate $n+1=51$ categories. The label $l=0$ is used for out-of-set prediction (not to be confused with an unlabeled sample). $p(l) = p(l \mid x_i) \quad l \in \{ 0 \ldots Q \} \quad \text{or} \quad l \in \{ 0 \ldots n \} $ ## Final score According to the [evaluation plan](http://www.nist.gov/itl/iad/mig/upload/lre_ivectorchallenge_rel_v2.pdf) of the competition, the goal is to minimize: $\text{Cost} = \frac{1-P_{\text{oos}}}{n} * \sum_{k=1}^n P_{\text{error}}(k) + P_{\text{oos}} * P_{\text{error}}(\text{oos}) \qquad [1]$ $P_{\text{error}}(k) = \frac{\#\text{errors}_k}{\#\text{trials}_k}$ In the cross validation stage we can compute this cost directly, by replacing $n$ with $Q$, and using the information we have on the validation part of the modified dataset. We will use this score to select the best hyperparameters. ## Loss function The training process optimizes the model's internal parameters (weights) by minimizing a loss function. 
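Before the individual loss terms are described, note that the score [1] itself is straightforward to evaluate during cross validation once per-class error counts are available. A minimal sketch (plain NumPy, with hypothetical counts; the helper name `challenge_cost` is not from the original code):

```python
import numpy as np

def challenge_cost(errors_per_class, trials_per_class, errors_oos, trials_oos,
                   p_oos=0.23):
    """Cost from [1]: the mean in-set error rate weighted by (1 - p_oos),
    plus the out-of-set error rate weighted by p_oos."""
    per_class_error = np.asarray(errors_per_class) / np.asarray(trials_per_class)
    p_error_oos = errors_oos / trials_oos
    n = len(per_class_error)
    return (1 - p_oos) / n * per_class_error.sum() + p_oos * p_error_oos

# Hypothetical counts for three known classes plus the out-of-set trials.
print(challenge_cost(errors_per_class=[5, 2, 0],
                     trials_per_class=[50, 40, 30],
                     errors_oos=10, trials_oos=60))
```

During cross validation $n$ is replaced by $Q$ and the counts come from the held-out part of the modified dataset, exactly as described in the Final score section above.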
We describe the loss function used in cross validation training; when training for a submission, substitute $Q=n$. The loss is computed as a sum of losses over batches of samples. Each batch has $N=1024$ samples. For each sample, $x_i$, the loss function accepts as input the $Q+1$ probabilities, $p(l \mid x_i)$, from the model and the label information, $y_i$. Note that $p(0 \mid x_i)$ gives the probability the model assigns to the out-of-set label, and $y_i = 0$ is used to indicate that the sample $x_i$ is not labeled. The loss of a batch is made from several parts: $\text{loss} = \text{cross\_entropy} + \beta \cdot \text{aprior\_average\_cross\_entropy} + \gamma \cdot \text{binary\_cross\_entropy} \qquad [2]$ where $\beta$ and $\gamma$ are hyper-parameters. After running cross validation tests the values $\beta=0.15$ and $\gamma=0.01$ were selected. ### cross entropy For the labeled samples in the batch the loss is $\text{cross\_entropy} = \frac{1}{N_l} \sum_{i : y_i \in \{1 \ldots Q \}} -\log p(y_i \mid x_i)$ where $N_l$ is the number of labeled samples in the batch $N_l = \sum_{i : y_i \in \{1 \ldots Q \}} 1$ ### aprior cross entropy A priori, we assume that the predicted probabilities of unlabeled samples should have the distribution: $P^a (0) = P_\text{oos} \quad P^a (l) = \frac{1-P_\text{oos}}{Q} \quad \forall l \in \{1 \ldots Q \}$ This distribution is correct for the cross validation modified dataset and we assume it is correct for the dev dataset. Armed with the a priori distribution, we can add a loss term which measures the cross entropy between predictions made on unlabeled samples and this a priori distribution: $\text{aprior\_cross\_entropy} = \frac{1}{N_u} \sum_{i : y_i \notin \{1 \ldots Q \}} -P^a(l)\log(p(l \mid x_i))$ where $N_u$ is the number of unlabeled samples in the batch $N_u = \sum_{i : y_i \notin \{1 \ldots Q \}} 1$ ### aprior average cross entropy However, it was found that a much better result is achieved by first averaging all the predicted probabilities over the unlabeled samples in the batch and only then measuring the cross entropy with the a priori probability: $\bar{p}(l) = \frac{1}{N_u} \sum_{i : y_i \notin \{1 \ldots Q \}} p(l \mid x_i)$ and $\text{aprior\_average\_cross\_entropy} = - \sum_{l=0}^Q P^a (l) \log(\bar{p}(l))$ ### aprior average Dirichlet The term $C_2 = -p_\text{oos} \log p_\text{av}(\text{oos}) - \frac{1 - p_\text{oos}}{k} \sum_{i=1}^k \log p_\text{av}(i)$ changes to $\text{NLLK}(p_\text{av}) = -(\alpha_\text{oos} - 1) \log p_\text{av}(\text{oos}) - \sum_{i=1}^k (\alpha_i - 1) \log p_\text{av}(i) \quad + \text{constant}$ such that $p_\text{oos} = \frac{\alpha_\text{oos}}{\alpha_\text{sum}} \qquad \frac{1 - p_\text{oos}}{k} = \frac{\alpha_i}{\alpha_\text{sum}}$ where $\alpha_\text{sum} = \alpha_\text{oos} + \sum_{i=1}^k \alpha_i$ We then redefine $C_2$ as $C_2 = -(p_\text{oos} - \delta) \log p_\text{av}(\text{oos}) - \left( \frac{1 - p_\text{oos}}{k} - \delta \right) \sum_{i=1}^k \log p_\text{av}(i)$ where $\alpha_\text{sum}$ is moved outside into the scale factor of $C_2$ and $\delta = 1/\alpha_\text{sum}$ ### binary cross entropy We will use $p(0 \mid x_i)$ to predict whether $x_i$ is out-of-set or not. If $x_i$ happens to be a labeled sample, we know it is not out-of-set, and if it is unlabeled we know there is a $P_\text{oos}$ chance that it is out-of-set. 
Again, this is something which is true for the cross validation modified dataset and assumed to be true for the dev dataset: $\text{binary\_cross\_entropy} = -\frac{1}{N} \left[ \sum_{i:y_i \notin \{1 \ldots Q \}} \left( P_\text{oos} \log(p_0(i)) + (1-P_\text{oos}) \log(p_1(i)) \right) + \sum_{i:y_i \in \{1 \ldots Q \}} \log(p_1(i)) \right]$ where $p_0(i) = p(0 \mid x_i) \quad p_1(i) = 1-p_0(i)$ # Model The loss function we use [2] is applied to all available data: training and dev datasets. However, the strongest signal is from the training (labeled) part, and effectively we are in a situation in which 1/3 of the available data is unlabeled. It is therefore beneficial to use a semi-supervised technique which will utilize the information available in all the data and not just in the training set. Predictions, $p(y_i \mid x_i)$, are made using a modified [Ladder Network](http://arxiv.org/abs/1507.02672). The original Ladder Network [code](https://github.com/CuriousAI/ladder) was slightly modified. The code was modified to accept the training and dev data of the competition and was used in its entirety for both the supervised and unsupervised parts of the ladder method. The objective function used in computing the cost of the supervised part of the ladder method was changed from a simple cross entropy to the loss function [2]. In addition, the error rate [1] was monitored while training on the cross-validation dataset to determine the optimal number of epochs for training. The setup used for training that gave the best results is as follows: ```bash python run.py train --lr 1e-3 --labeled-samples 21431 --unlabeled-samples 21431 --encoder-layers 500-500-500-100-51 --decoder-spec gauss,relu,relu,relu,relu,relu --denoising-cost-x 1,1,.3,.3,.3,.3 --dseed 0 --seed 2 --super-noise-std 0.5 --f-local-noise-std 0.5 --batch-size 1024 --valid-batch-size 1024 --num-epochs 1000 --dataset 160111-fuel.test -- test. ``` The interpretation of each of the parameters is as follows: parameter | value | description --- | --- | --- dataset | 160111-fuel.test | Both training and dev datasets were used as input. For cross validation this was changed to `160111-fuel.train` labeled-samples | 21431 | All samples in training and dev were used for training the supervised part of the ladder method. This is made possible because the modified loss function has a part which is applied to unlabeled samples. For cross validation this was modified to `10000` and the rest of the modified dataset was used for validation unlabeled-samples | 21431 | All samples in training and dev were used in the unsupervised parts of the ladder method. For cross validation this was modified to `10000` encoder-layers | 500-500-500-100-51 | The network has an input of dimension 400 which passes through 4 hidden layers of size 500, 500, 500 and 100 and a final output layer of 51. For cross validation this was modified to 39. decoder-spec | gauss,relu,relu,relu,relu,relu | A direct skip of information from the encoder to the decoder was used only on the input layer, using the gaussian method described in the ladder paper. denoising-cost-x | 1,1,.3,.3,.3,.3 | The L2 error of the de-noising layers compared with an un-noised clean encoder was weighted with a weight of 1 for the input layer and the first hidden layer and 0.3 for all other layers. 
super-noise-std | 0.5 | std of gaussian noise added to the input of the corrupted encoder f-local-noise-std | 0.5 | std of gaussian noise added to the output of all layers of the corrupted encoder lr | 1e-3 | Learning rate num-epochs | 1000 | Number of epoch iterations used for training. Before each iteration the order of the samples was shuffled. It turns out that because of the unsupervised learning the ladder method is insensitive to the number of epochs, and having between 800 and 2000 epoch iterations would give similar results batch-size | 1024 | batch size used for training. This size has a secondary effect through the loss function, which performs an average of predictions before computing the loss lrate-decay | 0.67 (default) | the learning rate starts to decay linearly to zero after passing 0.67 of the epoch iterations act | relu (default) | the activation of the encoder layers except for the last layer, which is always softmax # Results The results were measured by generating predictions on the test dataset using the model found in the training process. The predictions were then submitted to the competition web site, which used an unknown subset of 30% of the samples to compute a score for the PROGRESS SET (results for the 70% eval set are not reported by the web site). Score | Description | Command line --- | --- | --- 24.000 | The best configuration, which was described above. This would have been translated to 11th place while the competition was in progress | --lr 1e-3 --labeled-samples 21431 --unlabeled-samples 21431 --encoder-layers 500-500-500-100-51 --decoder-spec gauss,relu,relu,relu,relu,relu --denoising-cost-x 1,1,.3,.3,.3,.3 --dseed 0 --seed 2 --super-noise-std 0.5 --f-local-noise-std 0.5 --batch-size 1024 --valid-batch-size 1024 --num-epochs 1000 31.487 | In this configuration the unsupervised part of the ladder algorithm is disabled. Early stopping after 138 epochs was needed to avoid overfitting | --lr 1e-3 --labeled-samples 21431 --unlabeled-samples 21431 --encoder-layers 500-500-500-100-51 --decoder-spec gauss,relu,relu,relu,relu,relu --denoising-cost-x 0,0,0,0,0,0 --decoder-spec 0-0-0-0-0-0 --dseed 0 --seed 2 --super-noise-std 0.5 --f-local-noise-std 0.5 --batch-size 1024 --valid-batch-size 1024 --num-epochs 138 To generate a submission file, identify the directory in which the training stored its results. This is a subdirectory under `./results/`; the subdirectory name has a prefix determined by the last argument in the command line. In the example given above the prefix is `test.`. The suffix of the subdirectory is a number which is incremented after every training run. Below I assume that all of this results in `results/test.0` You then generate predictions with ```bash run.py dump --layer -1 -- results/test.0 ``` The submission is made from the predictions on the `test` part of the dataset file (last 6500 samples) that are saved in a `bz2` file which can be submitted to the web site ``` import numpy as np import bz2 yprob = np.load('results/test.0/layer-1.npy') y_pred = np.argmax(yprob,axis=1) fn = 'submission.txt.bz2' with bz2.BZ2File('data/%s'%fn, 'w') as f: for i in y_pred[-6500:]: f.write('%s\n' % idx2lang[i]) ``` # References 1. https://ivectorchallenge.nist.gov/ 2. http://www.nist.gov/itl/iad/mig/upload/lre_ivectorchallenge_rel_v2.pdf 3. http://arxiv.org/abs/1507.02672 4. https://github.com/CuriousAI/ladder 5. http://arxiv.org/abs/1511.06430v3
``` import os import matplotlib.pyplot as plt import csv import pickle import math # Don't edit done_load=0 load_dest="" import time def deleteDB(db='ycsb', host='vmtest3.westus.cloudapp.azure.com:27017', mongo_dir=r"C:\Program Files\MongoDB\Server\3.6\bin"): curr_dir=os.getcwd() os.chdir(mongo_dir) status = os.system(r'mongo ycsb --host "' + host + '" --eval "db.usertable.remove({})"') os.chdir(curr_dir) return status def deleteDBMongo(): deleteDB(host='vmtest3.westus.cloudapp.azure.com:27017') def deleteDBAtlas(): deleteDB(host='vmtest3.westus.cloudapp.azure.com:27017') def deleteDBCosmos(mongo_dir=r"C:\Program Files\MongoDB\Server\3.6\bin"): curr_dir=os.getcwd() os.chdir(mongo_dir) u=r"mongo-api-benchmark" p=r"KiYRdcJp41NN268oTcyeM2ilpLwYUAo8tsX9sYoBNTd6DzjXuJHtcaSylh5VJNGs2wg1FVGExRC0m5Z6pEk7ow==" host=r"mongo-api-benchmark.mongo.cosmos.azure.com:10255" run_str=r'mongo ycsb --host ' + host + r' -u ' + u + r' -p ' + p + r' --ssl --eval "db.usertable.remove({})"' status = os.system(run_str) os.chdir(curr_dir) time.sleep(2) return status # deleteDB(host=r'mongo-api-benchmark:KiYRdcJp41NN268oTcyeM2ilpLwYUAo8tsX9sYoBNTd6DzjXuJHtcaSylh5VJNGs2wg1FVGExRC0m5Z6pEk7ow^=^=@mongo-api-benchmark.mongo.cosmos.azure.com:10255/?ssl^=true^&replicaSet^=globaldb^&retrywrites^=false^&maxIdleTimeMS^=120000^&appName^=@mongo-api-benchmark@') # deleteDB(host=r'mongo-api-benchmark:KiYRdcJp41NN268oTcyeM2ilpLwYUAo8tsX9sYoBNTd6DzjXuJHtcaSylh5VJNGs2wg1FVGExRC0m5Z6pEk7ow==@mongo-api-benchmark.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@mongo-api-benchmark@') def runYCSB(cmd="run", ycsb_dir=r'C:\Users\anfeldma\codeHome\YCSB\bin',workload_dir=r'C:\Users\anfeldma\codeHome\YCSB\workloads',workload='workloadw', \ mongo_endpoint=r'mongodb://vmtest3.westus.cloudapp.azure.com:27017/',operation_count=1000, \ nthreads=1,logdir=".\\",logfn="log.csv"): curr_dir=os.getcwd() os.chdir(ycsb_dir) ycsb_str=r'ycsb ' + cmd + ' mongodb -s -P "' + workload_dir + "\\" + workload + r'" -p mongodb.url="' + mongo_endpoint + \ r'" -p operationcount=' + str(operation_count) + r' -p recordcount=' + str(operation_count) + r' -threads ' + str(nthreads) + \ r" " + \ ' > ' + logdir + logfn print(ycsb_str) #status=0 os.system(ycsb_str) os.chdir(curr_dir) return ycsb_str def runYCSBMongo36(execmd="run", op_count=10000, nthr=1, wkld="workloadw"): return runYCSB(cmd=execmd, operation_count=op_count, nthreads=nthr, workload=wkld) def runYCSBCosmos36(execmd="run", op_count=10000, nthr=1, wkld="workloadw"): return runYCSB(cmd=execmd, mongo_endpoint=r'mongodb://mongo-api-benchmark:KiYRdcJp41NN268oTcyeM2ilpLwYUAo8tsX9sYoBNTd6DzjXuJHtcaSylh5VJNGs2wg1FVGExRC0m5Z6pEk7ow^=^=@mongo-api-benchmark.mongo.cosmos.azure.com:10255/?ssl^=true^&replicaSet^=globaldb^&retrywrites^=false^&maxIdleTimeMS^=120000^&appName^=@mongo-api-benchmark@', \ operation_count=op_count, nthreads=nthr, workload=wkld) def runYCSBAtlas36(execmd="run", op_count=10000, nthr=1, wkld="workloadw"): return runYCSB(cmd=execmd, mongo_endpoint=r'mongodb://vmtest3.westus.cloudapp.azure.com:27017', \ operation_count=op_count, nthreads=nthr, workload=wkld) def parseLog(logdir=r'C:\Users\anfeldma\codeHome\YCSB\bin', logfn='log.csv'): metrics_dict={} with open(logdir + '\\' + logfn, newline='') as csvfile: csvrdr = csv.reader(csvfile)#csv.reader(csvfile, delimiter='', quotechar='|') for row in csvrdr: if len(row) > 0 and row[0][0] == "[": arg0 = row[0].lstrip().rstrip() arg1 = row[1].lstrip().rstrip() met_val = row[2].lstrip().rstrip() if 
not(arg0 in metrics_dict): metrics_dict[arg0] = {} metrics_dict[arg0][arg1] = float(met_val) return metrics_dict def getIndividualMetrics(met_thrpt_dict_array): # Plot response curve thrpt_list=[] metric_list=[] max_thrpt=0 for idx in range(len(met_thrpt_dict_array)): thrpt_list.append(met_thrpt_dict_array[idx][rt_thrpt_field][thrpt_field]) metric_list.append(met_thrpt_dict_array[idx][optype_field][metric_field]) return thrpt_list, metric_list, max_thrpt def plotResponseCurve(thrpt_list, metric_list, max_thrpt, optype_field): plt.plot(thrpt_list, metric_list, marker="x") ax = plt.gca() for idx in range(len(met_thrpt_dict_array)): ax.annotate(str(thrpt_list[idx]), xy=(thrpt_list[idx], metric_list[idx])) plt.grid(True) plt.title(optype_field) plt.xlabel(thrpt_field) plt.ylabel(metric_field) fig=plt.gcf() plt.show() return fig def saveResult(met_thrpt_dict_array,thrpt_list,metric_list,nthread_list,max_thrpt,optype_field,ycsb_str,fig): print("Making " + optype_field + " dir.") os.makedirs(optype_field, exist_ok=True) print("Saving result data...") dumpObj={} with open(optype_field + "\\pickle.obj", "wb") as fileObj: dumpObj["met_thrpt_dict_array"]=met_thrpt_dict_array dumpObj["thrpt_list"]=thrpt_list dumpObj["metric_list"]=metric_list dumpObj["nthread_list"]=nthread_list dumpObj["max_thrpt"]=max_thrpt dumpObj["optype_field"]=optype_field dumpObj["ycsb_str"]=max_thrpt pickle.dump(dumpObj,fileObj) print("Saving plot...") fig.savefig(optype_field + "\\" + optype_field + ".png") def saveComparison(op_max_rate): print("Making " + "ycsb_op_comparison" + " dir.") os.makedirs("ycsb_op_comparison", exist_ok=True) print("Saving comparison data...") dumpObj={} with open(optype_field + "\\pickle.obj", "wb") as fileObj: dumpObj["op_max_rate"]=op_max_rate pickle.dump(dumpObj,fileObj) op_mapping={"insert":{"optype_field":"[INSERT]","workload_name":"workloadw"}, \ "read":{"optype_field":"[READ]","workload_name":"workloadr"}, \ "update":{"optype_field":"[UPDATE]","workload_name":"workloadu"} \ } db_type="cosmos" #"cosmos", "mongo", "atlas" rt_thrpt_field="[OVERALL]" rt_field="RunTime(ms)" thrpt_field="Throughput(ops/sec)" ops_list=["insert"] #["insert","read","update"] opname="read" optype_field=op_mapping[opname]["optype_field"] workload_name=op_mapping[opname]["workload_name"] metric_field="99thPercentileLatency(us)" doc_count=1000 nthread_list=[1,2,5,10,20,50,64,100] met_thrpt_dict_array = [] os.chdir(r"C:\Users\anfeldma\codeHome\YCSB") op_max_rate={} for jdx in range(len(ops_list)): opname = ops_list[jdx] optype_field=op_mapping[opname]["optype_field"] workload_name=op_mapping[opname]["workload_name"] if opname != "insert": if done_load>=doc_count and load_dest==db_type: print("Already loaded data.") else: print("Starting YCSB load using max thread count...") runYCSBMongo36(execmd="load",op_count=doc_count, nthr=max(nthread_list), wkld=workload_name) done_load=doc_count load_dest=db_type print("Finished YCSB load.") for idx in range(len(nthread_list)): print("Starting YCSB " + db_type + " run, opname " + opname + ", workload " + workload_name + ", thread count " + str(nthread_list[idx])) if opname=="insert": if db_type=="mongo": deleteDBMongo() elif db_type=="atlas": deleteDBAtlas() elif db_type=="cosmos": deleteDBCosmos() print("Done deleting existing YCSB dataset.") done_load=0 if db_type=="mongo": ycsb_str=runYCSBMongo36(op_count=doc_count, nthr=nthread_list[idx], wkld=workload_name) elif db_type=="atlas": ycsb_str=runYCSBAtlas36(op_count=doc_count, nthr=nthread_list[idx], wkld=workload_name) elif 
db_type=="cosmos": ycsb_str=runYCSBCosmos36(op_count=doc_count, nthr=nthread_list[idx], wkld=workload_name) met_thrpt_dict_array.append(parseLog()) print("Finished YCSB run, thread count " + str(nthread_list[idx])) thrpt_list, metric_list, max_thrpt = getIndividualMetrics(met_thrpt_dict_array) max_thrpt=max(thrpt_list) met_thrpt_dict_array=[] fig=plotResponseCurve(thrpt_list, metric_list, max_thrpt, opname) saveResult(met_thrpt_dict_array,thrpt_list,metric_list,nthread_list,max_thrpt,optype_field,ycsb_str,fig) print("Max throughput: " + str(max_thrpt)) op_max_rate[opname]=max_thrpt saveComparison(op_max_rate) print(op_max_rate) plt.bar(op_max_rate.keys(),op_max_rate.values()) os.getcwd() print(met_thrpt_dict_array) ```
```
# Import packages
import csv  # Read in case parameters and write out solutions
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

def gauss_seidel(N,A,x,b):
    # Solve Ax = b iteratively using Gauss seidel
    print('N:',N)  # N: number of variables
    print('A:',A)  # coefficient matrix, NxN size (square matrix)
    # x: variables, Nx1 size (column vector)

    # Solver Parameters
    tol = 1E-6       # tolerance to determining stopping point of scheme
    res = 1.0        # residual (initially greater than the tolerance)
    max_iter = 100   # max iterations (so it doesn't go forever)
    k = 0            # iteration counter
    # self.p[2:N-1] = zeros(N-2,1) # initial guess for cell centers

    ## Iteration Loop
    while ((res>tol)and(k<max_iter)):
        x_prev = np.copy(x)  # previous iteration (copy to avoid using same mem loc)
        for i in range(0,N):
            xi = b[i]
            for j in range(0,N):
                if i!=j:
                    xi -= A[i][j]*x[j]  #; print('J:',i,j,A[i][j],xi,x[j])
            x[i] = xi/A[i][i]
            # print('I:',i,xi,b[i],A[i][i])
        res = sum(abs(x-x_prev))  # L2 norm of p_diff
        k += 1                    # increase iteration count
        print('Iter, Res, x',k,res,x)
    print('Iteration Complete')
    return [x,k,res]

def succ_OR(N,A,x,b):
    # Solve Ax = b iteratively using successive overrelaxation
    # N: number of variables
    # A: coefficient matrix, NxN size (square matrix)
    # x: variables, Nx1 size (column vector)

    # Solver Parameters
    tol = 1E-6                          # tolerance to determining stopping point of scheme
    res = np.array([1.0],dtype=float)   # residual (initially greater than the tolerance)
    max_iter = 100                      # max iterations (so it doesn't go forever)
    k = 0                               # iteration counter
    omega = 1.1                         # relaxation factor (for Gauss-Seidel, always 1)
    # self.p[2:N-1] = zeros(N-2,1) # initial guess for cell centers

    ## Iteration Loop
    while ((res[k]>tol)and(k<max_iter)):
        x_prev = np.copy(x)  # previous iteration (copy to avoid using same mem loc)
        for i in range(0,N):
            xi = b[i]
            for j in range(0,N):
                if i!=j:
                    xi -= A[i][j]*x[j]  #; print('J:',i,j,A[i][j],xi,x[j])
            x[i] = omega*xi/A[i][i] + (1.0-omega)*x_prev[i]
            # print('I:',i,xi,b[i],A[i][i])
        res = np.append(res,[sum(abs(x-x_prev))])  # L2 norm of p_diff
        k += 1                                     # increase iteration count
        print('Iter, Res, x',k,res[k-1],x)
    print('Iteration Complete')
    df = pd.DataFrame(res,columns=['residual'])
    df.to_csv('res.csv',sep='\t')
    return [x,k,res[k-1]]

N = 4; x0 = np.zeros(N,dtype=float)  # initialize
A = np.array([[1,1,1,1],[2,-5,3,2],[0,-3,1,-1],[1,0,-2,3]],dtype=float)
b = np.array([25,0,6,-8],dtype=float)
x0[0] = 491.0/7.0; x0[1] = 71.0/14.0; x0[2] = -29.0/2.0; x0[3] = -250.0/7.0

[x,k,res] = gauss_seidel(N,A,x0,b)

x0 = np.ones(N,dtype=float)
A = np.random.randint(1,4,size=[N,N]).astype(float)
A = (A+A.transpose())+(np.eye(N)*7.0)
A

[x,k,res] = gauss_seidel(N,A,x0,b)

x0 = np.ones(N,dtype=float)
[x,k,res] = succ_OR(N,A,x0,b)

s = np.array([[2,3,4,4]])
s = np.append(s,[[2,4,3,6]],axis=0)
s

t = np.array([1])  #,[2]])

my_array = np.arange(6).reshape(3, 2)
print(f"My array shape is: \n {my_array.shape}")
my_expanded_array = my_array[:, np.newaxis, :, np.newaxis]
print(f"My expanded array shape is: \n {my_expanded_array.shape}")

t2 = t[:,np.newaxis]
print(t2,t2.shape)

np.append(t,s,axis=1)

t.shape
```
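One standard sufficient (though not necessary) condition for Gauss-Seidel and SOR to converge is strict diagonal dominance of the coefficient matrix, which is what the symmetrization and the `+ 7*I` shift above push the random matrix toward. The small helper below is an addition for illustration, not part of the original notebook; it checks that condition for the two systems used above.

```
import numpy as np

def is_strictly_diag_dominant(M):
    # |a_ii| > sum_{j != i} |a_ij| for every row i
    off_diag = np.sum(np.abs(M), axis=1) - np.abs(np.diag(M))
    return np.all(np.abs(np.diag(M)) > off_diag)

A_bad = np.array([[1,1,1,1],[2,-5,3,2],[0,-3,1,-1],[1,0,-2,3]], dtype=float)
print(is_strictly_diag_dominant(A_bad))   # False: no convergence guarantee for this system

A_rand = np.random.randint(1, 4, size=[4, 4]).astype(float)
A_rand = (A_rand + A_rand.T) + np.eye(4) * 7.0
# May or may not hold for a particular draw; dominance is sufficient, not necessary.
print(is_strictly_diag_dominant(A_rand))
```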
# Subject Correlations & Year Counts

```
# Import necessary libraries.
import re, nltk, warnings, csv, sys, os, pickle, string, json
import pandas as pd
import numpy as np
import seaborn as sns
import glob as glob

from itertools import chain
from scipy import stats
import matplotlib.pyplot as plt
import itertools as iter

import networkx as nx
from networkx.algorithms import community
from networkx.readwrite import json_graph
from json import JSONEncoder
from operator import itemgetter
from collections import Counter

# Import project-specific functions.
# Python files (.py) have to be in same folder to work.
lib_path = os.path.abspath(os.path.join(os.path.dirname('Correspondence_XML_parser.py'), '../Scripts'))
sys.path.append(lib_path)
from Correspondence_XML_parser import *

# Ignore warnings related to deprecated functions.
warnings.filterwarnings('ignore')
```

## Gather XML Files

```
%%time

# Declare directory location to shorten filepaths later.
abs_dir = "/Users/quinn.wi/Documents/"

files = glob.glob(abs_dir + "Data/PSC/Richards/ESR-XML-Files-MHS/*.xml")

len(files)

# %%time

# # Must be connected to Northeastern's VPN.
# r = requests.get(url,
#                  auth = (user, pw),
#                  headers = {'Content-Type': 'application/xml'})

# # Read in contents of pipeline.
# soup = BeautifulSoup(r.content, 'html.parser')

# # Split soup's content by \n (each line is a file path to an XML doc).
# # Use filter() to remove empty strings ('').
# # Convert back to list using list().
# files = list(filter(None, soup.text.split('\n')))

# # Filter list and retrieve only jqa/ files.
# files = [i for i in files if 'esr/' in i]

# len(files)
```

## Build Dataframe

```
%%time

# Build dataframe from XML files.
# build_dataframe() called from Correspondence_XML_parser
df = build_dataframe(files)

# Unnest subject headings.
df['subjects'] = df['subjects'].str.split(',')
df = df.explode('subjects')

# Remove leading and trailing whitespace.
df['subjects'] = df['subjects'].str.strip()

# Remove rows with subject of "The".
df = df[~df['subjects'].isin(['The'])]

# Remove rows with empty values.
df.replace('', np.nan, inplace = True)
df.dropna(inplace = True)

df.head(3)
```

## Count Subject Headings by Year

```
%%time

# Extract month, year from date.
df['date'] = pd.to_datetime(df['date'], format = '%Y-%m-%d', errors = 'coerce')
df = df.query('date != "NaT"') # remove Not-a-Time values.
df['month'] = df['date'].dt.month
df['year'] = df['date'].dt.year

# Group by year & subject to get count of subjects per year.
subjects = df.groupby(['year', 'subjects'], as_index = False)['subjects'] \
    .size() \
    .reset_index()

subjects.columns = ['year', 'subjects', 'count']

# Group by year and get total number of subjects per year.
subjects['total'] = subjects.groupby('year')['count'].transform('sum')

# Get percentage of subject for each year.
subjects['percentage'] = round(subjects['count'] / subjects['total'], 2) * 100

subjects.to_csv(abs_dir + 'Github/dsg-mhs/lab_space/projects/richards/subjects/data/subject-year-count.csv',
                sep = ',', index = False)

subjects.head()
```

## Create Adjacency Matrix of Subjects

```
%%time

# Create adjacency matrix.
adj = pd.crosstab(df['file'], df['subjects'])

# Convert entry-person matrix into an adjacency matrix of persons.
adj = adj.T.dot(adj)

# Change same-same connections to zero.
np.fill_diagonal(adj.values, 0)

# Simple correlation matrix from dataframe.
adj = adj.corr()

adj
```

## Save Subject Adj. as Network Object
```
%%time

adj['source'] = adj.index

df = pd.melt(adj, id_vars = ['source'], var_name = 'target', value_name = 'weight') \
    .query('(source != target) & (weight > 0.5)')

df
```

## Create Graph Object

```
%%time

# Initialize graph object.
G = nx.from_pandas_edgelist(df, 'source', 'target', 'weight')

# Add nodes.
nodes = list( dict.fromkeys( df['source'].values.tolist() + df['target'].values.tolist() ))
G.add_nodes_from(nodes)

print (nx.info(G))

# Set degree attributes.
nx.set_node_attributes(G, dict(G.degree(G.nodes())), 'degree')

# Sort nodes by degree and print top results.
sorted_degree = sorted(dict(G.degree(G.nodes())).items(),
                       key = itemgetter(1), reverse = True)

print ("Top 10 nodes by degree:")
for d in sorted_degree[:10]:
    print (f'\t{d}')

# Measure network density.
density = nx.density(G)
print (f"Network density: {density:.3f}")

# Related to diameter, check if network is connected and, therefore, can have a diameter.
print (f"Is the network connected? {nx.is_connected(G)}")

# Get a list of network components (communities).
# Find the largest component.
components = nx.connected_components(G)
largest_component = max(components, key = len)

# Create a subgraph of the largest component and measure its diameter.
subgraph = G.subgraph(largest_component)
diameter = nx.diameter(subgraph)
print (f"Network diameter of the largest component: {diameter:.3f}")

# Find triadic closure (similar to density).
triadic_closure = nx.transitivity(G)
print (f"Triadic closure: {triadic_closure:.3f}\n")

# Find centrality measures.
betweenness_dict = nx.betweenness_centrality(G)  # Run betweenness centrality
eigenvector_dict = nx.eigenvector_centrality(G)  # Run eigenvector centrality
degree_cent_dict = nx.degree_centrality(G)

# Assign each centrality measure to an attribute.
nx.set_node_attributes(G, betweenness_dict, 'betweenness')
nx.set_node_attributes(G, eigenvector_dict, 'eigenvector')
nx.set_node_attributes(G, degree_cent_dict, 'degree_cent')

# Find communities.
communities = community.naive_greedy_modularity_communities(subgraph)

# Create a dictionary that maps nodes to their community.
modularity_dict = {}
for i, c in enumerate(communities):
    for name in c:
        modularity_dict[name] = i

# Add modularity information to graph object.
nx.set_node_attributes(G, modularity_dict, 'modularity')
```

## Write Graph Object

```
%%time

# Convert graph object into a dictionary.
data = json_graph.node_link_data(G)

data_json = json.dumps(data)

with open(abs_dir + "Github/dsg-mhs/lab_space/projects/richards/subjects/data/richards-subjects-network.json", "w") as f:
    f.write(data_json)
```
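As a quick sanity check (not part of the original notebook), the exported file can be read back with the matching `node_link_graph` helper to confirm the graph survives the round trip:

```
import json
import networkx as nx
from networkx.readwrite import json_graph

# Reload the exported network and confirm the node/edge counts match the graph built above.
with open(abs_dir + "Github/dsg-mhs/lab_space/projects/richards/subjects/data/richards-subjects-network.json") as f:
    data = json.load(f)

G_check = json_graph.node_link_graph(data)
print(G_check.number_of_nodes(), G_check.number_of_edges())
```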
# Implementing the Gradient Descent Algorithm

In this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data.

```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Some helper functions for plotting and drawing lines

def plot_points(X, y):
    admitted = X[np.argwhere(y==1)]
    rejected = X[np.argwhere(y==0)]
    plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k')
    plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k')

def display(m, b, color='g--'):
    plt.xlim(-0.05,1.05)
    plt.ylim(-0.05,1.05)
    x = np.arange(-10, 10, 0.1)
    plt.plot(x, m*x+b, color)
```

## Reading and plotting the data

```
data = pd.read_csv('data.csv', header=None)
X = np.array(data[[0,1]])
y = np.array(data[2])
plot_points(X,y)
plt.show()
```

## TODO: Implementing the basic functions

Here is your turn to shine. Implement the following formulas, as explained in the text.

- Sigmoid activation function

$$\sigma(x) = \frac{1}{1+e^{-x}}$$

- Output (prediction) formula

$$\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$$

- Error function

$$Error(y, \hat{y}) = - y \log(\hat{y}) - (1-y) \log(1-\hat{y})$$

- The function that updates the weights

$$ w_i \longrightarrow w_i + \alpha (y - \hat{y}) x_i$$

$$ b \longrightarrow b + \alpha (y - \hat{y})$$

```
# Implement the following functions

# Activation (sigmoid) function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Output (prediction) formula
def output_formula(features, weights, bias):
    return sigmoid(np.dot(features, weights) + bias)

# Error (log-loss) formula
def error_formula(y, output):
    return - y*np.log(output) - (1 - y) * np.log(1-output)

# Gradient descent step
def update_weights(x, y, weights, bias, learnrate):
    output = output_formula(x, weights, bias)
    d_error = y - output
    weights += learnrate * d_error * x
    bias += learnrate * d_error
    return weights, bias
```

## Training function

This function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm.
```
np.random.seed(44)

epochs = 100
learnrate = 0.01

def train(features, targets, epochs, learnrate, graph_lines=False):

    errors = []
    n_records, n_features = features.shape
    last_loss = None
    weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
    bias = 0
    for e in range(epochs):
        del_w = np.zeros(weights.shape)
        for x, y in zip(features, targets):
            output = output_formula(x, weights, bias)
            error = error_formula(y, output)
            weights, bias = update_weights(x, y, weights, bias, learnrate)

        # Printing out the log-loss error on the training set
        out = output_formula(features, weights, bias)
        loss = np.mean(error_formula(targets, out))
        errors.append(loss)
        if e % (epochs / 10) == 0:
            print("\n========== Epoch", e,"==========")
            if last_loss and last_loss < loss:
                print("Train loss: ", loss, " WARNING - Loss Increasing")
            else:
                print("Train loss: ", loss)
            last_loss = loss
            predictions = out > 0.5
            accuracy = np.mean(predictions == targets)
            print("Accuracy: ", accuracy)
        if graph_lines and e % (epochs / 100) == 0:
            display(-weights[0]/weights[1], -bias/weights[1])

    # Plotting the solution boundary
    plt.title("Solution boundary")
    display(-weights[0]/weights[1], -bias/weights[1], 'black')

    # Plotting the data
    plot_points(features, targets)
    plt.show()

    # Plotting the error
    plt.title("Error Plot")
    plt.xlabel('Number of epochs')
    plt.ylabel('Error')
    plt.plot(errors)
    plt.show()
```

## Time to train the algorithm!

When we run the function, we'll obtain the following:
- 10 updates with the current training loss and accuracy
- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs.
- A plot of the error function. Notice how it decreases as we go through more epochs.

```
train(X, y, epochs, learnrate, True)
```
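To connect the update formulas to concrete numbers, the short example below (not part of the original lab) runs a single gradient descent step by hand on one made-up point, using the `output_formula` and `update_weights` functions defined above. The feature values and starting weights are illustrative, not taken from `data.csv`.

```
import numpy as np

# One illustrative point (x1, x2) with label y = 1, and arbitrary starting weights.
x = np.array([0.3, 0.7])
y = 1
weights = np.array([0.1, -0.2])
bias = 0.0
learnrate = 0.01

print("prediction before update:", output_formula(x, weights, bias))   # ~0.47

weights, bias = update_weights(x, y, weights, bias, learnrate)
print("prediction after update: ", output_formula(x, weights, bias))
# The prediction increases slightly, moving toward the label y = 1.
```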
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
import pandas as pd
import torch

df = pd.read_excel(r"data.xlsx", index_col=0)  # This excel file was pre-processed to have columns CompanyName and CompanyName2
df_yes = df[df.YN_INNOVATION == 'yes']
df_yes = df_yes[["full_text","CompanyName","CompanyName2"]]
df_yes

df_new = pd.read_csv("/home/asr8r/git/dspg21binn/data/DNA_Food_Articles_2017_no_rep.csv", index_col=0)  ## Reading in data

#df_save = df.reset_index(inplace=False, drop = True)
df_new = df_new.reset_index(inplace=False, drop = True)
df_new.company_codes
df_new

df_new.company_codes = df_new.company_codes.str.strip(",")
df_new.company_codes = df_new.company_codes.str.split(",",expand=False)

for i in range(5):
    print(list(set(df_new.loc[i,"company_codes"])))

for i in range(5):
    df_new.iloc[i,10].dtypes

for row in df_new.company_codes:
    row = set(row)

set(df_new.company_codes[0])

device = torch.device("cuda")
tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER")
nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)

the_matches = []
the_scores = []
the_yn_matches = []

for article in range(222):
    example = df_yes.iloc[article,0]              # full_text of article
    ner_results = nlp(example)                    # running nlp
    ner_df = pd.DataFrame(ner_results)            # results of nlp
    Org = ner_df[ner_df.entity_group == ("ORG")]  # all the ORG's in results

    the_company_name = []
    comp_name2 = []
    match_scores = []
    correct_guesses = []
    yn_match = []

    for i in range(len(Org.iloc[:,2])):           # cycling through each ORG in results
        company_name = df_yes.iloc[article,1]     # Company Name from labeled data
        company_name2 = df_yes.iloc[article,2]
        guess = Org.iloc[i,2]
        match_score = Org.iloc[i,1]               # score of the current guess (set here so it is always defined)
        if guess == company_name or guess == company_name2:
            correct_guesses.append(guess)
            match_scores.append(match_score)
            the_company_name.append(company_name)
            comp_name2.append(company_name2)
            yn_match.append(1)
        else:
            match_scores.append(match_score)
            correct_guesses.append(guess)
            the_company_name.append(company_name)
            comp_name2.append(company_name2)
            yn_match.append(0)

    dict = {'company_name': the_company_name, 'company_name_2': comp_name2,
            'match_score': match_scores, 'correct_guesses': correct_guesses, "yn_match": yn_match}
    df = pd.DataFrame(dict)
    df = df.sort_values(by=['match_score'], ascending = False)

    the_match = df.iloc[0,3]
    the_score = df.iloc[0,2]
    the_yn_match = df.iloc[0,4]

    the_matches.append(the_match)
    the_scores.append(the_score)
    the_yn_matches.append(the_yn_match)

dict = {'Matches': the_matches, "Scores": the_scores, "the_yn_matches": the_yn_matches}
final_results = pd.DataFrame(dict)
final_results

non_matches = final_results[final_results["the_yn_matches"]==0]
hist = non_matches.hist(bins=10)
hist

df_all = pd.concat([df_yes.reset_index(drop=True), final_results.reset_index(drop=True)], axis=1, ignore_index=True)
df_all

import matplotlib.pyplot as plt
final_results.hist

df_all.to_csv(r'git\bertbase-NER-results.csv', index = False)
```
# **Assignment For Numpy** Difficulty Level **Beginner** 1. Import the numpy package under the name np ``` import numpy as np ``` 2. Create a null vector of size 10 ``` arr = np.zeros((10)) ``` 3. Create a vector with values ranging from 10 to 49 ``` rng = np.arange(10,49) ``` 4. Find the shape of previous array in question 3 ``` rng.shape ``` 5. Print the type of the previous array in question 3 ``` rng.dtype ``` 6. Print the numpy version and the configuration ``` print(np.__version__) print(np.show_config()) ``` 7. Print the dimension of the array in question 3 ``` print(rng.ndim) ``` 8. Create a boolean array with all the True values ``` boolean = np.ones((10),dtype=bool) ``` 9. Create a two dimensional array ``` twoDim = np.zeros((10,10)) print(twoDim.ndim) ``` 10. Create a three dimensional array ``` threedim = np.zeros((10,10,10)) print(threedim.ndim) ``` Difficulty Level **Easy** 11. Reverse a vector (first element becomes last) ``` reverse = np.random.randn((10)) print(reverse) reverse = np.flip(reverse) print(reverse) ``` 12. Create a null vector of size 10 but the fifth value which is 1 ``` nullVector = np.array([1 if x == 4 else 0 for x in range(10)]) print(nullVector) ``` 13. Create a 3x3 identity matrix ``` identity = np.identity(3) print(identity) ``` 14. arr = np.array([1, 2, 3, 4, 5]) --- Convert the data type of the given array from int to float ``` arr = np.array([1,2,3,4,5],dtype=float) arr.dtype ``` 15. arr1 = np.array([[1., 2., 3.], [4., 5., 6.]]) arr2 = np.array([[0., 4., 1.], [7., 2., 12.]]) --- Multiply arr1 with arr2 ``` arr1 = np.array([[1., 2., 3.],[4., 5., 6.]]) arr2 = np.array([[0., 4., 1.],[7., 2., 12.]]) arr1 * arr2 ``` 16. arr1 = np.array([[1., 2., 3.], [4., 5., 6.]]) arr2 = np.array([[0., 4., 1.], [7., 2., 12.]]) --- Make an array by comparing both the arrays provided above ``` arr1 = np.array([[1., 2., 3.],[4., 5., 6.]]) arr2 = np.array([[0., 4., 1.],[7., 2., 12.]]) arr3 = arr1 == arr2 print(arr3) ``` 17. Extract all odd numbers from arr with values(0-9) ``` odd = np.arange(0,10) print(odd[odd % 2 != 0]) ``` 18. Replace all odd numbers to -1 from previous array ``` odd[odd % 2 != 0] = -1 print(odd) ``` 19. arr = np.arange(10) --- Replace the values of indexes 5,6,7 and 8 to **12** ``` arr = np.arange(10) arr[[5,6,7,8]] = 12 print(arr) ``` 20. Create a 2d array with 1 on the border and 0 inside ``` twoD = np.ones((4,4)) twoD[1:-1,1:-1] = 0 print(twoD) ``` Difficulty Level **Medium** 21. arr2d = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) --- Replace the value 5 to 12 ``` arr2d = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) arr2d = np.where(arr2d == 5,12,arr2d) print(arr2d) ``` 22. arr3d = np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]) --- Convert all the values of 1st array to 64 ``` arr3d = np.array([ [ [1, 2, 3], [4, 5, 6] ], [ [7, 8, 9], [10, 11, 12] ] ]) arr3d[0][0] = 64 print(arr3d) ``` 23. Make a 2-Dimensional array with values 0-9 and slice out the first 1st 1-D array from it ``` arr2d = np.arange(0,10).reshape((2,5)) print(arr2d) arr2d[0] ``` 24. Make a 2-Dimensional array with values 0-9 and slice out the 2nd value from 2nd 1-D array from it ``` arr2d = np.arange(0,10).reshape((2,5)) print(arr2d) arr2d[1][1] ``` 25. Make a 2-Dimensional array with values 0-9 and slice out the third column but only the first two rows ``` arr2d = np.arange(0,10).reshape((2,5)) print(arr2d) arr2d[:,2] ``` 26. 
Create a 10x10 array with random values and find the minimum and maximum values ``` rand = np.random.randn((100)).reshape((10,10)) print(f"Minimun Value {np.amin(rand)}") print(f"Maximum Value {np.amax(rand)}") ``` 27. a = np.array([1,2,3,2,3,4,3,4,5,6]) b = np.array([7,2,10,2,7,4,9,4,9,8]) --- Find the common items between a and b ``` a = np.array([1,2,3,2,3,4,3,4,5,6]) b = np.array([7,2,10,2,7,4,9,4,9,8]) print(f"Common Elements are : {np.intersect1d(a,b)}") ``` 28. a = np.array([1,2,3,2,3,4,3,4,5,6]) b = np.array([7,2,10,2,7,4,9,4,9,8]) --- Find the positions where elements of a and b match ``` np.searchsorted(a, np.intersect1d(a, b)) ``` 29. names = np.array(['Bob', 'Joe', 'Will', 'Bob', 'Will', 'Joe', 'Joe']) data = np.random.randn(7, 4) --- Find all the values from array **data** where the values from array **names** are not equal to **Will** ``` names = np.array(['Bob', 'Joe', 'Will', 'Bob', 'Will', 'Joe', 'Joe']) data = np.random.randn(7, 4) print(data) print('==================================================') print(data[names != 'Will']) ``` 30. names = np.array(['Bob', 'Joe', 'Will', 'Bob', 'Will', 'Joe', 'Joe']) data = np.random.randn(7, 4) --- Find all the values from array **data** where the values from array **names** are not equal to **Will** and **Joe** ``` print(data) print('==================================================') mask = (names != 'Will') & (names!= 'Joe') print(data[mask]) ``` Difficulty Level **Hard** 31. Create a 2D array of shape 5x3 to contain decimal numbers between 1 and 15. ``` rand_arr = np.random.uniform(1,15, size=(5,3)) print(rand_arr) ``` 32. Create an array of shape (2, 2, 4) with decimal numbers between 1 to 16. ``` rand_arr2 = np.random.uniform(1,16, size=(2,2,4)) print(rand_arr2) ``` 33. Swap axes of the array you created in Question 32 ``` print("Original Array") print(rand_arr2) print("Swapped Axes") print(np.swapaxes(rand_arr2, 2, 0)) ``` 34. Create an array of size 10, and find the square root of every element in the array, if the values less than 0.5, replace them with 0 ``` arr = np.random.uniform(0,20, size=(10)) arr = arr.astype('int32') arr =np.sqrt(arr) arr = np.where(arr < 0.5, 0, arr) arr = arr.astype('int32') print(arr) ``` 35. Create two random arrays of range 12 and make an array with the maximum values between each element of the two arrays ``` arr = np.random.uniform(0,20, size=(10)) arr1 = np.random.uniform(0,20, size=(10)) newArr = np.maximum(arr,arr1) print(f"Array 1 : {arr}") print(f"Array 2 : {arr1}") print(f"New Array : {newArr}") ``` 36. names = np.array(['Bob', 'Joe', 'Will', 'Bob', 'Will', 'Joe', 'Joe']) --- Find the unique names and sort them out! ``` names = np.array(['Bob', 'Joe', 'Will', 'Bob', 'Will', 'Joe', 'Joe']) print(np.sort(np.unique(names))) ``` 37. a = np.array([1,2,3,4,5]) b = np.array([5,6,7,8,9]) --- From array a remove all items present in array b ``` a = np.array([1,2,3,4,5]) b = np.array([5,6,7,8,9]) c = np.setdiff1d(a, b) print(c) ``` 38. Following is the input NumPy array delete column two and insert following new column in its place. --- sampleArray = numpy.array([[34,43,73],[82,22,12],[53,94,66]]) --- newColumn = numpy.array([[10,10,10]]) ``` sampleArray = np.array([ [34,43,73], [82,22,12], [53,94,66] ]) newColumn = np.array([[10,10,10]]) sampleArray = np.delete(sampleArray, 2, axis=1) sampleArray = np.insert(sampleArray, 2, newColumn, axis=1) print(sampleArray) ``` 39. 
x = np.array([[1., 2., 3.], [4., 5., 6.]]) y = np.array([[6., 23.], [-1, 7], [8, 9]]) --- Find the dot product of the above two matrix ``` x = np.array([[1., 2., 3.], [4., 5., 6.]]) y = np.array([[6., 23.], [-1, 7], [8, 9]]) print(np.dot(x,y)) ``` 40. Generate a matrix of 20 random values and find its cumulative sum ``` arr = np.random.uniform(1,20,size=(4,5)) arr = arr.astype('int32') print(arr.cumsum()) ```
# Fine Tuning Transformer for MultiClass Text Classification ### Introduction In this tutorial we will be fine tuning a transformer model for the **Multiclass text classification** problem. This is one of the most common business problems where a given piece of text/sentence/document needs to be classified into one of the categories out of the given list. #### Flow of the notebook The notebook will be divided into seperate sections to provide a organized walk through for the process used. This process can be modified for individual use cases. The sections are: 1. [Importing Python Libraries and preparing the environment](#section01) 2. [Importing and Pre-Processing the domain data](#section02) 3. [Preparing the Dataset and Dataloader](#section03) 4. [Creating the Neural Network for Fine Tuning](#section04) 5. [Fine Tuning the Model](#section05) 6. [Validating the Model Performance](#section06) 7. [Saving the model and artifacts for Inference in Future](#section07) #### Technical Details This script leverages on multiple tools designed by other teams. Details of the tools used below. Please ensure that these elements are present in your setup to successfully implement this script. - Data: - We are using the News aggregator dataset available at by [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/News+Aggregator) - We are referring only to the first csv file from the data dump: `newsCorpora.csv` - There are `422937` rows of data. Where each row has the following data-point: - ID Numeric ID - TITLE News title - URL Url - PUBLISHER Publisher name - CATEGORY News category (b = business, t = science and technology, e = entertainment, m = health) - STORY Alphanumeric ID of the cluster that includes news about the same story - HOSTNAME Url hostname - TIMESTAMP Approximate time the news was published, as the number of milliseconds since the epoch 00:00:00 GMT, January 1, 1970 - Language Model Used: - DistilBERT this is a smaller transformer model as compared to BERT or Roberta. It is created by process of distillation applied to Bert. - [Blog-Post](https://medium.com/huggingface/distilbert-8cf3380435b5) - [Research Paper](https://arxiv.org/abs/1910.01108) - [Documentation for python](https://huggingface.co/transformers/model_doc/distilbert.html) - Hardware Requirements: - Python 3.6 and above - Pytorch, Transformers and All the stock Python ML Libraries - GPU enabled setup - Script Objective: - The objective of this script is to fine tune DistilBERT to be able to classify a news headline into the following categories: - Business - Technology - Health - Entertainment <a id='section01'></a> ### Importing Python Libraries and preparing the environment At this step we will be importing the libraries and modules needed to run our script. Libraries are: * Pandas * Pytorch * Pytorch Utils for Dataset and Dataloader * Transformers * DistilBERT Model and Tokenizer Followed by that we will preapre the device for CUDA execeution. This configuration is needed if you want to leverage on onboard GPU. ``` # Importing the libraries needed import pandas as pd import torch import transformers from torch.utils.data import Dataset, DataLoader from transformers import DistilBertModel, DistilBertTokenizer # Setting up the device for GPU usage from torch import cuda device = 'cuda' if cuda.is_available() else 'cpu' ``` <a id='section02'></a> ### Importing and Pre-Processing the domain data We will be working with the data and preparing for fine tuning purposes. 
*Assuming that the `newCorpora.csv` is already downloaded in your `data` folder* Import the file in a dataframe and give it the headers as per the documentation. Cleaning the file to remove the unwanted columns and create an additional column for training. The final Dataframe will be something like this: |TITLE|CATEGORY|ENCODED_CAT| |--|--|--| | title_1|Entertainment | 1 | | title_2|Entertainment | 1 | | title_3|Business| 2 | | title_4|Science| 3 | | title_5|Science| 3 | | title_6|Health| 4 | ``` # Import the csv into pandas dataframe and add the headers df = pd.read_csv('./data/newsCorpora.csv', sep='\t', names=['ID','TITLE', 'URL', 'PUBLISHER', 'CATEGORY', 'STORY', 'HOSTNAME', 'TIMESTAMP']) # df.head() # # Removing unwanted columns and only leaving title of news and the category which will be the target df = df[['TITLE','CATEGORY']] # df.head() # # Converting the codes to appropriate categories using a dictionary my_dict = { 'e':'Entertainment', 'b':'Business', 't':'Science', 'm':'Health' } def update_cat(x): return my_dict[x] df['CATEGORY'] = df['CATEGORY'].apply(lambda x: update_cat(x)) encode_dict = {} def encode_cat(x): if x not in encode_dict.keys(): encode_dict[x]=len(encode_dict) return encode_dict[x] df['ENCODE_CAT'] = df['CATEGORY'].apply(lambda x: encode_cat(x)) ``` <a id='section03'></a> ### Preparing the Dataset and Dataloader We will start with defining few key variables that will be used later during the training/fine tuning stage. Followed by creation of Dataset class - This defines how the text is pre-processed before sending it to the neural network. We will also define the Dataloader that will feed the data in batches to the neural network for suitable training and processing. Dataset and Dataloader are constructs of the PyTorch library for defining and controlling the data pre-processing and its passage to neural network. For further reading into Dataset and Dataloader read the [docs at PyTorch](https://pytorch.org/docs/stable/data.html) #### *Triage* Dataset Class - This class is defined to accept the Dataframe as input and generate tokenized output that is used by the DistilBERT model for training. - We are using the DistilBERT tokenizer to tokenize the data in the `TITLE` column of the dataframe. - The tokenizer uses the `encode_plus` method to perform tokenization and generate the necessary outputs, namely: `ids`, `attention_mask` - To read further into the tokenizer, [refer to this document](https://huggingface.co/transformers/model_doc/distilbert.html#distilberttokenizer) - `target` is the encoded category on the news headline. - The *Triage* class is used to create 2 datasets, for training and for validation. - *Training Dataset* is used to fine tune the model: **80% of the original data** - *Validation Dataset* is used to evaluate the performance of the model. The model has not seen this data during training. #### Dataloader - Dataloader is used to for creating training and validation dataloader that load data to the neural network in a defined manner. This is needed because all the data from the dataset cannot be loaded to the memory at once, hence the amount of dataloaded to the memory and then passed to the neural network needs to be controlled. - This control is achieved using the parameters such as `batch_size` and `max_len`. 
- Training and Validation dataloaders are used in the training and validation part of the flow respectively ``` # Defining some key variables that will be used later on in the training MAX_LEN = 512 TRAIN_BATCH_SIZE = 4 VALID_BATCH_SIZE = 2 EPOCHS = 1 LEARNING_RATE = 1e-05 tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-cased') class Triage(Dataset): def __init__(self, dataframe, tokenizer, max_len): self.len = len(dataframe) self.data = dataframe self.tokenizer = tokenizer self.max_len = max_len def __getitem__(self, index): title = str(self.data.TITLE[index]) title = " ".join(title.split()) inputs = self.tokenizer.encode_plus( title, None, add_special_tokens=True, max_length=self.max_len, pad_to_max_length=True, return_token_type_ids=True ) ids = inputs['input_ids'] mask = inputs['attention_mask'] return { 'ids': torch.tensor(ids, dtype=torch.long), 'mask': torch.tensor(mask, dtype=torch.long), 'targets': torch.tensor(self.data.ENCODE_CAT[index], dtype=torch.long) } def __len__(self): return self.len # Creating the dataset and dataloader for the neural network train_size = 0.8 train_dataset=df.sample(frac=train_size,random_state=200) test_dataset=df.drop(train_dataset.index).reset_index(drop=True) train_dataset = train_dataset.reset_index(drop=True) print("FULL Dataset: {}".format(df.shape)) print("TRAIN Dataset: {}".format(train_dataset.shape)) print("TEST Dataset: {}".format(test_dataset.shape)) training_set = Triage(train_dataset, tokenizer, MAX_LEN) testing_set = Triage(test_dataset, tokenizer, MAX_LEN) train_params = {'batch_size': TRAIN_BATCH_SIZE, 'shuffle': True, 'num_workers': 0 } test_params = {'batch_size': VALID_BATCH_SIZE, 'shuffle': True, 'num_workers': 0 } training_loader = DataLoader(training_set, **train_params) testing_loader = DataLoader(testing_set, **test_params) ``` <a id='section04'></a> ### Creating the Neural Network for Fine Tuning #### Neural Network - We will be creating a neural network with the `DistillBERTClass`. - This network will have the DistilBERT Language model followed by a `dropout` and finally a `Linear` layer to obtain the final outputs. - The data will be fed to the DistilBERT Language model as defined in the dataset. - Final layer outputs is what will be compared to the `encoded category` to determine the accuracy of models prediction. - We will initiate an instance of the network called `model`. This instance will be used for training and then to save the final trained model for future inference. #### Loss Function and Optimizer - `Loss Function` and `Optimizer` and defined in the next cell. - The `Loss Function` is used the calculate the difference in the output created by the model and the actual output. - `Optimizer` is used to update the weights of the neural network to improve its performance. #### Further Reading - You can refer to my [Pytorch Tutorials](https://github.com/abhimishra91/pytorch-tutorials) to get an intuition of Loss Function and Optimizer. - [Pytorch Documentation for Loss Function](https://pytorch.org/docs/stable/nn.html#loss-functions) - [Pytorch Documentation for Optimizer](https://pytorch.org/docs/stable/optim.html) - Refer to the links provided on the top of the notebook to read more about DistiBERT. ``` # Creating the customized model, by adding a drop out and a dense layer on top of distil bert to get the final output for the model. 
class DistillBERTClass(torch.nn.Module): def __init__(self): super(DistillBERTClass, self).__init__() self.l1 = DistilBertModel.from_pretrained("distilbert-base-uncased") self.pre_classifier = torch.nn.Linear(768, 768) self.dropout = torch.nn.Dropout(0.3) self.classifier = torch.nn.Linear(768, 4) def forward(self, input_ids, attention_mask): output_1 = self.l1(input_ids=input_ids, attention_mask=attention_mask) hidden_state = output_1[0] pooler = hidden_state[:, 0] pooler = self.pre_classifier(pooler) pooler = torch.nn.ReLU()(pooler) pooler = self.dropout(pooler) output = self.classifier(pooler) return output model = DistillBERTClass() model.to(device) # Creating the loss function and optimizer loss_function = torch.nn.CrossEntropyLoss() optimizer = torch.optim.Adam(params = model.parameters(), lr=LEARNING_RATE) ``` <a id='section05'></a> ### Fine Tuning the Model After all the effort of loading and preparing the data and datasets, creating the model and defining its loss and optimizer. This is probably the easier steps in the process. Here we define a training function that trains the model on the training dataset created above, specified number of times (EPOCH), An epoch defines how many times the complete data will be passed through the network. Following events happen in this function to fine tune the neural network: - The dataloader passes data to the model based on the batch size. - Subsequent output from the model and the actual category are compared to calculate the loss. - Loss value is used to optimize the weights of the neurons in the network. - After every 5000 steps the loss value is printed in the console. As you can see just in 1 epoch by the final step the model was working with a miniscule loss of 0.0002485 i.e. the output is extremely close to the actual output. ``` # Function to calcuate the accuracy of the model def calcuate_accu(big_idx, targets): n_correct = (big_idx==targets).sum().item() return n_correct # Defining the training function on the 80% of the dataset for tuning the distilbert model def train(epoch): tr_loss = 0 n_correct = 0 nb_tr_steps = 0 nb_tr_examples = 0 model.train() for _,data in enumerate(training_loader, 0): ids = data['ids'].to(device, dtype = torch.long) mask = data['mask'].to(device, dtype = torch.long) targets = data['targets'].to(device, dtype = torch.long) outputs = model(ids, mask) loss = loss_function(outputs, targets) tr_loss += loss.item() big_val, big_idx = torch.max(outputs.data, dim=1) n_correct += calcuate_accu(big_idx, targets) nb_tr_steps += 1 nb_tr_examples+=targets.size(0) if _%5000==0: loss_step = tr_loss/nb_tr_steps accu_step = (n_correct*100)/nb_tr_examples print(f"Training Loss per 5000 steps: {loss_step}") print(f"Training Accuracy per 5000 steps: {accu_step}") optimizer.zero_grad() loss.backward() # # When using GPU optimizer.step() print(f'The Total Accuracy for Epoch {epoch}: {(n_correct*100)/nb_tr_examples}') epoch_loss = tr_loss/nb_tr_steps epoch_accu = (n_correct*100)/nb_tr_examples print(f"Training Loss Epoch: {epoch_loss}") print(f"Training Accuracy Epoch: {epoch_accu}") return for epoch in range(EPOCHS): train(epoch) ``` <a id='section06'></a> ### Validating the Model During the validation stage we pass the unseen data(Testing Dataset) to the model. This step determines how good the model performs on the unseen data. This unseen data is the 20% of `newscorpora.csv` which was seperated during the Dataset creation stage. During the validation stage the weights of the model are not updated. 
Only the final output is compared to the actual value. This comparison is then used to calculate the accuracy of the model.

As you can see, the model predicts the correct category of a given headline with roughly 99.9% accuracy.

```
def valid(model, testing_loader):
    model.eval()
    n_correct = 0; n_wrong = 0; total = 0
    tr_loss = 0; nb_tr_steps = 0; nb_tr_examples = 0   # accumulators must be initialized before use
    with torch.no_grad():
        for _, data in enumerate(testing_loader, 0):
            ids = data['ids'].to(device, dtype = torch.long)
            mask = data['mask'].to(device, dtype = torch.long)
            targets = data['targets'].to(device, dtype = torch.long)
            outputs = model(ids, mask).squeeze()
            loss = loss_function(outputs, targets)
            tr_loss += loss.item()
            big_val, big_idx = torch.max(outputs.data, dim=1)
            n_correct += calcuate_accu(big_idx, targets)

            nb_tr_steps += 1
            nb_tr_examples += targets.size(0)

            if _ % 5000 == 0:
                loss_step = tr_loss/nb_tr_steps
                accu_step = (n_correct*100)/nb_tr_examples
                print(f"Validation Loss per 5000 steps: {loss_step}")
                print(f"Validation Accuracy per 5000 steps: {accu_step}")
    epoch_loss = tr_loss/nb_tr_steps
    epoch_accu = (n_correct*100)/nb_tr_examples
    print(f"Validation Loss Epoch: {epoch_loss}")
    print(f"Validation Accuracy Epoch: {epoch_accu}")

    return epoch_accu

print('This is the validation section to print the accuracy and see how it performs')
print('Here we are leveraging on the dataloader created for the validation dataset, the approach is using more of pytorch')
acc = valid(model, testing_loader)
print("Accuracy on test data = %0.2f%%" % acc)
```

<a id='section07'></a>
### Saving the Trained Model Artifacts for inference

This is the final step in the process of fine tuning the model.

The model and its vocabulary are saved locally. These files are then used in the future to make inference on new inputs of news headlines.

Please remember that a trained neural network is only useful when used in actual inference after its training. In the lifecycle of an ML project this is only half the job done.

We will leave the inference of these models for some other day.

```
# Saving the files for re-use

output_model_file = './models/pytorch_distilbert_news.bin'
output_vocab_file = './models/vocab_distilbert_news.bin'

model_to_save = model
torch.save(model_to_save, output_model_file)
tokenizer.save_vocabulary(output_vocab_file)

print('All files saved')
print('This tutorial is completed')
```
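For completeness, here is a minimal sketch of how these saved artifacts might be loaded back later. It assumes the `DistillBERTClass` definition is importable in the inference environment and that the label order matches the `encode_dict` built during pre-processing; the headline and the id-to-label mapping below are purely illustrative.

```
# Minimal inference sketch (assumptions noted above)
import torch
from transformers import DistilBertTokenizer

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# torch.save() was called on the full model object, so torch.load() restores it directly,
# provided the DistillBERTClass definition is available on the import path
model = torch.load('./models/pytorch_distilbert_news.bin', map_location=device)
model.to(device)
model.eval()

# The tokenizer can be rebuilt from the same pretrained checkpoint used during training
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-cased')

# Illustrative mapping; in practice rebuild it from the encode_dict created at training time
id_to_label = {0: 'Entertainment', 1: 'Business', 2: 'Science', 3: 'Health'}

headline = "Example headline about quarterly earnings"   # placeholder input
inputs = tokenizer.encode_plus(headline, add_special_tokens=True, max_length=512,
                               pad_to_max_length=True, return_tensors='pt')

with torch.no_grad():
    logits = model(inputs['input_ids'].to(device), inputs['attention_mask'].to(device))

print(id_to_label[int(torch.argmax(logits, dim=1))])
```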
<a href="https://colab.research.google.com/github/jmillanm/ColabScripts/blob/master/UrlExtractorImagesAttributes.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` %%bash pip install cairosvg > /dev/null from bs4 import BeautifulSoup, element import cairosvg import csv from io import BytesIO import pandas as pd from PIL import Image import re import requests import time from typing import Dict, List, Optional, Text, Tuple import urllib.request, urllib.parse def _get_urls_from_csv(path: Text) -> List: contents = [] with open(path,'r') as csvf: # Open file in read mode urls = csv.reader(csvf) for url in urls: contents.append(url) return contents def _base_url(url:Text, with_path:bool=False) -> Text: parsed = urllib.parse.urlparse(url) path = '/'.join(parsed.path.split('/')[:-1]) if with_path else '' parsed = parsed._replace(path=path) parsed = parsed._replace(params='') parsed = parsed._replace(query='') parsed = parsed._replace(fragment='') return parsed.geturl() def _request_html_from_url(url:Text, headers:Dict) -> bool: response = requests.get(url, headers=headers) if response.status_code == 200: response.__len__ = len([response]) return response return "request_html_from_url status_code %s" % response.status_code def _find_img_tags(response: requests.models.Response, parser: 'html.parser' or 'lxml' or 'lxml-xml' or 'html5lib') -> element.ResultSet: soup = BeautifulSoup(response.text, parser) script_tags = soup.findAll('img') return script_tags columns = ['path', 'class', 'alt', 'title', 'html_tag', 'file_extension'] def _value_extractor_from_attr(html_tag: element.Tag, attrs: List) -> List: attributes = {'path':str(html_tag.get('src')), 'class':str(html_tag.get('class')), 'alt':str(html_tag.get('alt')), 'title': str(html_tag.get('title')), 'html_tag':str(html_tag), 'file_extension':str(html_tag.get('src')).split(".")[-1]} desired_attributes = {k: v for k, v in attributes.items() if k in attrs} return list(desired_attributes.values()) def _extract_data_images(script_tags: element.ResultSet, columns: List) -> pd.core.frame.DataFrame: data=[] df = pd.DataFrame(columns=columns) for script in script_tags: values = _value_extractor_from_attr(script, columns) data.append(dict(zip(columns, values))) df = df.append(data, True) return df def _image_size(url_img: Text) -> Tuple[int, int]: # Tested on jpg, jpeg and png # For svg, use svg size response = requests.get(url_img) img = Image.open(BytesIO(response.content)) img_size = img.size img.close() return img_size def _get_content_length(url_img: Text, headers) -> str: req = urllib.request.Request(url_img, method='HEAD', headers=headers) f = urllib.request.urlopen(req) length = f.headers['Content-Length'] return length def _svg_size(url_img: Text) -> Tuple[int, int]: response = requests.get(url_img) out = BytesIO() cairosvg.svg2png(url=url_img, write_to=out) img = Image.open(out) img_size = img.size img.close() return img_size def _get_img_size(url_img: Text, img_type: Text) -> float: if img_type == 'svg': try: img_size = _svg_size(url_img) except: #outlier to detect errors img_size = 999999999999999 else: try: img_size = _image_size(url_img) except: #outlier to detect errors img_size = 999999999999999 return img_size def image_scanner_from_url(url: Text, headers: Dict) -> pd.core.frame.DataFrame: html = _request_html_from_url(url, headers) if type(html) == str: raise Exception(html) img_tags = _find_img_tags(html, 'html.parser') data_images = _extract_data_images(img_tags, 
columns) return data_images def image_scanner_add_url(url: Text, data_images: pd.core.frame.DataFrame) -> pd.core.frame.DataFrame: data_images['url_img'] = data_images.apply( lambda x: x.path \ if x.path.startswith('http')\ else f"{_base_url(url)}/{x.path}", axis=1) return data_images def image_scanner_add_size(data_images: pd.core.frame.DataFrame) -> pd.core.frame.DataFrame: data_images['size'] = data_images.apply( lambda x: _get_img_size(x.url_img, x.file_extension), axis=1) return data_images def image_scanner_add_content_length(data_images: pd.core.frame.DataFrame, headers: Dict) -> pd.core.frame.DataFrame: data_images['content_length'] = data_images.apply( lambda x: float(_get_content_length(x.url_img, headers)), axis=1) return data_images def parse_all_info_img(url: Text, headers: Dict) -> pd.core.frame.DataFrame: data_images = image_scanner_from_url(url, headers) data_images = image_scanner_add_url(url, data_images) data_images = image_scanner_add_size(data_images) data_images = image_scanner_add_content_length(data_images, headers) data_images['content_length_kB'] = data_images.content_length/1000 data_images['url'] = url return data_images def pipe_all_info_img(all_info: pd.core.frame.DataFrame, columns_to_select: List): return all_info[columns_to_select] def make_extraction_process(url: Text, headers: Dict, columns_to_select: List) -> pd.core.frame.DataFrame: all_info = parse_all_info_img(url, headers) data = pipe_all_info_img(all_info, columns_to_select) return data if __name__ == "__main__": # you only have to change the csv_path and assign a name for the new file # csv with the list of urls without column title csv_path = 'path/to/your.csv' data_extration_name = 'results.csv' if 'data' in globals(): del(data) urls = _get_urls_from_csv(csv_path) headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'} columns_to_select = ['url','path', 'alt', 'title', 'size', 'content_length_kB'] run = 1 for url in urls: if run == 1: data = make_extraction_process(url[0], headers, columns_to_select) else: data = data.append(make_extraction_process(url[0], headers, columns_to_select), ignore_index=True) run+=1 data.to_csv(data_extration_name) ```
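If you want to audit a single page without preparing a CSV, the helper functions above can also be called directly. Below is a small usage sketch, assuming the cell above has already been run; the URL is a placeholder, and the alt-text filter simply flags images whose `alt` attribute came back empty or missing.

```
# Placeholder URL -- substitute a page you actually want to audit
page_url = 'https://www.example.com/some-page'

headers = {'User-Agent': 'Mozilla/5.0'}
columns_to_select = ['url', 'path', 'alt', 'title', 'size', 'content_length_kB']

single_page = make_extraction_process(page_url, headers, columns_to_select)
print(single_page.head())

# Images without useful alt text are easy to spot for an accessibility check
missing_alt = single_page[single_page['alt'].isin(['None', ''])]
print(missing_alt[['url', 'path']])
```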
<div class="alert alert-block alert-info" style="margin-top: 20px"> <a href="https://cocl.us/topNotebooksPython101Coursera"> <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/TopAd.png" width="750" align="center"> </a> </div> <a href="https://cognitiveclass.ai/"> <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png" width="200" align="center"> </a> <h1>Reading Files Python</h1> <p><strong>Welcome!</strong> This notebook will teach you about reading the text file in the Python Programming Language. By the end of this lab, you'll know how to read text files.</p> <h2>Table of Contents</h2> <div class="alert alert-block alert-info" style="margin-top: 20px"> <ul> <li><a href="download">Download Data</a></li> <li><a href="read">Reading Text Files</a></li> <li><a href="better">A Better Way to Open a File</a></li> </ul> <p> Estimated time needed: <strong>40 min</strong> </p> </div> <hr> <h2 id="download">Download Data</h2> ``` # Download Example file import wget url = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/labs/example1.txt" #/ Example1 = wget.download(url) help(open) with open("resources/data/Example1.txt", "x") as WriteFile: WriteFile.write(Example1) ``` <hr> <h2 id="read">Reading Text Files</h2> One way to read or write a file in Python is to use the built-in <code>open</code> function. The <code>open</code> function provides a <b>File object</b> that contains the methods and attributes you need in order to read, save, and manipulate the file. In this notebook, we will only cover <b>.txt</b> files. The first parameter you need is the file path and the file name. An example is shown as follow: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Images/ReadOpen.png" width="500" /> The mode argument is optional and the default value is <b>r</b>. In this notebook we only cover two modes:  <ul> <li><b>r</b> Read mode for reading files </li> <li><b>w</b> Write mode for writing files</li> </ul> For the next example, we will use the text file <b>Example1.txt</b>. The file is shown as follow: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Images/ReadFile.png" width="200" /> We read the file: ``` # Read the Example1.txt example1 = "/resources/data/Example1.txt" file1 = open(example1, "r") ``` We can view the attributes of the file. The name of the file: ``` # Print the path of file file1.name ``` The mode the file object is in: ``` # Print the mode of file, either 'r' or 'w' file1.mode ``` We can read the file and assign it to a variable : ``` # Read the file FileContent = file1.read() FileContent ``` The <b>/n</b> means that there is a new line. We can print the file: ``` # Print the file with '\n' as a new line print(FileContent) ``` The file is of type string: ``` # Type of file content type(FileContent) ``` We must close the file object: ``` # Close file after finish file1.close() ``` <hr> <h2 id="better">A Better Way to Open a File</h2> Using the <code>with</code> statement is better practice, it automatically closes the file even if the code encounters an exception. The code will run everything in the indent block then close the file object. 
``` # Open file using with with open(example1, "r") as file1: FileContent = file1.read() print(FileContent) ``` The file object is closed, you can verify it by running the following cell: ``` # Verify if the file is closed file1.closed ``` We can see the info in the file: ``` # See the content of file print(FileContent) ``` The syntax is a little confusing as the file object is after the <code>as</code> statement. We also don’t explicitly close the file. Therefore we summarize the steps in a figure: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Images/ReadWith.png" width="500" /> We don’t have to read the entire file, for example, we can read the first 4 characters by entering three as a parameter to the method **.read()**: ``` # Read first four characters with open(example1, "r") as file1: print(file1.read(4)) ``` Once the method <code>.read(4)</code> is called the first 4 characters are called. If we call the method again, the next 4 characters are called. The output for the following cell will demonstrate the process for different inputs to the method <code>read()</code>: ``` # Read certain amount of characters with open(example1, "r") as file1: print(file1.read(4)) print(file1.read(4)) print(file1.read(7)) print(file1.read(15)) ``` The process is illustrated in the below figure, and each color represents the part of the file read after the method <code>read()</code> is called: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Images/ReadChar.png" width="500" /> Here is an example using the same file, but instead we read 16, 5, and then 9 characters at a time: ``` # Read certain amount of characters with open(example1, "r") as file1: print(file1.read(16)) print(file1.read(5)) print(file1.read(9)) ``` We can also read one line of the file at a time using the method <code>readline()</code>: ``` # Read one line with open(example1, "r") as file1: print("first line: " + file1.readline()) ``` We can use a loop to iterate through each line: ``` # Iterate through the lines with open(example1,"r") as file1: i = 0; for line in file1: print("Iteration", str(i), ": ", line) i = i + 1; ``` We can use the method <code>readlines()</code> to save the text file to a list: ``` # Read all lines and save as a list with open(example1, "r") as file1: FileasList = file1.readlines() ``` Each element of the list corresponds to a line of text: ``` # Print the first line FileasList[0] # Print the second line FileasList[1] # Print the third line FileasList[2] ``` <hr> <h2>The last exercise!</h2> <p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/" target="_blank">this article</a> to learn how to share your work. 
<hr> <div class="alert alert-block alert-info" style="margin-top: 20px"> <h2>Get IBM Watson Studio free of charge!</h2> <p><a href="https://cocl.us/bottemNotebooksPython101Coursera"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/BottomAd.png" width="750" align="center"></a></p> </div> <h3>About the Authors:</h3> <p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p> Other contributors: <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a> <hr> <p>Copyright &copy; 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
<a href="https://colab.research.google.com/github/hzwer/LearningToPaint/blob/master/LearningToPaint.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` # !git clone https://github.com/hzwer/LearningToPaint.git # cd LearningToPaint/ ``` Testing ``` !wget "https://drive.google.com/uc?export=download&id=1-7dVdjCIZIxh8hHJnGTK-RA1-jL1tor4" -O renderer.pkl !wget "https://drive.google.com/uc?export=download&id=1a3vpKgjCVXHON4P7wodqhCgCMPgg1KeR" -O actor.pkl !wget -U NoSuchBrowser/1.0 -O image/test.png https://raw.githubusercontent.com/hzwer/LearningToPaint/master/image/Trump.png !python3 baseline/test.py --max_step=80 --actor=actor.pkl --renderer=renderer.pkl --img=image/test.png --divide=5 !ffmpeg -r 30 -f image2 -i output/generated%d.png -s 512x512 -c:v libx264 -pix_fmt yuv420p video.mp4 -q:v 0 -q:a 0 from IPython.display import display, Image import moviepy.editor as mpy display(mpy.ipython_display('video.mp4', height=256, max_duration=100.)) display(Image('output/generated399.png')) ``` Training ``` !mkdir data cd data # !gdown https://drive.google.com/uc?id=0B7EVK8r0v71pZjFTYXZWM3FlRnM # !unzip img_align_celeba.zip # !rm img_align_celeba.zip cd デスクトップ/LearningToPaint !python3 baseline/train_renderer.py !pip install tensorboardX %%writefile baseline/env.py import sys import json import torch import numpy as np import argparse import torchvision.transforms as transforms import cv2 from DRL.ddpg import decode from utils.util import * from PIL import Image from torchvision import transforms, utils device = torch.device("cuda" if torch.cuda.is_available() else "cpu") aug = transforms.Compose( [transforms.ToPILImage(), transforms.RandomHorizontalFlip(), ]) width = 128 convas_area = width * width img_train = [] img_test = [] train_num = 0 test_num = 0 class Paint: def __init__(self, batch_size, max_step): self.batch_size = batch_size self.max_step = max_step self.action_space = (13) self.observation_space = (self.batch_size, width, width, 7) self.test = False def load_data(self): # CelebA global train_num, test_num for i in range(200000): img_id = '%06d' % (i + 1) try: img = cv2.imread('./data/img_align_celeba/' + img_id + '.jpg', cv2.IMREAD_UNCHANGED) img = cv2.resize(img, (width, width)) if i > 2000: train_num += 1 img_train.append(img) else: test_num += 1 img_test.append(img) finally: if (i + 1) % 10000 == 0: print('loaded {} images'.format(i + 1)) print('finish loading data, {} training images, {} testing images'.format(str(train_num), str(test_num))) def pre_data(self, id, test): if test: img = img_test[id] else: img = img_train[id] if not test: img = aug(img) img = np.asarray(img) return np.transpose(img, (2, 0, 1)) def reset(self, test=False, begin_num=False): self.test = test self.imgid = [0] * self.batch_size self.gt = torch.zeros([self.batch_size, 3, width, width], dtype=torch.uint8).to(device) for i in range(self.batch_size): if test: id = (i + begin_num) % test_num else: id = np.random.randint(train_num) self.imgid[i] = id self.gt[i] = torch.tensor(self.pre_data(id, test)) self.tot_reward = ((self.gt.float() / 255) ** 2).mean(1).mean(1).mean(1) self.stepnum = 0 self.canvas = torch.zeros([self.batch_size, 3, width, width], dtype=torch.uint8).to(device) self.lastdis = self.ini_dis = self.cal_dis() return self.observation() def observation(self): # canvas B * 3 * width * width # gt B * 3 * width * width # T B * 1 * width * width ob = [] T = torch.ones([self.batch_size, 1, width, width], dtype=torch.uint8) * self.stepnum 
return torch.cat((self.canvas, self.gt, T.to(device)), 1) # canvas, img, T def cal_trans(self, s, t): return (s.transpose(0, 3) * t).transpose(0, 3) def step(self, action): self.canvas = (decode(action, self.canvas.float() / 255) * 255).byte() self.stepnum += 1 ob = self.observation() done = (self.stepnum == self.max_step) reward = self.cal_reward() # np.array([0.] * self.batch_size) return ob.detach(), reward, np.array([done] * self.batch_size), None def cal_dis(self): return (((self.canvas.float() - self.gt.float()) / 255) ** 2).mean(1).mean(1).mean(1) def cal_reward(self): dis = self.cal_dis() reward = (self.lastdis - dis) / (self.ini_dis + 1e-8) self.lastdis = dis return to_numpy(reward) # !python3 baseline/train.py --max_step=200 --debug --batch_size=96 !python3 baseline/train.py --max_step=200 --debug --batch_size=12 --env_batch=12 --train_times=10000 ```
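The reward computed in `cal_reward` is the decrease in L2 distance to the target image since the previous step, normalized by the distance of the initial blank canvas. Below is a small standalone sketch of that computation using made-up tensors rather than real canvases, just to make the normalization explicit; it mirrors `cal_dis`/`cal_reward` for the first step from an empty canvas.

```
import torch

def l2_dis(canvas, gt):
    # mean squared pixel error per image, matching Paint.cal_dis
    return (((canvas.float() - gt.float()) / 255) ** 2).mean(1).mean(1).mean(1)

gt = torch.randint(0, 256, (2, 3, 128, 128), dtype=torch.uint8)   # pretend target images
blank = torch.zeros_like(gt)                                      # empty canvases
painted = (gt.float() * 0.5).byte()                               # canvases halfway to the target

ini_dis = l2_dis(blank, gt)
new_dis = l2_dis(painted, gt)
reward = (ini_dis - new_dis) / (ini_dis + 1e-8)   # fraction of the initial gap closed
print(reward)                                     # roughly 0.75 for this toy example
```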
<img src="classical_gates.png" /> ``` %matplotlib inline # Importing standard Qiskit libraries from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute, BasicAer from qiskit.tools.jupyter import * from qiskit.visualization import * qr = QuantumRegister(2, 'qubit') cr = ClassicalRegister(2, name="bit") circuit = QuantumCircuit(qr, cr) circuit.h(qr[0]) circuit.cx(qr[0], qr[1]) circuit.measure(qr, cr) circuit.draw('mpl', initial_state=True) # Load simulator local_simulator = BasicAer.get_backend('qasm_simulator') job = execute(circuit, backend=local_simulator, shots=1000) job.result().get_counts() ``` # Deutsch Algorithm <table> <tbody> <tr> <td colspan="2"><img src="deutsch_problem.png" /></td> </tr> <tr> <td><img src="classic_oracle.png" /></td> <td><img src="quantum_oracle.png" /></td> </tr> </tbody> </table> ## Oracle 4: Constant zero ``` input = QuantumRegister(1, name='input') output = QuantumRegister(1, name='output') constant0 = QuantumCircuit(input, output, name='oracle') oracle4 = constant0.to_instruction() constant0.draw('mpl', initial_state=True) ``` ## Oracle 3: Constant one ``` input = QuantumRegister(1, name='input') temp = QuantumRegister(1, name='output') constant1 = QuantumCircuit(input, temp, name='oracle') constant1.x(temp) oracle3 = constant1.to_instruction() constant1.draw('mpl', initial_state=True) ``` ## Oracle 1: Identity ``` input = QuantumRegister(1, name='input') temp = QuantumRegister(1, name='output') identity = QuantumCircuit(input, temp, name='oracle') identity.cx(input, temp) oracle1 = identity.to_instruction() identity.draw('mpl', initial_state=True) ``` ## Oracle 2: Invert ``` input = QuantumRegister(1, name='input') output = QuantumRegister(1, name='output') invert = QuantumCircuit(input, output, name='oracle') invert.cx(input, output) invert.x(output) oracle2 = identity.to_instruction() invert.draw('mpl', initial_state=True) ``` ## Run an oracle ``` result = ClassicalRegister(1, name='result') circuit = QuantumCircuit(input, output, result) circuit.x(input) # <- set input to 1 circuit.barrier() circuit += identity circuit.barrier() circuit.measure(output, result) circuit.draw('mpl', initial_state=True) execute(circuit, backend=local_simulator, shots=10000).result().get_counts() ``` ## Running Deutsch's Algorithm ``` qr = QuantumRegister(2, name='qubits') cr = ClassicalRegister(1, name='result') circuit = QuantumCircuit(qr, cr) circuit.x(qr[1]) circuit.h(qr) circuit.append(oracle3, [qr[0], qr[1]]) circuit.h(qr[0]) circuit.measure(qr[0], cr[0]); circuit.draw('mpl', initial_state=True) counts = execute(circuit, backend=local_simulator, shots=1).result().get_counts() counts['BALANCED'] = counts.pop('1', None) counts['CONSTANT'] = counts.pop('0', None) print(counts) ``` # Real device! ``` import qiskit.tools.jupyter from qiskit import IBMQ from qiskit.providers.ibmq import least_busy provider = IBMQ.load_account() least_busy_device = least_busy(provider.backends(simulator=False, filters=lambda b: b.configuration().n_qubits >= 2)) least_busy_device job = execute(circuit, backend=least_busy_device, shots=1000) print(job.status()) counts = job.result().get_counts() print(counts) counts['BALANCED'] = counts.pop('1', None) counts['CONSTANT'] = counts.pop('0', None) print(counts) ```
## Sariyer Academy Research and Production focused AI Education Examination The project idea is creating a simple recomendation engine based of user single answer. ### Introduction * [Preperation of the Data](#Preperations) * [Manipulation and Storing the sentences](#Manipulation-Data) Create list of setences from example and make a category list from all example sentences * [Using Algoritms and functions](#Algorithm) Checking the user answer and If the user had answer only a single word then should recomend a random sentence from same category. If the user answers a long sentence then we should use the Jaccard Similarity * [Result](#Result) ``` import pandas as pd import numpy as np import random import re import warnings warnings.filterwarnings('ignore') pd.options.display.max_colwidth = 300 ``` ### Preperations We have Five different category of aphorism from Turkish language that "Hayat"(Life), "Sevgi"(Love), "Gayret"(Assiduity), "Umut"(Hope) and "Idare"(Management) ``` Hayat = """ Olumlu düşünen, bütün olası dünyaların en iyisinde yaşadığımızı söyler ve olumsuz düşünen, bunun gerçek olmasından korkar. James Cabell Bu dünyaya gelmenin sadece tek bir yolu vardır, terk etmenin ise çok fazla yolu. Donald Harington Birisinin ‘Hayat zor.’ diye yakındığını duyduğumda, her zaman, ‘Neye kıyasla?’ diye sormayı isterim. Sidney Harris Mutsuz olmamamız gerekir. Kimsenin hayatla bir kontratı yok. DavidHeath Hayatını kazanırken yaşamasını bilmeyen bir adam, servetini kazanmadan öncesine göre daha fakirdir. Josiah Holland Yarının ne olacağını sormaktan vazgeç. Her gün, sana verilen bir hazinedir. Eline geçtikçe değerlendir. Horace Hayatınızı yaşamanın en iyi yolu, sizden sonra da kalıcı olacak şeyler için harcamaktır. William James Biz Japonlar, küçük zevklerden hoşlanırız, israftan değil. Daha fazlasını karşılayabilirse de, insanın basit bir yaşam tarzı olması kanısındayım. Massaru Ibuka Benim sanatım ve mesleğim yaşamaktır. Montaigne Hayat, çikolata ile dolu bir kutu gibidir. Ne çıkacağını asla bilemezsiniz. Forrest Gump Filmi Hayat, büyük bir sürprizdir. Ölümün neden daha büyük bir sürpriz olması gerektiğini anlayamıyorum. Vladimir Nabokov """ Sevgi = """ Karım, benim ‘Seni seviyorum.’ dememi binlerce kez duydu ama, hiçbir zaman ‘Üzgünüm.’ dediğimi duymadı. Bruce Willis Büyüklerine saygı, küçüklerine şevkat göstermeyenler, bizden değildir. Hz, Muhammed Sevip de kaybetmek, sevmemiş olmaktan daha iyidir. Seneca Sevgiyle düşünün, sevgiyle konuşun, sevgiyle davranın. Her ihtiyaç karşılanacaktır. James Ailen Sevgi, insanı birliğe, bencillik ise yalnızlığa götürür. Schiller Sevdiği kadını ve sevdiği işi bulan bir erkek, yeryüzünde cenneti bulmuş demektir. Helen Rowland Kızgınlık, kırgınlık ya da acıya tutunmayın. Enerjinizi çalarlar ve sizi sevmekten alıkoyarlar. Leo Buscaglia Herkes temel bir şeyi unutuyor; insanlar sizi, siz onları sevmeden sevmeyecekler. Pat Carroll Sevgi, dünyadaki yaşamın aktığı nehirdir. Henry Beecher Sevmeden verebilirsiniz ama, vermeden sevemezsiniz. Amy Carmichael """ Gayret = """ Dileyin verilecektir; arayın bulacaksınız; kapıyı çalın size açılacaktır. Hz.İsa (a.s.) Damlayan su, mermeri, yürüyen de dağları deler. Ovidius Hazine, eziyet çekene gözükür. Hz. Mevlâna Beklenen gün gelecekse, çekilen çile kutsaldır.V. Hugo Yarınlar yorgun ve bezgin kimseler değil, rahatını terk edebilen gayretli insanlara aittir. Cicero Emek olmadan, yemek olmaz. Atasözü Kötülüğün hakim olmaması için tek şart, iyilerin gayret göstermeleridir. 
Kelâmı Kibar İnsan bir şeyi çok ciddi olarak arzu etmeye görsün; hiçbir şey erişilemeyecek kadar yükseklerde değildir. H. Andersen Ümitsizlik ve korkaklık, çalışma azminden yoksun insanlara göredir. A. Mourois Bir çığlık, bir çığ meydana getirir. Soljenitsin Kıramıyacağm zinciri, hiç olmazsa kemir. C. Şehabettin Basit bir insanın elinden geleni yapması; bilgili ve zeki bir insanın tembelliğinden çok daha değerlidir. Taşı delen, suyun kuvveti değil; dalgaların sürekliliğidir. Damlayan su, taşı deler. Ovidius """ Umut = """ Başlangıçta fazla umut ederiz ama, sonrasında yeteri kadar değil. Joesph Roux Hayatta umutsuz durumlar yoktur, sadece umutsuzluk besleyen insanlar vardır. Booth Şafaktan önce her yer karanlıktır. Katherine Mansfield Umut, gözle görülemeyeni görür, elle tutulamayanı hisseder ve imkansızı başarır. Anonim Büyük umutlar, büyük adamlar yaratır. Thomas Fuller Umut, fakirin ekmeğidir. Thales Miletus Kazanmayı umut etmeyen, daha baştan kaybetmiştir. Jose Olmedo Umut etmeyen, hiçbir zaman umutsuzluğa kapılmaz. George Bernard Shaw Hiçbir kış, sonsuza kadar sürmez. Hal Borland Umut, şüphe etmek daha kolayken, inanmaya devam etmektir. Anonim Umut, mutluluktan alınmış bir miktar borçtur. Joseph İoubert """ Idare = """ Çok söyleyen değil, çok iş yapan yöneticiye muhtaçsınız. Hz. Ömer (r.a.) Sevginin kurduğu devleti, adalet devam ettirir. Farabi Her memleketin hakettiği bir hükümeti vardır. J. Maistere Hükümetlerin en kötüsü, suçsuzu korkutandır. Kendinizi yönetirken kafanızı, başkalarını yönetirken kalbinizi kullanın. Evi ev eden avrat, yurdu şen eden devlet. Atasözü Mal cimrilerde, silah korkanlarda, idare de zayıflarda olursa, işler bozulur. Hz. Ebubekir İnsanlar üç sınıftır: İdareciler, din alimleri ve halk. İdareciler bozulunca, geçim; alimler bozulunca, din; halk bozulunca da hak bozulur. Ebu Bekir Verrak Kendini yönet, dünyayı yönetecek gücü bulabilirsin. Plato Kötü idarecilerin başarısı, halkın ıstırabıdır. Bir memlekette ayaklar baş olursa, başlar ayaklar altında mahvolur. """ # We need to add all list a single list and create each category names categories = [Hayat,Sevgi,Gayret,Umut,Idare] categories_name = ["Hayat", "Sevgi","Gayret","Umut","Idare"] ``` ### Manipulation-Data ``` # Split each row with new line (\n) for i in range(len(categories)): categories[i] = categories[i].split("\n") # Create a dataframe df = [] df = pd.DataFrame(df, columns=['sentence','category']) # Add each sentence to Dataframe (df) with the category names for i in range(len(categories_name)): for k in range(len(categories[i])): df = df.append({'sentence': categories[i][k], 'category': categories_name[i]}, ignore_index=True) df = df[df['sentence']!=""] df.reset_index(drop=True, inplace=True) df["cleaned"]=df['sentence'].copy() for i in range(len(df)): df["cleaned"][i] = re.sub('[!@#’‘?.,\'$]', '', df["cleaned"][i]) df["cleaned"][i] = df["cleaned"][i].lower() ``` ### Algorithm ``` def give_a_recomendation(): result = [] answer = input("Hi Human, What do you want? ") # We need to turn lowercase user answer for correct proportions answer=answer.lower() # One of the main reques is If not a word use Jaccard similarity. 
    # Count the words in the user's answer; if there is more than one word, use Jaccard similarity
    if len(answer.split())>1:
        # Jaccard Similarity
        for i in range(len(df)):
            # Jaccard similarity between the cleaned sentence and the answer
            a = set(df["cleaned"][i].split())
            b = set(answer.split())
            c = a.intersection(b)
            result.append(float(len(c)) / (len(a) + len(b) - len(c)))
        # Store all similarity scores in a new column so we can find the best match
        df['result'] = result
        # Return the sentence of the row with the highest similarity score
        return df[df['result']==df['result'].max()].values[0][0]
    # The second requirement: if the user types a single word and it matches a category name,
    # return a random sentence from that category.
    # If the single word does not match any category name, return a random sentence from the whole data set.
    else:
        for i in categories_name:
            i=i.lower()
            if(i==answer):
                return df["sentence"][df["category"].str.lower()==answer].sample(n=1).values[0]
        return df["sentence"].sample(n=1).values[0]
```

### Result

A summary of the final dataframe:

```
df.head()
```

<p>The project is a small daily challenge project from <a href="https://www.linkedin.com/in/uzay-çetin-b1a9ab100/">Uzay Cetin</a> and his team.</p>
<p>The project idea is to recommend an aphorism in response to the user's input, using Jaccard similarity as well as random selection within the same category. The first step was cleaning and preparing the data for the programme. The programme then checks whether the user's input is a sentence or a single word in order to recommend an aphorism.</p>

The sentences are also stored in a cleaned form, with symbols removed and text lowercased, to improve the similarity match against the user's input; the original sentence is what gets displayed.

<div class="alert alert-success" role="alert">
  Tell me something
</div>

```
give_a_recomendation()
```
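Since the recommendation logic hinges on Jaccard similarity, here is a small self-contained sketch of the measure on its own, using made-up example sentences rather than the notebook's data:

```
def jaccard_similarity(sentence_a, sentence_b):
    """Jaccard similarity between two sentences, treated as sets of lowercase words."""
    a = set(sentence_a.lower().split())
    b = set(sentence_b.lower().split())
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# 2 shared words out of 5 distinct words -> 0.4
print(jaccard_similarity("hayat bir sürprizdir", "hayat büyük bir hazinedir"))
```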
github_jupyter
0.092929
0.972204
# SciPy `minimize_scalar`

`scipy.optimize.minimize_scalar` determines the lowest value in a local neighborhood of a one-dimensional scalar-valued function.

$$ \min_x f(x) \qquad f : \mathbb{R} \rightarrow \mathbb{R} $$

[`scipy.optimize.minimize`](./Optimization_ND.ipynb) is a generalization for multidimensional scalar-valued functions. The submodule also offers a series of global optimization routines that work to find the lowest value over an entire range, instead of in a local neighborhood.

To use the function, you first need to import the `scipy.optimize` submodule. SciPy works on top of NumPy, so we import that as well, and we use PyPlot to look at the data.

```
# Importing the packages
from scipy import optimize
import numpy as np
import matplotlib.pyplot as plt
```

<div id="test_function" />

## Test Function
<hr/>

For the example, we use the test function:

$$ f(x) = x^2 $$

which has a single minimum at $x=0$.

```
# An example function with a single minimum
f = lambda x: x**2
```

We can plot the function to verify that there is indeed a minimum at $x=0$.

```
x = np.arange(-3,3,.01)
plt.plot(x,f(x),label="f")
plt.legend()
plt.title("Test Function")
```

## Basic Usage
<hr />

At its simplest, `minimize_scalar` only requires the function.

```
result = optimize.minimize_scalar(f)
print(result)
```

## Result
<hr/>

The function returns an `OptimizeResult`, a dictionary-like object that stores additional information. Individual components can be accessed by:

```
result.x
```

Available components can be determined by calling:

```
result.keys()
```

## Args
<hr>

The `args` keyword passes additional parameters to the function via a tuple. In the example, I shift the minimum via a parameter `a`.

```
# A function that uses a parameter
f_parameter = lambda x, a: (x - a)**2

result_parameter = optimize.minimize_scalar(f_parameter, args=(2,))
print(result_parameter)
```

## Choosing Methods
<hr />

SciPy Optimize comes with three 1D methods:

* Brent
* Golden
* Bounded

<b>Brent</b>'s method is the default, and it uses inverse quadratic interpolation. Brent's method and the Golden method both rely on <i>brackets</i>, a set of three numbers `a`,`b`,`c` such that `f(b)` is less than both `f(a)` and `f(c)`. The Bounded method instead uses <i>bounds</i>, which consist of only the two endpoints. While brackets are optional for Brent's and Golden, initial bounds are required to start the Bounded method.

```
result_Brent = optimize.minimize_scalar(f,method='Brent')
print(result_Brent)

result_golden = optimize.minimize_scalar(f,method='golden')
print(result_golden)

result_bounded = optimize.minimize_scalar(f,method='bounded',bounds=[-1,1])
print(result_bounded)
```

## Options
<hr>

The maximum number of iterations `maxiter` can be passed in a dictionary to the `options` keyword. Most 1D problems converge in a trivial amount of time, but sometimes you may want manual control to avoid using excessive resources. Here's an example of changing the maximum number of iterations:

```
result_maxiter = optimize.minimize_scalar(f,options={'maxiter':3})
print(result_maxiter)
```
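To illustrate the "local neighborhood" point with something richer than $f(x) = x^2$, here is an additional sketch (not from the original notebook) using a function with two minima, $g(x) = (x^2 - 1)^2$, at $x = \pm 1$. The `bracket` and `bounds` arguments steer `minimize_scalar` toward one minimum or the other:

```
# A function with two local minima, at x = -1 and x = +1
from scipy import optimize  # already imported above
g = lambda x: (x**2 - 1)**2

# A bracket (a, b, c) with g(b) < g(a) and g(b) < g(c) steers Brent's method to the left minimum
left = optimize.minimize_scalar(g, bracket=(-2, -0.5, 0), method='Brent')

# Bounds restrict the Bounded method to the right half of the axis
right = optimize.minimize_scalar(g, method='bounded', bounds=(0, 2))

print(left.x, right.x)  # approximately -1 and +1
```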
github_jupyter
0.544559
0.994002
<a href="https://colab.research.google.com/github/martin-fabbri/colab-notebooks/blob/master/kafka/kafka_python_produce_consume_pynb.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` #@title ## Install/Upgrade packages #@markdown pip installs ``confluent-kafka`` and upgrades ``ipython`` to suport async/await #@markdown Ignore jupyter-console 5.2.0 has requirement ... #@markdown Ignore ERROR: google-colab 1.0.0 has requirement ipython ... #@markdown **Follow instructions to restart kernel after setup is complete** !pip install -q --upgrade ipython !pip install -q --upgrade ipykernel !pip install -q confluent-kafka !pip install -q pykafka from IPython.core.display import HTML HTML(""" <div style="border: 5px solid green; padding: 20px; margin: 20px;"> <h2 style="color:red;">Restart the runtime</h2> <p>We have upgraded IPython and you must restart the runtime (Runtime > Restart runtime ...) before continuing.</p> </div> """) #@title ## Setup Kafka #@markdown This cell will install Kafka 2.12 %%bash sudo apt-get update -qq sudo wget https://www-us.apache.org/dist/kafka/2.3.0/kafka_2.12-2.3.0.tgz -q sudo tar -xzf kafka_2.12-2.3.0.tgz sudo mv kafka_2.12-2.3.0 /opt/kafka #@title ## Start services #@markdown Start ``zookeeper on port 2181`` and ``kafka on port 9092``(default ports). %%bash sudo nohup /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties > /dev/null 2>&1 & sleep 5 sudo nohup /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties > /dev/null 2>&1 & ``` ## List all the Kafka topics available on the server The `--zookeeper` parameter is required everytime you want to call the `kafka-topics` command ``` %%bash /opt/kafka/bin/kafka-topics.sh --list --zookeeper localhost:2181 ``` ## Imports ``` import asyncio from confluent_kafka import Consumer from confluent_kafka import Producer from confluent_kafka.admin import AdminClient, NewTopic from pykafka import KafkaClient ``` ## Config ``` BROKER_URL = 'localhost:9092' TOPIC_NAME = 'python-test-topic' ``` ## Instantiate Kafka admin client ``` client = AdminClient({'bootstrap.servers': BROKER_URL}) client ``` ## Create topic No need for a ``zookeeper`` reference ??? 
```
topic = NewTopic(TOPIC_NAME, num_partitions=1, replication_factor=1)
client.create_topics([topic])
```

## Produce/Consume messages with confluent-kafka

```
async def produce(topic_name):
    """Produces data into a kafka topic"""
    p = Producer({'bootstrap.servers': BROKER_URL})
    for curr_iteration in range(20):
        print(f'Producer: New message: {curr_iteration}')
        p.produce(topic_name, f'Message: {curr_iteration}')
        await asyncio.sleep(1)

async def consume(topic_name):
    """Consumes data from a Kafka topic"""
    c = Consumer({'bootstrap.servers': BROKER_URL, 'group.id': 'python-test-consumer'})
    c.subscribe([topic_name])
    for _ in range(30):
        message = c.poll(1.0)
        if message is None:
            print('Consumer: No message received')
        elif message.error() is not None:
            print(f'Consumer: Message had an error {message.error()}')
        else:
            print(f'Consumer: Key: {message.key()}, {message.value()}')
        await asyncio.sleep(1)

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.run_until_complete(asyncio.gather(
    produce(TOPIC_NAME),
    consume(TOPIC_NAME)
))
loop.close()
```

## Produce/Consume messages with pykafka

```
client = KafkaClient(hosts=BROKER_URL)
client.topics

async def produce(topic_name):
    """Produces data into a kafka topic"""
    topic = client.topics[b'python-test-topic']
    message_key = 0
    with topic.get_producer() as producer:
        for curr_iteration in range(20):
            msg = f'test message {message_key}'
            print(f'Producer: New message: {msg}')
            producer.produce(msg.encode())
            message_key += 1
            await asyncio.sleep(1)

async def consume(topic_name):
    """Consumes data from a Kafka topic"""
    topic = client.topics[b'python-test-topic']
    consumer = topic.get_simple_consumer()
    for message in consumer:
        if message is None:
            print('Consumer: No message received')
        else:
            print(f'Consumer: Key: {message.offset}, {message.value}')
        await asyncio.sleep(1)

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.run_until_complete(asyncio.gather(
    produce(TOPIC_NAME),
    consume(TOPIC_NAME)
))
loop.close()
```
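The confluent-kafka producer above is fire-and-forget: `produce()` only enqueues the message, and we never learn whether the broker actually accepted it. A common refinement is a delivery-report callback, sketched below as an add-on (it reuses `BROKER_URL` and `TOPIC_NAME` from the config cell; the function name is illustrative):

```
from confluent_kafka import Producer

def delivery_report(err, msg):
    """Called once per message to report success or failure of delivery."""
    if err is not None:
        print(f'Delivery failed: {err}')
    else:
        print(f'Delivered to {msg.topic()} [partition {msg.partition()}] at offset {msg.offset()}')

p = Producer({'bootstrap.servers': BROKER_URL})
for i in range(5):
    p.produce(TOPIC_NAME, f'message with callback: {i}', callback=delivery_report)
    p.poll(0)   # serve the delivery callbacks
p.flush()       # block until all outstanding messages are delivered
```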
github_jupyter
0.389198
0.748145
``` ['cat', 'bat', 'rat', 'elephant'] ``` ### Getting Individual Values in a List with Indexes ``` spam = ['cat', 'bat', 'rat', 'elephant'] spam[0] spam[1] spam[2] spam[3] ``` ### Negative Indexes ``` spam = ['cat', 'bat', 'rat', 'elephant'] spam[-1] spam[-3] 'The {} is afraid of the {}.'.format(spam[-1], spam[-3]) ``` ### Getting Sublists with Slices ``` spam = ['cat', 'bat', 'rat', 'elephant'] spam[0:4] spam[1:3] spam[0:-1] spam = ['cat', 'bat', 'rat', 'elephant'] spam[:2] spam[1:] spam[:] ``` ### Getting a list Length with len ``` spam = ['cat', 'dog', 'moose'] len(spam) ``` ### Changing Values in a List with Indexes ``` spam = ['cat', 'bat', 'rat', 'elephant'] spam[1] = 'aardvark' spam spam[2] = spam[1] spam spam[-1] = 12345 spam ``` ### List Concatenation and List Replication ``` [1, 2, 3] + ['A', 'B', 'C'] ['X', 'Y', 'Z'] * 3 spam = [1, 2, 3] spam = spam + ['A', 'B', 'C'] spam ``` ### Removing Values from Lists with del Statements ``` spam = ['cat', 'bat', 'rat', 'elephant'] del spam[2] spam del spam[2] spam ``` ### Using for Loops with Lists ``` supplies = ['pens', 'staplers', 'flame-throwers', 'binders'] for i, supply in enumerate(supplies): print('Index {} in supplies is: {}'.format(str(i), supply)) ``` ### Looping Through Multiple Lists with zip ``` name = ['Pete', 'John', 'Elizabeth'] age = [6, 23, 44] for n, a in zip(name, age): print('{} is {} years old'.format(n, a)) ``` ### The in and not in Operators ``` 'howdy' in ['hello', 'hi', 'howdy', 'heyas'] spam = ['hello', 'hi', 'howdy', 'heyas'] False 'howdy' not in spam 'cat' not in spam ``` ### The Multiple Assignment Trick The multiple assignment trick is a shortcut that lets you assign multiple variables with the values in a list in one line of code. So instead of doing this: ``` cat = ['fat', 'orange', 'loud'] size = cat[0] color = cat[1] disposition = cat[2] ``` You could type this line of code: ``` cat = ['fat', 'orange', 'loud'] size, color, disposition = cat ``` The multiple assignment trick can also be used to swap the values in two variables: ``` a, b = 'Alice', 'Bob' a, b = b, a print(a) print(b) ``` ### Augmented Assignment Operators | Operator | Equivalent | | ----------- | ----------------- | | `spam += 1` | `spam = spam + 1` | | `spam -= 1` | `spam = spam - 1` | | `spam *= 1` | `spam = spam * 1` | | `spam /= 1` | `spam = spam / 1` | | `spam %= 1` | `spam = spam % 1` | Examples: ``` spam = 'Hello' spam += ' world!' spam bacon = ['Zophie'] bacon *= 3 bacon ``` ### Finding a Value in a List with the index Method ``` spam = ['Zophie', 'Pooka', 'Fat-tail', 'Pooka'] spam.index('Pooka') ``` ### Adding Values to Lists with append and insert **append()**: ``` spam = ['cat', 'dog', 'bat'] spam.append('moose') spam ``` **insert()**: ``` spam = ['cat', 'dog', 'bat'] spam.insert(1, 'chicken') spam ``` ### Removing Values from Lists with remove ``` spam = ['cat', 'bat', 'rat', 'elephant'] spam.remove('bat') spam ``` If the value appears multiple times in the list, only the first instance of the value will be removed. ### Sorting the Values in a List with sort ``` spam = [2, 5, 3.14, 1, -7] spam.sort() spam spam = ['ants', 'cats', 'dogs', 'badgers', 'elephants'] spam.sort() spam ``` You can also pass True for the reverse keyword argument to have sort() sort the values in reverse order: ``` spam.sort(reverse=True) spam ``` If you need to sort the values in regular alphabetical order, pass str. 
lower for the key keyword argument in the sort() method call: ``` spam = ['a', 'z', 'A', 'Z'] spam.sort(key=str.lower) spam ``` You can use the built-in function `sorted` to return a new list: ``` spam = ['ants', 'cats', 'dogs', 'badgers', 'elephants'] sorted(spam) ``` ## Tuple Data Type ``` eggs = ('hello', 42, 0.5) eggs[0] eggs[1:3] len(eggs) ``` The main way that tuples are different from lists is that tuples, like strings, are immutable. ## Converting Types with the list and tuple Functions ``` tuple(['cat', 'dog', 5]) list(('cat', 'dog', 5)) list('hello') ```
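Because tuples are immutable, a common pattern (sketched here as an extra example, in the same spirit as the ones above) is to convert a tuple to a list, modify the list, and convert it back:

```
eggs = ('hello', 42, 0.5)
# Tuples can't be changed in place, so convert, modify, and convert back
temp = list(eggs)
temp[0] = 'goodbye'
eggs = tuple(temp)
eggs
```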
github_jupyter
0.193719
0.974239
This exercise creates a surface observation station plot for the state of Florida, using both METAR (datatype *obs*) and Synoptic (datatype *sfcobs*). Because we are using the AWIPS Map Database for state and county boundaries, there is no use of Cartopy `cfeature` in this exercise.

```
from awips.dataaccess import DataAccessLayer
from dynamicserialize.dstypes.com.raytheon.uf.common.time import TimeRange
from datetime import datetime, timedelta
import numpy as np
import cartopy.crs as ccrs
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
from cartopy.feature import ShapelyFeature
from shapely.geometry import Polygon
import matplotlib.pyplot as plt
from metpy.units import units
from metpy.calc import wind_components
from metpy.plots import simple_layout, StationPlot, StationPlotLayout
import warnings
%matplotlib inline

def get_cloud_cover(code):
    if 'OVC' in code:
        return 1.0
    elif 'BKN' in code:
        return 6.0/8.0
    elif 'SCT' in code:
        return 4.0/8.0
    elif 'FEW' in code:
        return 2.0/8.0
    else:
        return 0

# EDEX request for a single state
edexServer = "edex-cloud.unidata.ucar.edu"
DataAccessLayer.changeEDEXHost(edexServer)
request = DataAccessLayer.newDataRequest('maps')
request.addIdentifier('table', 'mapdata.states')
request.addIdentifier('state', 'FL')
request.addIdentifier('geomField', 'the_geom')
request.setParameters('state','name','lat','lon')
response = DataAccessLayer.getGeometryData(request)
record = response[0]
print("Found " + str(len(response)) + " MultiPolygon")

state={}
state['name'] = record.getString('name')
state['state'] = record.getString('state')
state['lat'] = record.getNumber('lat')
state['lon'] = record.getNumber('lon')
#state['geom'] = record.getGeometry()
state['bounds'] = record.getGeometry().bounds
print(state['name'], state['state'], state['lat'], state['lon'], state['bounds'])
print()

# EDEX request for multiple states
request = DataAccessLayer.newDataRequest('maps')
request.addIdentifier('table', 'mapdata.states')
request.addIdentifier('geomField', 'the_geom')
request.addIdentifier('inLocation', 'true')
request.addIdentifier('locationField', 'state')
request.setParameters('state','name','lat','lon')
request.setLocationNames('FL','GA','MS','AL','SC','LA')
response = DataAccessLayer.getGeometryData(request)
print("Found " + str(len(response)) + " MultiPolygons")

# Append each geometry to a numpy array
states = np.array([])
for ob in response:
    print(ob.getString('name'), ob.getString('state'), ob.getNumber('lat'), ob.getNumber('lon'))
    states = np.append(states,ob.getGeometry())
```

Now make sure we can plot the states with a lat/lon grid.

```
def make_map(bbox, proj=ccrs.PlateCarree()):
    fig, ax = plt.subplots(figsize=(16,12),subplot_kw=dict(projection=proj))
    ax.set_extent(bbox)
    gl = ax.gridlines(draw_labels=True, color='#e7e7e7')
    gl.top_labels = gl.right_labels = False
    gl.xformatter = LONGITUDE_FORMATTER
    gl.yformatter = LATITUDE_FORMATTER
    return fig, ax

# buffer our bounds by +/- degrees lat/lon
bounds = state['bounds']
bbox=[bounds[0]-3,bounds[2]+3,bounds[1]-1.5,bounds[3]+1.5]

fig, ax = make_map(bbox=bbox)
shape_feature = ShapelyFeature(states,ccrs.PlateCarree(),
                    facecolor='none', linestyle="-",edgecolor='#000000',linewidth=2)
ax.add_feature(shape_feature)
```

---

## Plot METAR (obs)

Here we use a spatial envelope to limit the request to the boundary of our plot. Without such a filter you may be requesting many tens of thousands of records.
``` # Create envelope geometry envelope = Polygon([(bbox[0],bbox[2]),(bbox[0],bbox[3]), (bbox[1], bbox[3]),(bbox[1],bbox[2]), (bbox[0],bbox[2])]) # New obs request DataAccessLayer.changeEDEXHost(edexServer) request = DataAccessLayer.newDataRequest("obs", envelope=envelope) availableProducts = DataAccessLayer.getAvailableParameters(request) single_value_params = ["timeObs", "stationName", "longitude", "latitude", "temperature", "dewpoint", "windDir", "windSpeed", "seaLevelPress"] multi_value_params = ["presWeather", "skyCover", "skyLayerBase"] params = single_value_params + multi_value_params request.setParameters(*(params)) # Time range lastHourDateTime = datetime.utcnow() - timedelta(minutes = 60) start = lastHourDateTime.strftime('%Y-%m-%d %H:%M:%S') end = datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S') beginRange = datetime.strptime( start , "%Y-%m-%d %H:%M:%S") endRange = datetime.strptime( end , "%Y-%m-%d %H:%M:%S") timerange = TimeRange(beginRange, endRange) # Get response response = DataAccessLayer.getGeometryData(request,timerange) # function getMetarObs was added in python-awips 18.1.4 obs = DataAccessLayer.getMetarObs(response) print("Found " + str(len(response)) + " records") print("Using " + str(len(obs['temperature'])) + " temperature records") ``` Next grab the simple variables out of the data we have (attaching correct units), and put them into a dictionary that we will hand the plotting function later: - Get wind components from speed and direction - Convert cloud fraction values to integer codes [0 - 8] - Map METAR weather codes to WMO codes for weather symbols ``` data = dict() data['stid'] = np.array(obs['stationName']) data['latitude'] = np.array(obs['latitude']) data['longitude'] = np.array(obs['longitude']) tmp = np.array(obs['temperature'], dtype=float) dpt = np.array(obs['dewpoint'], dtype=float) # Suppress nan masking warnings warnings.filterwarnings("ignore",category =RuntimeWarning) tmp[tmp == -9999.0] = 'nan' dpt[dpt == -9999.] 
= 'nan' data['air_temperature'] = tmp * units.degC data['dew_point_temperature'] = dpt * units.degC data['air_pressure_at_sea_level'] = np.array(obs['seaLevelPress'])* units('mbar') direction = np.array(obs['windDir']) direction[direction == -9999.0] = 'nan' u, v = wind_components(np.array(obs['windSpeed']) * units('knots'), direction * units.degree) data['eastward_wind'], data['northward_wind'] = u, v data['cloud_coverage'] = [int(get_cloud_cover(x)*8) for x in obs['skyCover']] data['present_weather'] = obs['presWeather'] proj = ccrs.LambertConformal(central_longitude=state['lon'], central_latitude=state['lat'], standard_parallels=[35]) custom_layout = StationPlotLayout() custom_layout.add_barb('eastward_wind', 'northward_wind', units='knots') custom_layout.add_value('NW', 'air_temperature', fmt='.0f', units='degF', color='darkred') custom_layout.add_value('SW', 'dew_point_temperature', fmt='.0f', units='degF', color='darkgreen') custom_layout.add_value('E', 'precipitation', fmt='0.1f', units='inch', color='blue') ax.set_title(str(response[-1].getDataTime()) + " | METAR Surface Obs | " + edexServer) stationplot = StationPlot(ax, data['longitude'], data['latitude'], clip_on=True, transform=ccrs.PlateCarree(), fontsize=10) custom_layout.plot(stationplot, data) fig ``` --- ## Plot Synoptic (sfcobs) ``` # New sfcobs/SYNOP request DataAccessLayer.changeEDEXHost(edexServer) request = DataAccessLayer.newDataRequest("sfcobs", envelope=envelope) availableProducts = DataAccessLayer.getAvailableParameters(request) # (sfcobs) uses stationId, while (obs) uses stationName, # the rest of these parameters are the same. single_value_params = ["timeObs", "stationId", "longitude", "latitude", "temperature", "dewpoint", "windDir", "windSpeed", "seaLevelPress"] multi_value_params = ["presWeather", "skyCover", "skyLayerBase"] pres_weather, sky_cov, sky_layer_base = [],[],[] params = single_value_params + multi_value_params request.setParameters(*(params)) # Time range lastHourDateTime = datetime.utcnow() - timedelta(minutes = 60) start = lastHourDateTime.strftime('%Y-%m-%d %H:%M:%S') end = datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S') beginRange = datetime.strptime( start , "%Y-%m-%d %H:%M:%S") endRange = datetime.strptime( end , "%Y-%m-%d %H:%M:%S") timerange = TimeRange(beginRange, endRange) # Get response response = DataAccessLayer.getGeometryData(request,timerange) # function getSynopticObs was added in python-awips 18.1.4 sfcobs = DataAccessLayer.getSynopticObs(response) print("Found " + str(len(response)) + " records") print("Using " + str(len(sfcobs['temperature'])) + " temperature records") data = dict() data['stid'] = np.array(sfcobs['stationId']) data['lat'] = np.array(sfcobs['latitude']) data['lon'] = np.array(sfcobs['longitude']) # Synop/sfcobs temps are stored in kelvin (degC for METAR/obs) tmp = np.array(sfcobs['temperature'], dtype=float) dpt = np.array(sfcobs['dewpoint'], dtype=float) direction = np.array(sfcobs['windDir']) # Account for missing values tmp[tmp == -9999.0] = 'nan' dpt[dpt == -9999.] 
= 'nan' direction[direction == -9999.0] = 'nan' data['air_temperature'] = tmp * units.kelvin data['dew_point_temperature'] = dpt * units.kelvin data['air_pressure_at_sea_level'] = np.array(sfcobs['seaLevelPress'])* units('mbar') try: data['eastward_wind'], data['northward_wind'] = wind_components( np.array(sfcobs['windSpeed']) * units('knots'),direction * units.degree) data['present_weather'] = sfcobs['presWeather'] except ValueError: pass fig_synop, ax_synop = make_map(bbox=bbox) shape_feature = ShapelyFeature(states,ccrs.PlateCarree(), facecolor='none', linestyle="-",edgecolor='#000000',linewidth=2) ax_synop.add_feature(shape_feature) custom_layout = StationPlotLayout() custom_layout.add_barb('eastward_wind', 'northward_wind', units='knots') custom_layout.add_value('NW', 'air_temperature', fmt='.0f', units='degF', color='darkred') custom_layout.add_value('SW', 'dew_point_temperature', fmt='.0f', units='degF', color='darkgreen') custom_layout.add_value('E', 'precipitation', fmt='0.1f', units='inch', color='blue') ax_synop.set_title(str(response[-1].getDataTime()) + " | SYNOP Surface Obs | " + edexServer) stationplot = StationPlot(ax_synop, data['lon'], data['lat'], clip_on=True, transform=ccrs.PlateCarree(), fontsize=10) custom_layout.plot(stationplot, data) ``` --- ## Plot both METAR and SYNOP ``` custom_layout = StationPlotLayout() custom_layout.add_barb('eastward_wind', 'northward_wind', units='knots') custom_layout.add_value('NW', 'air_temperature', fmt='.0f', units='degF', color='darkred') custom_layout.add_value('SW', 'dew_point_temperature', fmt='.0f', units='degF', color='darkgreen') custom_layout.add_value('E', 'precipitation', fmt='0.1f', units='inch', color='blue') ax.set_title(str(response[-1].getDataTime()) + " | METAR/SYNOP Surface Obs | " + edexServer) stationplot = StationPlot(ax, data['lon'], data['lat'], clip_on=True, transform=ccrs.PlateCarree(), fontsize=10) custom_layout.plot(stationplot, data) fig ```
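Dense networks like the METAR sites above can produce overlapping station models. One optional refinement, sketched below, is MetPy's `reduce_point_density`, which thins stations to a minimum spacing before plotting. This is an add-on to the exercise: it assumes the `data` dictionary (with its `'lon'`/`'lat'` arrays) and the `proj` Lambert Conformal projection defined earlier are still in scope, and the 100 km radius is an arbitrary choice.

```
from metpy.calc import reduce_point_density
import numpy as np

# Station longitudes/latitudes from the dictionary built above
lons = np.asarray(data['lon'])
lats = np.asarray(data['lat'])

# Project lon/lat into the map projection (meters) so the radius is in meters
point_locs = proj.transform_points(ccrs.PlateCarree(), lons, lats)[:, :2]

# Boolean mask keeping only stations at least ~100 km apart
mask = reduce_point_density(point_locs, 100000.)
print("Plotting " + str(mask.sum()) + " of " + str(len(mask)) + " stations")

# The mask can then be applied to each array (e.g. lons[mask], lats[mask])
# before building the StationPlot.
```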
github_jupyter
0.543106
0.886568
# System Model V3 - Query: does market price decline with controller, when $K_i = 0$?

The purpose of this notebook is to simulate a single run of the full CDP and APT system model, using a stochastic Ethereum price and liquidity demand process as a driver. For this simulation run, we will be asking the following question: does market price decline with controller, when $K_i = 0$?

---

2. Query: does market price decline with controller, when $K_i = 0$?
---
   - Run simulation with:
       - ETH price shock OFF
       - Liquidity shocks OFF
       - Controller ON with $K_p$ equal to the recommended value and $K_i = 0$
       - Arbitrageur considers liquidation ratio:
           - ON
           - OFF
       - Target price rescaled:
           - ON
           - OFF

We will override the recommended parameters in [init.py](https://github.com/BlockScience/reflexer/blob/experiment-analysis/models/system_model_v3/model/params/init.py).

In order to test this, configure the experiment in [Query_2.py](experiments/system_model_v3/Query_2.py). To run this simulation, we create a directory in ```experiments/system_model_v3``` called ```Query_2/```, and add a ```logs/``` directory inside of it. Assuming we have all of the required packages (run requirements.txt from the ```reflexer/``` root directory to be sure), and assuming our terminal is in the root directory, we run the following to start the simulation:

```bash
cd experiments/system_model_v3
python3 -m experiments.system_model_v3.Query_2
```

And our simulation will run. The resulting [run log](experiments/system_model_v3/sanity_check_plot/experiment_run_log.md) can be found in the ```experiments/system_model_v3/Query_2/``` directory, along with the simulation results stored as ```experiment_results.hdf5```.

Below we will import and examine the simulation results.

```
# import libraries
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
%matplotlib inline
```

## Import simulation run data

```
os.getcwd()
os.chdir('../../')
os.getcwd()

experiment_results = 'experiments/system_model_v3/Query_2/experiment_results.hdf5'
experiment_results_keys = []
with pd.HDFStore(experiment_results) as store:
    experiment_results_keys = list(filter(lambda x: "results" in x, store.keys()))
    exceptions_keys = list(filter(lambda x: "exceptions" in x, store.keys()))

# A list of all experiment result keys
experiment_results_keys

# Copy a results_ key from the above keys to select the experiment
experiment_results_key = experiment_results_keys[-1]  #'results_2021-02-09T18:46:33.073363' # Or select last result: experiment_results_keys[-1]
experiment_timestamp = experiment_results_key.strip('results_')
exceptions_key = 'exceptions_' + experiment_timestamp
experiment_timestamp

df_raw = pd.read_hdf(experiment_results, experiment_results_key)
df_raw.tail()
```

## Post process

```
from experiments.system_model_v3.post_process import post_process_results
from experiments.system_model_v3.Query_2 import params, SIMULATION_TIMESTEPS

params

# add swept parameters as a column
df = post_process_results(df_raw, params, set_params=['rescale_target_price','arbitrageur_considers_liquidation_ratio','liquidation_ratio'])
df
```

# Simulation Analysis

- **If market price does not decline when arbitrageur considers liquidation ratio is ON and target price rescaled is ON**:
    - Implication from Reflexer: the negative $K_i$ term is causing the market price decline; proceed to Query 3

```
df.substep.unique()
df.subset.unique()
df.columns
```

### Generate key plots for analysis

```
df_true_true = df.query('rescale_target_price==True and 
arbitrageur_considers_liquidation_ratio == True') sns.lineplot(data=df_true_true,x="timestamp", y="eth_price",label='Generated Eth price') plt.ylim(250, 600) ax2 = plt.twinx() sns.lineplot(data=df_true_true,x="timestamp", y="market_price",ax=ax2,color='r',label='Market Price in Rai') sns.lineplot(data=df_true_true,x="timestamp", y="target_price_scaled",ax=ax2,color='g',label='Redemption Price in Rai') plt.title('Liquidation ratio is ON and target price rescaled is ON') plt.legend(loc="upper left") plt.show() sns.lineplot(data=df_true_true,x="timestamp", y="ETH_balance",color='b',label='ETH Balance') plt.ylim(10000, 130000) ax2 = plt.twinx() sns.lineplot(data=df_true_true,x="timestamp", y="RAI_balance",ax=ax2,color='r',label='Rai Balance in Rai') sns.lineplot(data=df_true_true,x="timestamp", y="principal_debt",ax=ax2,color='g',label='CDP total debt (RAI)') plt.title('Liquidation ratio is ON and target price rescaled is ON') plt.legend(loc="upper left") df_false_false = df.query('rescale_target_price==False and arbitrageur_considers_liquidation_ratio == False') sns.lineplot(data=df_false_false,x="timestamp", y="eth_price",label='Generated Eth price') plt.ylim(250, 600) ax2 = plt.twinx() sns.lineplot(data=df_false_false,x="timestamp", y="market_price",ax=ax2,color='r',label='Market Price in Rai') sns.lineplot(data=df_false_false,x="timestamp", y="target_price_scaled",ax=ax2,color='g',label='Redemption Price in Rai') plt.title('Liquidation ratio is OFF and target price rescaled is OFF') plt.legend(loc="upper left") plt.show() ``` ## Conclusion Market price does decline when arbitrageur considers liquidation ratio is ON and target price rescaled is ON with $K_i = 0$.
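To put a number on "does market price decline", one option is a simple summary statistic comparing market price to the rescaled redemption price over the run. This is an illustrative sketch, assuming the `df_true_true` dataframe and the `market_price` and `target_price_scaled` columns used in the plots above:

```
# Relative gap between market price and rescaled redemption price over the run
gap = (df_true_true['market_price'] - df_true_true['target_price_scaled']) / df_true_true['target_price_scaled']

print('Mean relative gap:      ', gap.mean())
print('Mean absolute rel. gap: ', gap.abs().mean())
print('Net market price change:', df_true_true['market_price'].iloc[-1] - df_true_true['market_price'].iloc[0])
```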
Copyright 2020 Vasile Rus, Andrew M. Olney and made available under [CC BY-SA](https://creativecommons.org/licenses/by-sa/4.0) for text and [Apache-2.0](http://www.apache.org/licenses/LICENSE-2.0) for code. # Logistic Regression So far, we have looked at two broad kinds of supervised learning, classification and regression. Classification predicts a class label for an observation (i.e., a row of the dataframe) and regression predicts a numeric value for an observation. Logistic regression is a kind of regression that is primarily used for classification, particularly binary classification. It does this by predicting the **probability** (technically the log-odds) of the positive class assigned label `1`. If the probability is above a threshold, e.g .50, then this predicted numeric value is interpreted as a classification of `1`. Otherwise, the predicted numeric value is interpreted as a classification of `0`. So **logistic regression predicts a numeric probability that we convert into a classification.** Logistic regression is widely used in data science classification tasks, for example to: * categorize a person as having diabetes or not having diabetes * categorize an incoming email as spam or not spam Because logistic regression is also regression, it captures the relationship between an outcome/dependent variable and the predictor/independent variables in a similar way to linear regression. The major difference is that the coefficients in logistic regression can be interpreted probabilistically, so that we can say how much more likely a predictor variable makes a positive classification. The most common kind of logistic regression is binary logistic regression, but it is possible to have: * Binary/binomial logistic regression * Multiclass/Multinomial logistic regression * Ordinal logistic regression (there is an order among the categories) <!-- NOTE: I think this has been covered already, except maybe the hard/soft distinction --> <!-- **What is classification?** A classification/categorization task is about placing an object, e.g., a patient, into one of many categories, e.g. diseases, based on some characteristics of that object, e.g, patient’s symptoms. Depending on the number of classes/categories, classification tasks are called: * Binary/binomial vs. multi-class/multinomial classification. The simplest form of classification is binary classification, e.g., spam vs. not-spam email. * Multinomial classification. An example is categorizing birds into one of many species. **Hard classification vs. soft classification** * Hard classification: The object is placed into one and only one category out of many * Soft classification: the object is assigned to more than one category with some measure indicating the confidence of that object belonging to those categories In this notebook, we focus on binary, hard classification tasks. --> ## What you will learn In the sections that follow you will learn about logistic regression, an extension of linear regression, and how it can be used for classification. We will study the following: - The math behind logistic regression - Interpreting logistic regression coefficients - Evaluating classification performance ## When to use logistic regression Logistic regression works best when you need a classifier and want to be able to interpret the predictor variables easily, as you can with linear regression. 
Because logistic regression is fundamentally regression, it has the same assumptions of linearity and additivity, which may not be appropriate for some problems. Binary logistic regression is widely used and scales well, but multinomial variants typically begin to have performance problems when the number of classes is large. ## Mathematical Foundations of Logistic Regression for Binary Classification In this section we briefly review the mathematical formulation of logistic regression for binary classification problems. That is, the predicted categories are just two (say, 1 or 0) and each object or instance belongs to one and only one category. Logistic regression expresses the relationship between the output variable, also called the dependent variable, and the predictors, also called independent variables or features, in a similar way to linear regression, with an additional twist. The additional twist is necessary in order to transform the typically continuous value of linear regression into a categorical value (0 or 1). **From Linear Regression to Logistic Regression** Let us first review the basics of linear regression and then discuss how to transform the mathematical formulation of linear regression such that the outcome is categorical. In a typical linear regression equation, the output variable $Y$ is related to $n$ predictor variables $X_j$ ($j=1,n$) using the following linear relation, where the output $Y$ is a linear combination of the predictors $X_j$ with corresponding weights (or coefficients) $\beta_{j}$: $$Y = {\beta}_{0} + \sum \limits _{j=1} ^{n} X_{j}{\beta}_{j}$$ In linear regression, the output $Y$ has continuous values between $-\infty$ and $+\infty$. In order to map such output values to just 0 and 1, we need to apply the sigmoid or logistic function: $$\sigma (t) = \frac{1}{1 + e^{-t}}$$ A graphical representation of the sigmoid or logistic function is shown below (from Wikipedia). The important part is that the output values are in the interval $(0,1)$, which is close to our goal of predicting values of just 1 or 0. <!-- ![Sigmoid Function|200x100,20%](attachment:image.png) --> <img src="attachment:image.png" width="400"> <center><b>Figure 1. The logistic function.</b> Source: <a href="https://commons.wikimedia.org/wiki/File:Logistic-curve.svg">Wikipedia</a></center> When the logistic function is applied to $Y = {\beta}_{0} + \sum \limits _{j=1} ^{n} X_{j}{\beta}_{j}$ from linear regression, we get the following formulation for logistic regression: $$\frac{1}{1 + e^{-({\beta}_{0} + \sum \limits _{j=1} ^{n} X_{j}{\beta}_{j})}}$$ The net effect is that the typical linear regression output values, ranging from $-\infty$ to $+\infty$, are now bound to $(0,1)$, which is typical for probabilities. That is, the above formulation can be interpreted as estimating the probability of instance $X$ (described by all predictors $X_j$) belonging to class 1: $$ P( Y=1 | X ) = \frac{1}{1 + e^{-({\beta}_{0} + \sum \limits _{j=1} ^{p} X_{j}{\beta}_{j})}}$$ The probability of class 0 is then: $$ P( Y=0 | X ) = 1 - P( Y=1 | X ) $$ Values close to 0 are deemed to belong to class 0 and values close to 1 are deemed to belong to class 1, thus resulting in a categorical output, which is what we intend in logistic regression. <!-- NOTE: This is great but I think too intense at this point. The problem is that we haven't done the background in probability it seems to require. -->
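To make this mapping concrete, here is a minimal numerical sketch; the intercept and coefficient values are made up purely for illustration:

```
import numpy as np

def sigmoid(t):
    """Logistic function: squashes any real number into the interval (0, 1)."""
    return 1 / (1 + np.exp(-t))

# Hypothetical fitted model: intercept beta_0 and two coefficients
beta_0 = -1.0
betas = np.array([0.8, -0.5])
x = np.array([2.0, 1.0])           # one observation with two predictor values

linear_part = beta_0 + x @ betas   # same linear combination as linear regression
p_class_1 = sigmoid(linear_part)   # P(Y=1 | X)

print(round(p_class_1, 3))         # ~0.525, so classified as 1 with a 0.5 threshold
```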
<!-- # Supervised Training for Logistic Regression In order to apply logistic regression to a particular problem, e.g., email spam classification, we need to train a logistic regression model using a supervised method, i.e., we need a training dataset consisting of expert-labeled instances of the objects we want to classify and their correct categories as judged by human experts. During training, the best set of predictor variable weights or coefficients $\beta_{j}$ are estimated based on the training data. It is beyond the scope of this notebook to detail the details of the training process. We will just note that the objective is to find the weights that maximize how well the predicted categories match the true, actual/expert-labelled categories for all instances. Mathematically, the goal of the training is to maximize the following expression that captures how well a set of the values of the coefficients $\beta_{j}$, i.e., a logistic model, predicts the actual classes for all training instances T: $$Likelihood(T) = \prod _{j=1} ^{T} P(Y_{j}|X_{j}; \beta_{j})$$ The expression is called the likelihood of the training data $T$ and is defined as the product of the estimated probabilities of each training example $X_j$ given a model defined by the weights/coefficients $\beta_{j}$. For computational reasons (e.g., it is easier to work with sum of small numbers than with products of such small numbers, i.e., probabilities values between 0 and 1), we maximize the log of likelihood: $$Log-Likelihood(T) = log (\prod _{j=1} ^{T} P(Y_{j}|X_{j}; \beta_{j})) = \sum _{j=1} ^{T} P(Y_{j}|X_{j}; \beta_{j})$$ The probability of an instance labeled with class $Y_i$ is in compact form: $$ P(Y_{j}|X_{j}; \beta_{j}) = P(Y_{j}=1|X_{j}; \beta_{j})^{Y_j} (1- P(Y_{j}=1|X_{j}; \beta_{j})^{(1-Y_j)}$$ Based on this expression of the probability of each instance, we can rewrite the log-likelihood as: $$Log-Likelihood(T) = \sum _{j=1} ^{T} {P(Y_{j}=1|X_{j}; \beta_{j})^{Y_j}} + \sum _{j=1} ^{T} {(1- P(Y_{j}=1|X_{j}; \beta_{j})^{(1-Y_j)}}$$ During training this expression is maximized. It can be maximized by minimizing its opposite which we can call the cost function or optimization objective for logistic regression: $$cost-function = - Log-Likelihood(T) = - \sum _{j=1} ^{T} {P(Y_{j}=1|X_{j}; \beta_{j})^{Y_j}} - \sum _{j=1} ^{T} {(1- P(Y_{j}=1|X_{j}; \beta_{j})^{(1-Y_j)}}$$ Finding the weights or coefficients $beta_{j}$ that minimize the cost function can be done using various algorithms such as gradient descent. --> # Interpreting the Coefficients in Logistic Regression One of the best ways to interpret the coefficients in logistic regression is to transform it back into a linear regression whose coefficients are easier to interpret. From the earlier formulation, we know that: $$ Y = P( Y=1 | X ) = \frac{1}{1 + e^{-({\beta}_{0} + \sum \limits _{j=1} ^{p} X_{j}{\beta}_{j})}}$$ Solving for the odds and taking the log, we get: $$ \log \frac{P ( Y=1 | X )}{1- P( Y=1 | X )} = {\beta}_{0} + \sum \limits _{j=1} ^{p} X_{j}{\beta}_{j} $$ On the left-hand side of the above expression we have the log odds, defined as the log of the ratio of the probability of class 1 versus the probability of class 0. Indeed, this expression $\frac{P ( Y=1 | X )}{1- P( Y=1 | X )}$ is the odds because $1- P( Y=1 | X )$ is the probability of class 0, i.e., $P( Y=0 | X )$. Therefore, we conclude that the log odds are a linear combination of the predictor variables weighted by the coefficients $\beta_{j}$ (plus the intercept ${\beta}_{0}$).
Each such coefficient therefore indicates the change in the log odds when the corresponding predictor changes by one unit (in the case of numerical predictors). You may feel more comfortable with probabilities than odds, but you have probably seen odds expressed frequently in the context of sports. Here are some examples: - 1 to 1 means 50% probability of winning - 2 to 1 means 67% probability of winning - 3 to 1 means 75% probability of winning - 4 to 1 means 80% probability of winning Odds are just the probability of success divided by the probability of failure. For example, 75% probability of winning means 25% probability of losing, and $.75/.25=3$, and we say the odds are 3 to 1. Because log odds are not intuitive (for most people), it is common to interpret the coefficients of logistic regression as odds. When a log odds coefficient has been converted to odds (using $e^\beta$), a value of 1.5 means the odds of the positive class are multiplied by 1.5 for each unit increase in the variable. # Performance Evaluation Performance evaluation for logistic regression is the same as for other classification methods. The typical performance metrics for classifiers are accuracy, precision, and recall (also called sensitivity). We previously talked about these, but we did not focus much on precision, so let's clarify that. In some of our previous classification examples, there were only two classes that are equally likely (each is 50% of the data). When classes are equally likely, we say they are **balanced**. If our classifier is correct 60% of the time with two balanced classes, we know it is 10% better than chance. However, sometimes things are very unbalanced. Suppose we're trying to detect a rare disease that occurs once in 10,000 people. In this case, a classifier that always predicts "no disease" will be correct 99.99% of the time. This is because the **true negatives** in the data vastly outnumber the **true positives**. Because the metrics of accuracy and specificity use true negatives, they can be somewhat misleading when classes are imbalanced. In contrast, precision and recall don't use true negatives at all (see the figure below). This makes them behave more consistently in both balanced and imbalanced data. For these reasons, precision, recall, and their combination F1 (also called f-measure) are very popular in machine learning and data science. <!-- ![confusionMatrix-1.png](attachment:confusionMatrix-1.png) --> <div> <img src="attachment:confusionMatrix-1.png" width="700"/> </div> <center><b>Figure 2. A confusion matrix. Note recall is an alternate label for sensitivity. </b> </center> <!-- NOTE: this became redundant with Tasha's KNN classification notebook. I modified to amplify precision, which she did not focus much on. --> <!-- happens, it is easy These are typical derived by compared the predicted output to the golden or actual output/categories in the expert labelled dataset. For a binary classification case, we denote the category 1 as the positive category and category 0 as the negative category. -->
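To get a concrete feel for how these metrics behave on imbalanced data, here is a small sketch with made-up labels (scikit-learn is used here because the example later in this notebook relies on it anyway):

```
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Made-up imbalanced data: 8 negatives and only 2 positives
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]  # one TP, one FP, one FN

print("accuracy: ", accuracy_score(y_true, y_pred))   # 0.8 -- flattered by the 7 true negatives
print("precision:", precision_score(y_true, y_pred))  # 0.5 = TP / (TP + FP)
print("recall:   ", recall_score(y_true, y_pred))     # 0.5 = TP / (TP + FN)
print("f1:       ", f1_score(y_true, y_pred))         # 0.5
```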
<!-- Using this new terminology, When comparing the predicted categories to the actual categories we may end up with the following cases: * True Positives (TP): instances predicted as belonging to the positive category and which in fact do belong to the positive category * True Negatives (TN): instances predicted as belonging to the negative category and which in fact do belong to the negative category * False Positives (FP): instances predicted as belonging to the positive category and which in fact do belong to the negative category * False Negatives (FN): instances predicted as belonging to the negative category and which in fact do belong to the positive category From these categories, we define the following metrics: $Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$ $Precision = \frac{TP}{TP + FP}$ $Recall = \frac{TP}{TP + FN}$ Classfication methods that have a high accuracy are preferred in general although --> In some cases, maximizing precision or recall may be preferred. For instance, a high recall is highly recommended when making a medical diagnosis, since it is preferable to err on the side of mis-diagnosing someone as having cancer rather than missing someone who indeed has cancer, i.e., the method should try not to miss anyone who may indeed have cancer. This idea is sometimes referred to as **cost-sensitive classification**, because there may be an asymmetric cost toward making one kind of mistake vs. another (i.e. FN vs. FP). In general, there is a trade-off between precision and recall: if precision is high then recall tends to be low, and vice versa. Total recall (100% recall) is achievable by always predicting the positive class, i.e., labeling all instances as positive, in which case precision will be very low. In the case of logistic regression, you can imagine that we changed the threshold from .50 to a higher value like .90. This would make many observations previously classified as 1 now classified as 0. The observations still classified as 1 would be very likely to be 1, since we are 90% confident (high precision). However, we would have lost all of the 1s with predicted probabilities between 50% and 90% (low recall). <!-- TODO: we need to normalize coverage of performance metrics across notebooks, particularly for classification --> # Example: Diabetes or no Diabetes The type of dataset and problem is a classic supervised binary classification. Given a number of elements all with certain characteristics (features), we want to build a machine learning model to identify people affected by type 2 diabetes. To solve the problem we will have to analyze the data, do any required transformation and normalization, apply a machine learning algorithm, train a model, check the performance of the trained model and iterate with other algorithms until we find the most performant one for our type of dataset. ## The Pima Indians Dataset The Pima are a group of Native Americans living in Arizona. A genetic predisposition allowed this group to survive normally for years on a diet poor in carbohydrates. In recent years, a sudden shift from traditional agricultural crops to processed foods, together with a decline in physical activity, has given them the highest prevalence of type 2 diabetes, and for this reason they have been the subject of many studies. This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases.
The dataset includes data from 768 women with 8 characteristics, in particular: | Variable | Type | Description | |----------|-------|:--------------------------------------------------------------------------| | pregnant | Ratio | Number of times pregnant | | glucose | Ratio | Plasma glucose concentration at 2 hours in an oral glucose tolerance test | | bp | Ratio | Diastolic blood pressure (mm Hg) | | skin | Ratio | Triceps skin fold thickness (mm) | | insulin | Ratio | 2-Hour serum insulin (mu U/ml) | | bmi | Ratio | Body mass index (weight in kg/(height in m)^2) | | pedigree | Ratio | Diabetes pedigree function | | age | Ratio | Age (years) | | label | Nominal | Diagnosed with diabetes (0 or 1) | **Source:** This dataset was taken from the UCI Machine Learning Repository. <!-- NOTE: UCI is no longer providing access to the dataset, but without explanation regarding its continued use. Quick searches on Google also do not provide an explanation. TODO: determine if use of the Pima has been disallowed, and if so, replace it in this notebook with another dataset --> ## The problem The type of dataset and problem is a classic supervised binary classification. Given a number of elements all with certain characteristics (features), we want to build a machine learning model to identify people affected by type 2 diabetes. To solve the problem we will have to analyze the data, do any required transformation and normalization, apply a machine learning algorithm, train a model, check the performance of the trained model and iterate with other algorithms until we find the most performant one for our type of dataset. <!--- AO: In the code below, I removed the np and os imports; they did not seem strictly necessary for this problem --> ## Get the data - First import `pandas` as `pd` so we can read the data file into a dataframe <!--- AO: Since this is just openning a file, several options: - Use full path for file in read command (Current choice) - Assemble path from separate strings - Store part of path in variable, then assemble I also added an explanation for why we are defining the col names Had to switch kernel to xpython here b/c Python 3 was not giving intellisense for pd --> Because our data file doesn't have a header (i.e., column names), we need to define these: - Create variable `col_names` - Set it to a list containing: `"pregnant", "glucose", "bp", "skin", "insulin", "bmi", "pedigree", "age", "label"` - Create variable `dataframe` - Set it to `with pd do read_csv` using a list containing - `"datasets/pima-indians-diabetes.csv"` - freestyle `header=None` - freestyle `names=col_names` - `dataframe` (to display) <!-- TODO: I'm not sure if it makes sense to have data cleaning steps here; especially since we have an entire notebook on that --> ## Clean the data As you noticed when displaying the dataframe, something is wrong. Often the first row of a data file will be a **header** row that gives the names of the columns. In comma separated value (csv) format, the header and each following row of data are divided into columns using commas. However, in this case, something different is going on. Let's take a closer look at the first 20 rows: - `with dataframe do head using 20` As you can see, the first 9 rows (rows 0 to 8) contain what we might expect in column headers. Since we manually specified the column names when we loaded the dataframe, these rows are "junk", and we should get rid of them.
One way to do that is to get a sublist of rows from the dataset that excludes them: - Set `dataframe` to `in list dataframe get sublist from #10 to last` - `dataframe` While the dataframe may look OK now, there is a subtle problem. When `pandas` reads data from a file, it uses what it finds in the column to decide what kind of variable that column is. Since the columns originally had some header information in them, `pandas` doesn't think they are numeric. So we need to tell `pandas` to correct it: - `import numpy as np` Convert everything in the dataframe to numeric: - Set `dataframe` to `with dataframe do astype using from np get float32` ## Explore the data ### Descriptive statistics - `with dataframe do describe using` There are some zeros which are really problematic. Having a glucose or blood pressure of 0 is not possible for a living person. Therefore we assume that zero values in all variables except `pregnant` and `label` are actually **missing data**. That means, for example, that a piece of equipment broke during blood pressure measurement, so there was no value. - Create variable `dataframe2` - Set it to `with dataframe do drop using` a list containing - freestyle `columns=["pregnant","label"]` <!-- TODO: similarly question whether missing data should be part of this. Something to check on in future versions --> Now replace all the zeros in the remaining columns with the median in those columns: - `with dataframe2 do replace using` a list containing - `0` - `with dataframe2 do median using` - freestyle `inplace=True` Add the two missing columns back in: - Set `dataframe` to `with dataframe2 do assign using` a list containing - freestyle `pregnant = dataframe["pregnant"]` - freestyle `label = dataframe["label"]` - `dataframe` (to display) ### Correlations One of the most basic ways of exploring the data is to look at correlations. As we previously discussed, correlations show you how a variable is related to another variable. When the correlation is further away from zero, the variables are more strongly related: - Create `corr` and set to `with dataframe do corr` using nothing - Output `corr` This is a correlation matrix. The diagonal is 1.0 because each variable is perfectly correlated with itself. You might also notice that the upper and lower triangular matrices (above/below the diagonal) are mirror images of each other. Sometimes it's easier to interpret a correlation matrix if we plot it in color with a heatmap. First, import `plotly` for plotting: - `import plotly.express as px` To display the correlation matrix as a heatmap: - `with px do imshow using` a list containing - `corr` - A freestyle block **with a notch on the right** containing `x=`, connected to `from corr get columns` - A freestyle block **with a notch on the right** containing `y=`, connected to `from corr get columns` This is the color that represents zero: ![image.png](attachment:image.png) So anything darker is a negative correlation, and anything lighter is a positive one. As you can see, most of the negative correlations are weak and so not very interesting. The most positive correlations are pink-orange at around .55, which is a medium correlation. ### Histograms Another way to try to understand the data is to create histograms of all the variables. As we briefly discussed, a histogram shows you the count (on the y-axis) of data points that fall into a certain range (also called a bin) of the variable.
It can be very tedious to make a separate plot for each variable when you have many variables. The best way is to do it in a loop: - `for each item i in list` `from dataframe get columns` (use the green loop) - Set `fig` to `with px do histogram using` a list containing - `dataframe` - `x=` followed by `i` **Hint**: ![image.png](attachment:image.png) - Empty freestyle followed by `with fig do show using` **Hint**: ![image.png](attachment:image.png) Often we omit `with fig do show using` because Jupyter always displays the last "thing" in a cell. In this case, however, we want to display multiple things using one cell, so we need to explicitly display each one. From these histograms we observe: - Only `glucose`, `bp`, and `bmi` are normal - Everything else has larger mass on the lower end of the scale (i.e. on the left) ## Prepare train/test sets We need to split the dataframe into training data and testing data, and also separate the predictors from the class labels. Let's start by dropping the label: - Create variable `X` - Set it to `with dataframe do drop using` a list containing - freestyle `columns=["label"]` - `X` (to display) Save a dataframe with just `label` in `Y`: - Create variable `Y` - Set it to `dataframe [ ] ` containing the following in a list - `"label"` - `Y` (to display) To split our `X` and `Y` into train and test sets, we need an import: - `import sklearn.model_selection as model_selection` Now do the splits: - Create variable `splits` - Set it to `with model_selection do train_test_split using` a list containing - `X` (the features in an array) - `Y` (the labels in an array) - freestyle `test_size=0.2` (the proportion of the dataset to include in the test split) ## Logistic regression model We need libraries for: - Logistic regression - Performance metrics - Ravel As well as libraries we need to standardize: - Scale - Pipeline So do the following imports: - `import sklearn.linear_model as linear_model` - `import sklearn.metrics as metrics` - ~~`import numpy as np`~~ (already imported above) - `import sklearn.preprocessing as pp` - `import sklearn.pipeline as pipe` We're going to make a pipeline so we can scale and train in one step: - Create variable `std_clf` - Set it to `with pipe do make_pipeline using` a list containing - `with pp create StandardScaler using` - `with linear_model create LogisticRegression using` We can treat the whole pipeline as a classifier and call `fit` on it: - `with std_clf do fit using` a list containing - `in list splits get # 1` (this is Xtrain) - `with np do ravel using` a list containing - `in list splits get # 3` (this is Ytrain) Now we can get predictions from the model for our test data: - Create variable `predictions` - Set it to `with std_clf do predict using` a list containing - `in list splits get # 2` (this is Xtest) - `predictions` (to display) ## Assessing the model To get the accuracy: - `print create text with` - "Accuracy:" - `with metrics do accuracy_score using` a list containing - `in list splits get # 4` (this is `Ytest`) - `predictions` To get precision, recall, and F1: - `print with metrics do classification_report using` a list containing - `in list splits get # 4` (this is `Ytest`) - `predictions` Notice how the recall is much lower for `1` (diabetes), the rare class. Finally, let's create an ROC plot. 
To create the plot, we need predicted probabilities (for class `1`) as well as the ROC metrics using these probabilities and the true class labels: - Create variable `probs` - Set it to `with std_clf do predict_proba using` a list containing - `in list splits get # 2` (this is Xtest) - Create variable `rocMetrics` - Set it to `with metrics do roc_curve using` a list containing - `in list splits get # 4` (this is Ytest) - freestyle `probs[:,1]` (this is the positive class probabilities) - Set `fig` to `with px do line using` a list containing - freestyle `x=rocMetrics[0]` - freestyle `y=rocMetrics[1]` - `with fig do update_yaxes using` a list containing - freestyle `title_text="Recall/True positive rate"` - `with fig do update_xaxes using` a list containing - freestyle `title_text="False positive rate"` <!-- NOTE: Vasile had a nice AUC annotation on the plot, but I ran out of time to reverse engineer that -->
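The instructions above target a block-based environment. For reference, here is a minimal plain-Python sketch of the same workflow with scikit-learn; it assumes the same local file `datasets/pima-indians-diabetes.csv` and column names described above, and the exact scores will vary because the train/test split is random:

```
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, roc_curve
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Load the data, supplying column names because the file has no proper header
col_names = ["pregnant", "glucose", "bp", "skin", "insulin",
             "bmi", "pedigree", "age", "label"]
dataframe = pd.read_csv("datasets/pima-indians-diabetes.csv",
                        header=None, names=col_names)

# Drop the junk header rows (rows 0-8) and convert everything to numeric
dataframe = dataframe.iloc[9:].astype(np.float32)

# Treat zeros in the clinical columns as missing and fill them with the column medians
clinical = dataframe.drop(columns=["pregnant", "label"])
clinical = clinical.replace(0, clinical.median())
dataframe = clinical.assign(pregnant=dataframe["pregnant"], label=dataframe["label"])

# Separate predictors and labels, then hold out 20% for testing
X = dataframe.drop(columns=["label"])
Y = dataframe["label"]
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)

# Scale and fit logistic regression in one pipeline
std_clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
std_clf.fit(X_train, y_train)

# Evaluate: accuracy, precision, recall, and F1, plus the ROC curve inputs
print(classification_report(y_test, std_clf.predict(X_test)))
fpr, tpr, _ = roc_curve(y_test, std_clf.predict_proba(X_test)[:, 1])
```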
# Section 2: Data Wrangling To prepare our data for analysis, we need to perform data wrangling. In this section, we will learn how to clean and reformat data (e.g. renaming columns, fixing data type mismatches), restructure/reshape it, and enrich it (e.g. discretizing columns, calculating aggregations, combining data sources). <div style="text-align: center;"> <img width="300" src="https://images.ctfassets.net/cnu0m8re1exe/I3pMkGv5UGGmNN1tB4XG0/73edfcdbe6e1637e5e79e0c3a2d25fbb/panda.jpg?fm=jpg&fl=progressive&w=660&h=433&fit=fill" alt="Panda with bamboo"> <div><small><em><a href="https://www.discovermagazine.com/planet-earth/how-pandas-survive-on-a-diet-of-only-bamboo">Source</a></em></small></div> </div> ## Learning Path 1. **Data cleaning** 2. Working with the index 3. Reshaping data 4. Aggregations and grouping 5. Time series ## Data cleaning In this section, we will take a look at creating, renaming, and dropping columns; type conversion; and sorting &ndash; all of which make our analysis easier. We will be working with the 2019 Yellow Taxi Trip Data provided by NYC Open Data. ``` import pandas as pd taxis = pd.read_csv('../data/2019_Yellow_Taxi_Trip_Data.csv') taxis.head() ``` *Source: [NYC Open Data](https://data.cityofnewyork.us/Transportation/2019-Yellow-Taxi-Trip-Data/2upf-qytp) collected via [SODA](https://dev.socrata.com/foundry/data.cityofnewyork.us/2upf-qytp).* ### Dropping columns Let's start by dropping the ID columns and the `store_and_fwd_flag` column, which we won't be using. ``` mask = taxis.columns.str.contains('id$|store_and_fwd_flag', regex=True) columns_to_drop = taxis.columns[mask] columns_to_drop taxis = taxis.drop(columns=columns_to_drop) taxis.head() ``` *Tip: Another way to do this is to select the columns we want to keep: `taxis.loc[:,~mask]`.* ### Renaming columns Next, let's rename the datetime columns: ``` taxis.rename( columns={ 'tpep_pickup_datetime': 'pickup', 'tpep_dropoff_datetime': 'dropoff' }, inplace=True ) taxis.head() ``` **Important**: This operation was performed in-place &ndash; be careful with in-place operations. ### Type conversion Notice anything off with the data types? ``` taxis.dtypes ``` Both `pickup` and `dropoff` should be stored as datetimes. Let's fix this: ``` taxis.loc[:, ['pickup', 'dropoff']] = \ taxis.loc[:, ['pickup', 'dropoff']].apply(pd.to_datetime) taxis.dtypes ``` *Tip: There are other ways to perform type conversion. For numeric values, we can use the `pd.to_numeric()` function, and we will see the `astype()` method, which is a more generic method, a little later.* ### Creating new columns Let's calculate the following for each row: 1. elapsed time of the trip 2. the tip percentage 3. the total taxes, tolls, fees, and surcharges 4. the average speed of the taxi ``` taxis = taxis.assign( elapsed_time=lambda x: x.dropoff - x.pickup, # 1 cost_before_tip=lambda x: x.total_amount - x.tip_amount, tip_pct=lambda x: x.tip_amount / x.cost_before_tip, # 2 fees=lambda x: x.cost_before_tip - x.fare_amount, # 3 avg_speed=lambda x: x.trip_distance.div( x.elapsed_time.dt.total_seconds() / 60 / 60 ) # 4 ) ``` Our new columns get added to the right: ``` taxis.head(2) ``` Some things to note: - We used `lambda` functions to 1) avoid typing `taxis` repeatedly and 2) be able to access the `cost_before_tip` column in the same method that we create it. - To create a single new column, we can also use `df['new_col'] = <values>`. 
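As a small aside, the alternatives mentioned in the tips above (`pd.to_numeric()` and direct column assignment) look like this on a toy frame; the toy column names here are made up for illustration:

```
import pandas as pd

toy = pd.DataFrame({'fare': ['7.5', '12.0', '3.25']})  # strings, not numbers

toy['fare'] = pd.to_numeric(toy['fare'])   # convert a single column to numeric
toy['fare_rounded'] = toy['fare'].round()  # the df['new_col'] = <values> form
toy
```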
### Sorting by values

We can use the `sort_values()` method to sort based on any number of columns:

```
taxis.sort_values(['passenger_count', 'pickup'], ascending=[False, True]).head()
```

To pick out the largest/smallest rows, use `nlargest()` / `nsmallest()` instead. Looking at the 3 trips with the longest elapsed time, we see some possible data integrity issues:

```
taxis.nlargest(3, 'elapsed_time')
```

## Learning Path

1. Data cleaning
2. **Working with the index**
3. Reshaping data
4. Aggregations and grouping
5. Time series

## Working with the index

So far, we haven't really worked with the index because it's just been a row number; however, we can change the values we have in the index to access additional features of the `pandas` library.

### Setting and sorting the index

Currently, we have a RangeIndex, but we can switch to a DatetimeIndex by specifying a datetime column when calling `set_index()`:

```
taxis.set_index('pickup', inplace=True)
taxis.head(3)
```

Since we have a sample of the full dataset, let's sort the index to order by pickup time:

```
taxis.sort_index(inplace=True)
```

*Tip: `taxis.sort_index(axis=1)` will sort the columns by name. The `axis` parameter is present throughout the `pandas` library: `axis=0` targets rows and `axis=1` targets columns.*

We can now select ranges from our data based on the datetime the same way we did with row numbers:

```
taxis['2019-10-23 07:45':'2019-10-23 08']
```

When not specifying a range, we use `loc[]`:

```
taxis.loc['2019-10-23 08']
```

### Resetting the index

We will be working with time series later in this section, but sometimes we want to reset our index to row numbers and restore the columns. We can make `pickup` a column again with the `reset_index()` method:

```
taxis.reset_index(inplace=True)
taxis.head()
```

## Learning Path

1. Data cleaning
2. Working with the index
3. **Reshaping data**
4. Aggregations and grouping
5. Time series

## Reshaping data

The taxi dataset we have been working with is in a format conducive to analysis. This isn't always the case. Let's now take a look at the TSA traveler throughput data, which compares 2021 throughput to the same day in 2020 and 2019:

```
tsa = pd.read_csv('../data/tsa_passenger_throughput.csv', parse_dates=['Date'])
tsa.head()
```

*Source: [TSA.gov](https://www.tsa.gov/coronavirus/passenger-throughput)*

First, we will lowercase the column names and take the first word (e.g. `2021` for `2021 Traveler Throughput`) to make this easier to work with:

```
tsa = tsa.rename(columns=lambda x: x.lower().split()[0])
tsa.head()
```

Now, we can work on reshaping it.

### Melting

Melting helps convert our data into long format. Now, we have all the traveler throughput numbers in a single column:

```
tsa_melted = tsa.melt(
    id_vars='date', # uniquely identifies a row
    var_name='year',
    value_name='travelers'
)
tsa_melted.sample(5, random_state=1) # show some random entries
```

To convert this into a time series of traveler throughput, we need to replace the year in the `date` column with the one in the `year` column. Otherwise, we are marking prior years' numbers with the wrong year.
```
tsa_melted = tsa_melted.assign(
    date=lambda x: pd.to_datetime(x.year + x.date.dt.strftime('-%m-%d'))
)
tsa_melted.sample(5, random_state=1)
```

This leaves us with some null values (the dates that haven't yet occurred):

```
tsa_melted.sort_values('date').tail(3)
```

These can be dropped with the `dropna()` method:

```
tsa_melted = tsa_melted.dropna()
tsa_melted.sort_values('date').tail(3)
```

### Pivoting

Using the melted data, we can pivot the data to compare TSA traveler throughput on specific days across years:

```
tsa_pivoted = tsa_melted\
    .query('date.dt.month == 3 and date.dt.day <= 10')\
    .assign(day_in_march=lambda x: x.date.dt.day)\
    .pivot(index='year', columns='day_in_march', values='travelers')
tsa_pivoted
```

**Important**: We aren't covering the `unstack()` and `stack()` methods, which are additional ways to pivot and melt, respectively. These come in handy when we have a multi-level index (e.g. if we ran `set_index()` with more than one column).

### Transposing

The `T` attribute provides a quick way to flip rows and columns.

```
tsa_pivoted.T
```

### Merging

We typically observe changes in air travel around the holidays, so adding information about the dates in the TSA dataset provides more context. The `holidays.csv` file contains a few major holidays in the United States:

```
holidays = pd.read_csv('../data/holidays.csv', parse_dates=True, index_col='date')
holidays.head()
```

Merging the holidays with the TSA traveler throughput data will provide more context for our analysis:

```
tsa_melted_holidays = tsa_melted\
    .merge(holidays, left_on='date', right_index=True, how='left')\
    .sort_values('date')
tsa_melted_holidays.head()
```

*Tip: There are many parameters for this method so be sure to check out the [documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html). To append rows, take a look at `append()` and `pd.concat()`.*

We can take this a step further by marking a few days before and after each holiday as part of the holiday. This would make it easier to compare holiday travel across years and look for any uptick in travel around the holidays:

```
tsa_melted_holiday_travel = tsa_melted_holidays.assign(
    holiday=lambda x: x.holiday\
        .fillna(method='ffill', limit=1)\
        .fillna(method='bfill', limit=2)
)
tsa_melted_holiday_travel.query(
    'year == "2019" and '
    '(holiday == "Thanksgiving" or holiday.str.contains("Christmas"))'
)
```

*Tip: Check out the [documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html) for the full list of functionality available with the `fillna()` method.*

## Learning Path

1. Data cleaning
2. Working with the index
3. Reshaping data
4. **Aggregations and grouping**
5. Time series

## Aggregations and grouping

After reshaping and cleaning our data, we can perform aggregations to summarize it in a variety of ways. In this section, we will explore using pivot tables, crosstabs, and group by operations to aggregate the data.
### Pivot tables

We can build a pivot table to compare holiday travel across the years in our dataset:

```
tsa_melted_holiday_travel.pivot_table(
    index='year', columns='holiday',
    values='travelers', aggfunc='sum'
)
```

We can use the `pct_change()` method on this result to see which holiday travel periods saw the biggest change in travel:

```
tsa_melted_holiday_travel.pivot_table(
    index='year', columns='holiday',
    values='travelers', aggfunc='sum'
).pct_change()
```

Let's make one last pivot table with column and row subtotals along with some formatting improvements. First, we set a display option for all floats:

```
pd.set_option('display.float_format', '{:,.0f}'.format)
```

Next, we group together Christmas Eve and Christmas Day, likewise for New Year's Eve and New Year's Day, and create the pivot table:

```
import numpy as np

tsa_melted_holiday_travel.assign(
    holiday=lambda x: np.where(
        x.holiday.str.contains('Christmas|New Year', regex=True),
        x.holiday.str.replace('Day|Eve', '', regex=True).str.strip(),
        x.holiday
    )
).pivot_table(
    index='year', columns='holiday',
    values='travelers', aggfunc='sum',
    margins=True, margins_name='Total'
)
```

Before moving on, let's reset the display option:

```
pd.reset_option('display.float_format')
```

*Tip: Read more about options in the documentation [here](https://pandas.pydata.org/pandas-docs/stable/user_guide/options.html).*

### Crosstabs

The `pd.crosstab()` function provides an easy way to create a frequency table:

```
pd.crosstab(
    index=pd.cut(
        tsa_melted_holiday_travel.travelers,
        bins=3, labels=['low', 'medium', 'high']
    ),
    columns=tsa_melted_holiday_travel.year,
    rownames=['travel_volume']
)
```

*Tip: The `pd.crosstab()` function supports other aggregations provided you pass in the data to aggregate as `values` and specify the aggregation with `aggfunc`. You can also add subtotals and normalize the data. See the [documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.crosstab.html) for more information.*

### Group by operations

Rather than perform aggregations, like `mean()` or `describe()`, on the full dataset at once, we can perform these calculations per group by first calling `groupby()`:

```
tsa_melted_holiday_travel.groupby('year').describe()
```

Groups can also be used to perform separate calculations per subset of the data. For example, we can find the highest-volume travel day per year using `rank()`:

```
tsa_melted_holiday_travel.assign(
    travel_volume_rank=lambda x: x.groupby('year').travelers.rank(ascending=False)
).sort_values(['travel_volume_rank', 'year']).head(3)
```

The previous two examples called a single method on the grouped data, but using the `agg()` method we can specify any number of them:

```
tsa_melted_holiday_travel.assign(
    holiday_travelers=lambda x: np.where(~x.holiday.isna(), x.travelers, np.nan),
    non_holiday_travelers=lambda x: np.where(x.holiday.isna(), x.travelers, np.nan)
).groupby('year').agg(['mean', 'std'])
```

In addition, we can specify which aggregations to perform on each column:

```
tsa_melted_holiday_travel.assign(
    holiday_travelers=lambda x: np.where(~x.holiday.isna(), x.travelers, np.nan),
    non_holiday_travelers=lambda x: np.where(x.holiday.isna(), x.travelers, np.nan)
).groupby('year').agg({
    'holiday_travelers': ['mean', 'std'],
    'holiday': ['nunique', 'count']
})
```

We are only scratching the surface; some additional functionalities to be aware of include the following:

- We can group by multiple columns &ndash; this creates a hierarchical index.
- Groups can be excluded from calculations with the `filter()` method.
- We can group on content in the index using the `level` parameter, e.g. `groupby(level=0)` or `groupby(level='year')` for a named index level.
- We can group by date ranges if we use a `pd.Grouper()` object.

Be sure to check out the [documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html) for more details.

## Learning Path

1. Data cleaning
2. Working with the index
3. Reshaping data
4. Aggregations and grouping
5. **Time series**

## Time series

When working with time series data, `pandas` provides us with additional functionality to not just compare the observations in our dataset, but to use their relationship in time to analyze the data. In this section, we will see a few such operations for selecting date/time ranges, calculating changes over time, performing window calculations, and resampling the data to different date/time intervals.

### Selecting based on date and time

Let's switch back to the `taxis` dataset, which has timestamps of pickups and dropoffs. First, we will set the `dropoff` column as the index and sort the data:

```
taxis.set_index('dropoff', inplace=True)
taxis.sort_index(inplace=True)
```

We saw earlier that we can slice on the datetimes:

```
taxis['2019-10-24 12':'2019-10-24 13']
```

We can also represent this range with shorthand. Note that we must use `loc[]` here:

```
taxis.loc['2019-10-24 12']
```

However, if we want to look at this time range across days, we need another strategy. We can pull out the dropoffs that happened between a certain time range on *any* day with the `between_time()` method:

```
taxis.between_time('12:00', '13:00')
```

*Tip: The `at_time()` method can be used to extract all entries at a given time (e.g. 12:35:27).*

Finally, `head()` and `tail()` limit us to a number of rows, but we may be interested in rows within the first/last 2 hours (or any other time interval) of the data, in which case, we should use `first()` / `last()`:

```
taxis.first('2H')
```

For the rest of this section, we will be working with the TSA traveler throughput data. Let's start by setting the index to the `date` column:

```
tsa_melted_holiday_travel.set_index('date', inplace=True)
```

### Calculating change over time

```
tsa_melted_holiday_travel.loc['2020'].assign(
    one_day_change=lambda x: x.travelers.diff(),
    seven_day_change=lambda x: x.travelers.diff(7),
).head(10)
```

*Tip: To perform operations other than subtraction, take a look at the `shift()` method. It also makes it possible to perform operations across columns.*

### Resampling

We can use resampling to aggregate time series data to a new frequency:

```
tsa_melted_holiday_travel['2019':'2021-Q1'].resample('Q').agg(['sum', 'mean', 'std'])
```

### Window calculations

Window calculations are similar to group by calculations except the group over which the calculation is performed isn't static &ndash; it can move or expand. Pandas provides functionality for constructing a variety of windows, including moving/rolling windows, expanding windows (e.g. cumulative sum or mean up to the current date in a time series), and exponentially weighted moving windows (to weight closer observations higher than further ones). We will only look at rolling and expanding calculations here.
<div style="text-align: center;">
<img width="300" src="https://i0.wp.com/www.untoldwanderlust.com/wp-content/uploads/2017/07/Panda-rolling-around-in-bamboo-in-Chengdu-China.jpg?w=700&ssl=1" alt="Panda Rolling">
<div><small><em><a href="https://www.untoldwanderlust.com/chengdu-panda-base/">Source</a></em></small></div>
</div>

Performing a window calculation is very similar to a group by calculation: we first define the window, and then we specify the aggregation:

```
tsa_melted_holiday_travel.loc['2020'].assign(
    **{
        '7D MA': lambda x: x.rolling('7D').travelers.mean(),
        'YTD mean': lambda x: x.expanding().travelers.mean()
    }
).head(10)
```

To understand what's happening, it's best to visualize the original data and the result, so here's a sneak peek of plotting with `pandas`:

```
%matplotlib inline

tsa_melted_holiday_travel.loc['2020'].assign(
    **{
        '7D MA': lambda x: x.rolling('7D').travelers.mean(),
        'YTD mean': lambda x: x.expanding().travelers.mean()
    }
).plot(title='2020 TSA Traveler Throughput', ylabel='travelers', alpha=0.8)
```

Other types of windows:

- [exponentially weighted moving](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.ewm.html): use the `ewm()` method
- [custom](https://pandas.pydata.org/docs/user_guide/window.html#window-custom-rolling-window): create a subclass of `pandas.api.indexers.BaseIndexer` or use a pre-built one in `pandas.api.indexers`

## Up Next: Data Visualization

Let's take a 25-minute break for some exercises to check your understanding:

1. Read in the meteorite data from the `Meteorite_Landings.csv` file.
2. Rename the `mass (g)` column to `mass`, and drop all the latitude and longitude columns.
3. Update the `year` column to only contain the year, and create a new column indicating if the year is unknown. Hint: Use `year.str.slice()` to grab a substring.
4. There's a data entry error in the `year` column. Can you find it? (Don't spend too much time on this.)
5. Compare summary statistics of the `mass` column for the meteorites that were found versus observed falling.
6. Create a pivot table that shows both the number of meteorites and the 95th percentile of meteorite mass for those that were found versus observed falling per year from 1990 to 2000 (inclusive).
7. Using the `taxis` data from earlier this section, resample the data to an hourly frequency based on the dropoff time. Calculate the total `trip_distance`, `fare_amount`, `tolls_amount`, and `tip_amount`, then find the 5 hours with the most tips.

### Solutions

##### 1. Read in the meteorite data from the `Meteorite_Landings.csv` file:

```
meteorites = pd.read_csv('../data/Meteorite_Landings.csv')
```

##### 2. Rename the `mass (g)` column to `mass`, and drop all the latitude and longitude columns.

```
meteorites = meteorites.rename(columns={'mass (g)': 'mass'})\
    .drop(columns=meteorites.columns[-3:])
meteorites.head()
```

##### 3. Update the `year` column to only contain the year, and create a new column indicating if the year is unknown.

```
meteorites = meteorites.assign(
    year=lambda x: pd.to_numeric(x.year.str.slice(6, 10)),
    no_year=lambda x: x.year.isna()
)
meteorites.head()
```

##### 4. There's a data entry error in the `year` column. Can you find it?

```
meteorites.query('year > 2021')
```

###### Oops! This meteorite actually was found in 2010 (more information [here](https://www.lpi.usra.edu/meteor/metbull.php?code=57150)).

##### 5. Compare summary statistics of the `mass` column for the meteorites that were found versus observed falling.
```
meteorites.groupby('fall').mass.describe()
```

##### 6. Create a pivot table that shows both the number of meteorites and the 95th percentile of meteorite mass for those that were found versus observed falling per year from 1990 to 2000 (inclusive).

```
meteorites.query('year.between(1990, 2000)').pivot_table(
    index='year', columns='fall', values='mass',
    aggfunc=['count', lambda x: x.quantile(0.95)]
).rename(columns={'<lambda>': '95th percentile'})
```

##### 7. Using the `taxis` data from earlier this section, resample the data to an hourly frequency based on the dropoff time. Calculate the total `trip_distance`, `fare_amount`, `tolls_amount`, and `tip_amount`, then find the 5 hours with the most tips.

```
taxis.resample('1H')[[
    'trip_distance', 'fare_amount', 'tolls_amount', 'tip_amount'
]].sum().nlargest(5, 'tip_amount')
```

## Section 2 Complete 🎉

<img width="300" src="https://www.pngkey.com/png/detail/129-1298535_panda-bear-images-cartoon-cute-panda-bear.png" alt="Panda Bear Images - Cartoon Cute Panda Bear@pngkey.com">
# Sequences

In Python, the sequence types include **strings, lists, tuples, sets, and dictionaries**. These sequences support a number of common operations; the notable exceptions are that **sets and dictionaries do not support indexing, slicing, concatenation (`+`), or repetition (`*`).**

## 1. Built-in functions for sequences

- `list(sub)` converts an iterable into a list.

```
a = list()
print(a)  # []

b = 'I Love LsgoGroup'
b = list(b)
print(b)  # ['I', ' ', 'L', 'o', 'v', 'e', ' ', 'L', 's', 'g', 'o', 'G', 'r', 'o', 'u', 'p']

c = (1, 1, 2, 3, 5, 8)
c = list(c)
print(c)  # [1, 1, 2, 3, 5, 8]
```

- `tuple(sub)` converts an iterable into a tuple.

```
a = tuple()
print(a)  # ()

b = 'I Love LsgoGroup'
b = tuple(b)
print(b)  # ('I', ' ', 'L', 'o', 'v', 'e', ' ', 'L', 's', 'g', 'o', 'G', 'r', 'o', 'u', 'p')

c = [1, 1, 2, 3, 5, 8]
c = tuple(c)
print(c)  # (1, 1, 2, 3, 5, 8)
```

- `str(obj)` converts the object `obj` into a string.

```
a = 123
a = str(a)
print(a)  # 123
```

- `len(s)` returns the length of an object (string, list, tuple, etc.), i.e. its number of elements.
    - `s` -- the object.

```
a = list()
print(len(a))  # 0

b = ('I', ' ', 'L', 'o', 'v', 'e', ' ', 'L', 's', 'g', 'o', 'G', 'r', 'o', 'u', 'p')
print(len(b))  # 16

c = 'I Love LsgoGroup'
print(len(c))  # 16
```

- `max(sub)` returns the maximum of a sequence or of a set of arguments.

```
print(max(1, 2, 3, 4, 5))  # 5
print(max([-8, 99, 3, 7, 83]))  # 99
print(max('IloveLsgoGroup'))  # v
```

- `min(sub)` returns the minimum of a sequence or of a set of arguments.

```
print(min(1, 2, 3, 4, 5))  # 1
print(min([-8, 99, 3, 7, 83]))  # -8
print(min('IloveLsgoGroup'))  # G
```

- `sum(iterable[, start=0])` returns the sum of the items in `iterable` plus the optional `start` value.

```
print(sum([1, 3, 5, 7, 9]))  # 25
print(sum([1, 3, 5, 7, 9], 10))  # 35
print(sum((1, 3, 5, 7, 9)))  # 25
print(sum((1, 3, 5, 7, 9), 20))  # 45
```

- `sorted(iterable, key=None, reverse=False)` sorts any iterable.
    - `iterable` -- the iterable to sort.
    - `key` -- a one-argument function used to extract the comparison key from each element of the iterable.
    - `reverse` -- the sort order: `reverse=True` for descending, `reverse=False` for ascending (the default).
    - Returns a new sorted list.

```
x = [-8, 99, 3, 7, 83]
print(sorted(x))  # [-8, 3, 7, 83, 99]
print(sorted(x, reverse=True))  # [99, 83, 7, 3, -8]

t = ({"age": 20, "name": "a"}, {"age": 25, "name": "b"}, {"age": 10, "name": "c"})
x = sorted(t, key=lambda a: a["age"])
print(x)
# [{'age': 10, 'name': 'c'}, {'age': 20, 'name': 'a'}, {'age': 25, 'name': 'b'}]

y = sorted(t, key=lambda a: a["name"])
print(y)
```

- `reversed(seq)` returns a reversed iterator.
    - `seq` -- the sequence to reverse; it can be a **tuple, string, list, or range**.

```
s = 'lsgogroup'
x = reversed(s)
print(type(x))  # <class 'reversed'>
print(x)  # <reversed object at 0x000002507E8EC2C8>
print(list(x))  # ['p', 'u', 'o', 'r', 'g', 'o', 'g', 's', 'l']

t = ('l', 's', 'g', 'o', 'g', 'r', 'o', 'u', 'p')
print(list(reversed(t)))  # ['p', 'u', 'o', 'r', 'g', 'o', 'g', 's', 'l']

r = range(5, 9)
print(list(reversed(r)))  # [8, 7, 6, 5]

x = [-8, 99, 3, 7, 83]
print(list(reversed(x)))  # [83, 7, 3, 99, -8]
```

- `enumerate(sequence, [start=0])` combines an iterable (such as a list, tuple, or string) into an **indexed sequence that yields both the data and its index**; it is generally used in `for` loops.

```
seasons = ['Spring', 'Summer', 'Fall', 'Winter']
a = list(enumerate(seasons))
print(a)  # [(0, 'Spring'), (1, 'Summer'), (2, 'Fall'), (3, 'Winter')]

b = list(enumerate(seasons, 1))
print(b)  # [(1, 'Spring'), (2, 'Summer'), (3, 'Fall'), (4, 'Winter')]

for i, element in a:
    print('{0},{1}'.format(i, element))
# 0,Spring
# 1,Summer
# 2,Fall
# 3,Winter
```

- `zip(iter1 [, iter2 [...]])`
    - Takes iterables as arguments, packs their corresponding elements into tuples, and returns an object made up of these tuples; this saves a fair amount of memory.
    - We can use `list()` to output it as a list.
    - If the iterables have different numbers of elements, the result is as long as the shortest iterable. The `*` operator can be used to unpack the tuples back into lists.

```
a = [1, 2, 3]
b = [4, 5, 6]
c = [4, 5, 6, 7, 8]

zipped = zip(a, b)
print(zipped)  # <zip object at 0x000000C5D89EDD88>
print(list(zipped))  # [(1, 4), (2, 5), (3, 6)]

zipped = zip(a, c)
print(list(zipped))  # [(1, 4), (2, 5), (3, 6)]

a1, a2 = zip(*zip(a, b))
print(list(a1))  # [1, 2, 3]
print(list(a2))  # [4, 5, 6]

a = [1, 2, 3]
b = [4, 5, 6]
print(*zip(a,b))

a1, a2 = zip(*(zip(a, b)))
print(list(a1), list(a2))
```

## Exercises

1. How do you find the maximum and minimum values of a sequence?

    `max(sub)`, `min(sub)`

2. What is the difference between `sort()` and `sorted()`?

    `sort` is a method of `list` objects, whereas `sorted` can sort any iterable. The list `sort` method sorts the existing list in place and returns `None`, while the built-in `sorted` function returns a new sorted list rather than modifying the original (see the sketch after these exercises).

3. How do you quickly compute the sum of all integers from 1 to 100?

```
sum(range(1, 101))
```

4. Compute the cube root of every element in the list [2, 3, 4, 5].

```
list1 = [2, 3, 4, 5]
list2 = []
for i in list1:
    j = i ** (1 / 3)
    list2.append(j)
print(list2)
```

5. Convert ['x', 'y', 'z'] and [1, 2, 3] into the form [('x', 1), ('y', 2), ('z', 3)].

```
c = ['x', 'y', 'z']
d = [1,2,3]
zipped = zip(c,d)
list(zipped)
```
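To make the `sort()` vs. `sorted()` distinction from exercise 2 concrete, here is a minimal sketch (the example list is illustrative):

```
nums = [3, 1, 2]

# sorted() returns a new list and leaves the original untouched
print(sorted(nums))  # [1, 2, 3]
print(nums)          # [3, 1, 2]

# list.sort() sorts in place and returns None
result = nums.sort()
print(result)  # None
print(nums)    # [1, 2, 3]
```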
# Tensor

[![](https://gitee.com/mindspore/docs/raw/master/resource/_static/logo_source.png)](https://gitee.com/mindspore/docs/blob/master/docs/mindspore/programming_guide/source_zh_cn/tensor.ipynb)&emsp;[![](https://gitee.com/mindspore/docs/raw/master/resource/_static/logo_notebook.png)](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/master/programming_guide/zh_cn/mindspore_tensor.ipynb)&emsp;[![](https://gitee.com/mindspore/docs/raw/master/resource/_static/logo_modelarts.png)](https://authoring-modelarts-cnnorth4.huaweicloud.com/console/lab?share-url-b64=aHR0cHM6Ly9vYnMuZHVhbHN0YWNrLmNuLW5vcnRoLTQubXlodWF3ZWljbG91ZC5jb20vbWluZHNwb3JlLXdlYnNpdGUvbm90ZWJvb2svbW9kZWxhcnRzL3Byb2dyYW1taW5nX2d1aWRlL21pbmRzcG9yZV90ZW5zb3IuaXB5bmI=&imageid=65f636a0-56cf-49df-b941-7d2a07ba8c8c)

## Overview

A tensor (`Tensor`) is the basic data structure used in MindSpore network computation. The data types available in tensors are described in [dtype](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dtype.html).

Tensors of different dimensions represent different kinds of data: a 0-dimensional tensor represents a scalar, a 1-dimensional tensor a vector, a 2-dimensional tensor a matrix, and a 3-dimensional tensor can represent, for example, the three RGB channels of a color image.

## Constructing tensors

When constructing a tensor, you can pass in a `Tensor`, `float`, `int`, `bool`, `tuple`, `list`, or `NumPy.array`; a `tuple` or `list` may only contain `float`, `int`, and `bool` data.

A dtype can be specified when a `Tensor` is initialized. If no dtype is given, an initial value of `int`, `float`, or `bool` produces a 0-dimensional Tensor of type `mindspore.int32`, `mindspore.float32`, or `mindspore.bool_`, respectively. The data type of the 1-dimensional `Tensor` produced from an initial `tuple` or `list` matches the type of the data it contains; if it contains several different types, the priority order `bool` < `int` < `float` applies and the MindSpore data type corresponding to the highest-priority type is chosen. If the initial value is a `Tensor`, the resulting `Tensor` has the same data type; if it is a `NumPy.array`, the resulting `Tensor` uses the corresponding data type.

A code example follows:

```
import numpy as np
from mindspore import Tensor
from mindspore import dtype as mstype

x = Tensor(np.array([[1, 2], [3, 4]]), mstype.int32)
y = Tensor(1.0, mstype.int32)
z = Tensor(2, mstype.int32)
m = Tensor(True, mstype.bool_)
n = Tensor((1, 2, 3), mstype.int16)
p = Tensor([4.0, 5.0, 6.0], mstype.float64)
q = Tensor(p, mstype.float64)

print(x, "\n\n", y, "\n\n", z, "\n\n", m, "\n\n", n, "\n\n", p, "\n\n", q)
```

## Tensor operations, attributes, and methods

### Operations

Tensors support a variety of operators, including arithmetic and logical operations. Some commonly used operators:

* Arithmetic: addition (`+`), subtraction (`-`), multiplication (`*`), division (`/`), modulo (`%`), power (`**`), floor division (`//`)
* Logical: equal (`==`), not equal (`!=`), greater than (`>`), greater than or equal (`>=`), less than (`<`), less than or equal (`<=`)

A code example follows:

```
import numpy as np
from mindspore import Tensor
from mindspore import dtype as mstype

x = Tensor(np.array([1, 2, 3]), mstype.float32)
y = Tensor(np.array([4, 5, 6]), mstype.float32)

output_add = x + y
output_sub = x - y
output_mul = x * y
output_div = y / x
output_mod = x % y
output_pow = x ** 2
output_floordiv = y // x

print("add:", output_add)
print("sub:", output_sub)
print("mul:", output_mul)
print("div:", output_div)
print("mod:", output_mod)
print("pow:", output_pow)
print("floordiv:", output_floordiv)

a = Tensor(np.array([2, 2, 2]), mstype.int32)
b = Tensor(np.array([1, 2, 3]), mstype.int32)

output_eq = a == b
output_ne = a != b
output_gt = a > b
output_gq = a >= b
output_lt = a < b
output_lq = a <= b

print("equal:", output_eq)
print("not equal:", output_ne)
print("greater than:", output_gt)
print("greater or equal:", output_gq)
print("less than:", output_lt)
print("less or equal:", output_lq)
```

### Attributes

Tensor attributes include the shape (`shape`), data type (`dtype`), transposed tensor (`T`), size of a single element (`itemsize`), total number of bytes (`nbytes`), number of dimensions (`ndim`), number of elements (`size`), and per-dimension strides (`strides`).

* Shape: the `Tensor`'s shape, a tuple.
* Data type: the `Tensor`'s dtype, a MindSpore data type.
* Transposed tensor: the transpose of the `Tensor`, itself a `Tensor`.
* Size of a single element: the number of bytes occupied by each element of the `Tensor`, an integer.
* Total number of bytes: the total number of bytes occupied by the `Tensor`, an integer.
* Number of dimensions: the rank of the `Tensor`, i.e. `len(tensor.shape)`, an integer.
* Number of elements: the total number of elements in the `Tensor`, an integer.
* Per-dimension strides: the number of bytes to step through when traversing each dimension of the `Tensor`.

A code example follows:

```
import numpy as np
from mindspore import Tensor
from mindspore import dtype as mstype

x = Tensor(np.array([[1, 2], [3, 4]]), mstype.int32)

x_shape = x.shape
x_dtype = x.dtype
x_transposed = x.T
x_itemsize = x.itemsize
x_nbytes = x.nbytes
x_ndim = x.ndim
x_size = x.size
x_strides = x.strides

print("x_shape:", x_shape)
print("x_dtype:", x_dtype)
print("x_transposed:", x_transposed)
print("x_itemsize:", x_itemsize)
print("x_nbytes:", x_nbytes)
print("x_ndim:", x_ndim)
print("x_size:", x_size)
print("x_strides:", x_strides)
```

### Methods

Tensor methods include `len`, `str`, `repr`, `hash`, `all`, `any`, and `asnumpy`, among others; Tensor methods whose usage mirrors `NumPy`'s `ndarray` are also provided to improve the user experience. For the full list of methods, their usage, and the supported backends, see the [Tensor class method API](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.Tensor). A brief introduction to some of the methods:

- `len()`: returns the length of the tensor.
- `str()`: returns the string representation of the tensor.
- `repr()`: returns the string representation of the tensor for the interpreter to read.
- `hash()`: gets the hash value of the tensor.
- `all(axis, keep_dims)`: reduces over the given dimension with an `and` operation; `axis` is the dimension to reduce and `keep_dims` indicates whether to keep the reduced dimension.
- `any(axis, keep_dims)`: reduces over the given dimension with an `or` operation; the parameters have the same meaning as for `all`.
- `asnumpy()`: converts the `Tensor` to a `NumPy` `array`.
- `sum(axis, dtype, keepdims, initial)`: reduces the tensor by summation over the given dimension; `axis` is the dimension to reduce, `dtype` is the output data type, `keepdims` indicates whether to keep the reduced dimension, and `initial` is the starting value for the sum.

A code example follows:

```
import numpy as np
from mindspore import Tensor
from mindspore import dtype as mstype

t = Tensor(np.array([1, 2, 3]), mstype.int32)

t_len = len(t)
t_str = str(t)
t_repr = repr(t)
t_hash = hash(t)

print("t_len:", t_len)
print("t_str:", t_str)
print("t_repr:", t_repr)
print("t_hash:", t_hash)

x = Tensor(np.array([[True, True], [False, False]]), mstype.bool_)

x_all = x.all()
x_any = x.any()
x_array = x.asnumpy()

print("x_all:", x_all)
print("x_any:", x_any)
print("x_array:", x_array)

import mindspore.numpy as mnp

y = Tensor(np.array([[1., 2.], [3., 4.]]), mstype.float32)

# y.sum() and mindspore.numpy.sum(y) are equivalent methods
y_sum_tensor = y.sum()
y_sum_mnp = mnp.sum(y)

print("y_sum_tensor:", y_sum_tensor)
print("y_sum_mnp:", y_sum_mnp)
```

## Sparse tensors

A sparse tensor is a special kind of tensor in which the vast majority of elements are zero. In some scenarios (such as recommender systems), the data features are sparse, and representing them with ordinary tensors would introduce a lot of unnecessary computation, storage, and communication overhead. In those cases, sparse tensors can be used to represent the data.

The common representation of a sparse tensor is `<indices:Tensor, values:Tensor, dense_shape:Tensor>`, where `indices` holds the positions of the non-zero elements, `values` holds the values of the non-zero elements, and `dense_shape` is the shape of the dense tensor being compressed.

Within this structure, two sparse tensor types are defined: `RowTensor` and `SparseTensor`.

> Sparse tensors are not yet supported in PyNative mode.

### RowTensor

`RowTensor` is used to compress tensors that are sparse along the zeroth dimension. If the shape of the `RowTensor` is `[L0, D1, D2, ..., DN]` and the number of non-zero rows along the zeroth dimension is `D0`, then `L0 >> D0`.

- `indices`: a 1-dimensional integer tensor giving the positions of the non-zero rows in the zeroth dimension of the sparse tensor. Shape: `[D0]`
- `values`: the values of the corresponding non-zero rows. Shape: `[D0, D1, D2, ..., DN]`
- `dense_shape`: the shape of the dense tensor being compressed.

A code example follows:

```
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor, RowTensor

class Net(nn.Cell):
    def __init__(self, dense_shape):
        super(Net, self).__init__()
        self.dense_shape = dense_shape

    def construct(self, indices, values):
        x = RowTensor(indices, values, self.dense_shape)
        return x.values, x.indices, x.dense_shape

indices = Tensor([0])
values = Tensor([[1, 2]], dtype=ms.float32)
out = Net((3, 2))(indices, values)
print(out[0])
print(out[1])
print(out[2])
```

### SparseTensor

`SparseTensor` is used to compress tensors whose non-zero elements are irregularly distributed. If the number of non-zero elements is `N` and the number of dimensions of the dense tensor is `ndims`, then:

- `indices`: a 2-dimensional integer tensor in which each row is the index of a non-zero element. Shape: `[N, ndims]`
- `values`: a 1-dimensional tensor holding the values of the corresponding non-zero elements. Shape: `[N]`
- `dense_shape`: the shape of the dense tensor being compressed.

A code example follows:

```
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor, SparseTensor

class Net(nn.Cell):
    def __init__(self, dense_shape):
        super(Net, self).__init__()
        self.dense_shape = dense_shape

    def construct(self, indices, values):
        x = SparseTensor(indices, values, self.dense_shape)
        return x.values, x.indices, x.dense_shape

indices = Tensor([[0, 1], [1, 2]])
values = Tensor([1, 2], dtype=ms.float32)
out = Net((3, 4))(indices, values)
print(out[0])
print(out[1])
print(out[2])
```
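As a quick recap of the construction rules from the overview above, here is a minimal sketch (my own example, not from the original guide) that checks the default dtypes and the `bool` < `int` < `float` priority for a mixed list:

```
import numpy as np
from mindspore import Tensor

# An int scalar with no dtype argument should default to mindspore.int32.
print(Tensor(5).dtype)

# A list mixing bool, int, and float picks the highest-priority type (float),
# so the resulting Tensor should be mindspore.float32.
print(Tensor([True, 2, 3.0]).dtype)

# A NumPy array keeps its corresponding dtype (float64 here).
print(Tensor(np.array([1.0, 2.0])).dtype)
```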
``` import pandas as pd medicare = pd.read_csv("/netapp2/home/se197/data/CMS/Data/medicare.csv") train_set = medicare[medicare.Hospital != 'BWH'] # MGH validation_set = medicare[medicare.Hospital == 'BWH'] # BWH and Neither import numpy as np fifty_perc_EHR_cont = np.percentile(medicare['Cal_MPEC_R0'],50) train_set_high = train_set[train_set.Cal_MPEC_R0 >= fifty_perc_EHR_cont] train_set_low= train_set[train_set.Cal_MPEC_R0 < fifty_perc_EHR_cont] validation_set_high = validation_set[validation_set.Cal_MPEC_R0 >= fifty_perc_EHR_cont] validation_set_low = validation_set[validation_set.Cal_MPEC_R0 < fifty_perc_EHR_cont] predictor_variable = [ 'Co_CAD_R0', 'Co_Embolism_R0', 'Co_DVT_R0', 'Co_PE_R0', 'Co_AFib_R0', 'Co_Hypertension_R0', 'Co_Hyperlipidemia_R0', 'Co_Atherosclerosis_R0', 'Co_HF_R0', 'Co_HemoStroke_R0', 'Co_IscheStroke_R0', 'Co_OthStroke_R0', 'Co_TIA_R0', 'Co_COPD_R0', 'Co_Asthma_R0', 'Co_Pneumonia_R0', 'Co_Alcoholabuse_R0', 'Co_Drugabuse_R0', 'Co_Epilepsy_R0', 'Co_Cancer_R0', 'Co_MorbidObesity_R0', 'Co_Dementia_R0', 'Co_Depression_R0', 'Co_Bipolar_R0', 'Co_Psychosis_R0', 'Co_Personalitydisorder_R0', 'Co_Adjustmentdisorder_R0', 'Co_Anxiety_R0', 'Co_Generalizedanxiety_R0', 'Co_OldMI_R0', 'Co_AcuteMI_R0', 'Co_PUD_R0', 'Co_UpperGIbleed_R0', 'Co_LowerGIbleed_R0', 'Co_Urogenitalbleed_R0', 'Co_Othbleed_R0', 'Co_PVD_R0', 'Co_LiverDisease_R0', 'Co_MRI_R0', 'Co_ESRD_R0', 'Co_Obesity_R0', 'Co_Sepsis_R0', 'Co_Osteoarthritis_R0', 'Co_RA_R0', 'Co_NeuroPain_R0', 'Co_NeckPain_R0', 'Co_OthArthritis_R0', 'Co_Osteoporosis_R0', 'Co_Fibromyalgia_R0', 'Co_Migraine_R0', 'Co_Headache_R0', 'Co_OthPain_R0', 'Co_GeneralizedPain_R0', 'Co_PainDisorder_R0', 'Co_Falls_R0', 'Co_CoagulationDisorder_R0', 'Co_WhiteBloodCell_R0', 'Co_Parkinson_R0', 'Co_Anemia_R0', 'Co_UrinaryIncontinence_R0', 'Co_DecubitusUlcer_R0', 'Co_Oxygen_R0', 'Co_Mammography_R0', 'Co_PapTest_R0', 'Co_PSATest_R0', 'Co_Colonoscopy_R0', 'Co_FecalOccultTest_R0', 'Co_FluShot_R0', 'Co_PneumococcalVaccine_R0', 'Co_RenalDysfunction_R0', 'Co_Valvular_R0', 'Co_Hosp_Prior30Days_R0', 'Co_RX_Antibiotic_R0', 'Co_RX_Corticosteroid_R0', 'Co_RX_Aspirin_R0', 'Co_RX_Dipyridamole_R0', 'Co_RX_Clopidogrel_R0', 'Co_RX_Prasugrel_R0', 'Co_RX_Cilostazol_R0', 'Co_RX_Ticlopidine_R0', 'Co_RX_Ticagrelor_R0', 'Co_RX_OthAntiplatelet_R0', 'Co_RX_NSAIDs_R0', 'Co_RX_Opioid_R0', 'Co_RX_Antidepressant_R0', 'Co_RX_AAntipsychotic_R0', 'Co_RX_TAntipsychotic_R0', 'Co_RX_Anticonvulsant_R0', 'Co_RX_PPI_R0', 'Co_RX_H2Receptor_R0', 'Co_RX_OthGastro_R0', 'Co_RX_ACE_R0', 'Co_RX_ARB_R0', 'Co_RX_BBlocker_R0', 'Co_RX_CCB_R0', 'Co_RX_Thiazide_R0', 'Co_RX_Loop_R0', 'Co_RX_Potassium_R0', 'Co_RX_Nitrates_R0', 'Co_RX_Aliskiren_R0', 'Co_RX_OthAntihypertensive_R0', 'Co_RX_Antiarrhythmic_R0', 'Co_RX_OthAnticoagulant_R0', 'Co_RX_Insulin_R0', 'Co_RX_Noninsulin_R0', 'Co_RX_Digoxin_R0', 'Co_RX_Statin_R0', 'Co_RX_Lipid_R0', 'Co_RX_Lithium_R0', 'Co_RX_Benzo_R0', 'Co_RX_ZDrugs_R0', 'Co_RX_OthAnxiolytic_R0', 'Co_RX_Barbiturate_R0', 'Co_RX_Dementia_R0', 'Co_RX_Hormone_R0', 'Co_RX_Osteoporosis_R0', 'Co_N_Drugs_R0', 'Co_N_Hosp_R0', 'Co_Total_HospLOS_R0', 'Co_N_MDVisit_R0', 'Co_RX_AnyAspirin_R0', 'Co_RX_AspirinMono_R0', 'Co_RX_ClopidogrelMono_R0', 'Co_RX_AspirinClopidogrel_R0', 'Co_RX_DM_R0', 'Co_RX_Antipsychotic_R0' ] co_train_gpop = train_set[predictor_variable] co_train_high = train_set_high[predictor_variable] co_train_low = train_set_low[predictor_variable] co_validation_gpop = validation_set[predictor_variable] co_validation_high = validation_set_high[predictor_variable] co_validation_low = 
validation_set_low[predictor_variable] len(predictor_variable) out_train_hemorrhage_gpop = train_set['Out_Hemorrhage_RC1'] out_train_hemorrhage_high = train_set_high['Out_Hemorrhage_RC1'] out_train_hemorrhage_low = train_set_low['Out_Hemorrhage_RC1'] out_validation_hemorrhage_gpop = validation_set['Out_Hemorrhage_RC1'] out_validation_hemorrhage_high = validation_set_high['Out_Hemorrhage_RC1'] out_validation_hemorrhage_low = validation_set_low['Out_Hemorrhage_RC1'] def bart(X_train, y_train): from bartpy.sklearnmodel import SklearnModel from sklearn.model_selection import GridSearchCV from bartpy.data import Data from bartpy.sigma import Sigma param_grid = [{ 'n_trees': [10,30,50] }] model = SklearnModel() clf = GridSearchCV(estimator = model, param_grid = param_grid, n_jobs = 10, verbose = True) best_clf = clf.fit(X_train, y_train.to_numpy()) return best_clf """ def scores(X_train,y_train, best_clf): from sklearn.metrics import accuracy_score from sklearn.metrics import f1_score from sklearn.metrics import fbeta_score from sklearn.metrics import roc_auc_score from sklearn.metrics import log_loss import numpy as np pred = np.round(best_clf.predict(X_train)) print(pred) actual = y_train print(accuracy_score(actual,pred)) print(f1_score(actual,pred)) print(fbeta_score(actual,pred, average = 'macro', beta = 2)) print(roc_auc_score(actual, best_clf.predict(X_train))) print(log_loss(actual,pred)) """ def scores(X,y): from sklearn.metrics import accuracy_score from sklearn.metrics import f1_score from sklearn.metrics import fbeta_score from sklearn.metrics import roc_auc_score from sklearn.metrics import log_loss pred = best_clf.predict(X) actual = y print(accuracy_score(actual,np.round(pred))) print(f1_score(actual,np.round(pred))) print(fbeta_score(actual,np.round(pred), average = 'macro', beta = 2)) print(roc_auc_score(actual, best_clf.predict(X))) print(log_loss(actual,best_clf.predict(X))) def cross_val(X,y): from sklearn.model_selection import KFold from sklearn.model_selection import cross_validate from sklearn.metrics import log_loss from sklearn.metrics import roc_auc_score from sklearn.metrics import fbeta_score import sklearn import numpy as np cv = KFold(n_splits=5, random_state=1, shuffle=True) log_loss = [] auc = [] accuracy = [] f1 = [] f2 = [] for train_index, test_index in cv.split(X): X_train, X_test, y_train, y_test = X.iloc[train_index], X.iloc[test_index], y.iloc[train_index], y.iloc[test_index] model = bart(X_train, y_train) prob = model.predict(X_test) # prob is a vector of probabilities pred = np.round(model.predict(X_test)) # pred is the rounded predictions log_loss.append(sklearn.metrics.log_loss(y_test, prob)) auc.append(sklearn.metrics.roc_auc_score(y_test, prob)) accuracy.append(sklearn.metrics.accuracy_score(y_test, pred)) f1.append(sklearn.metrics.f1_score(y_test, pred, average = 'macro')) f2.append(fbeta_score(y_test,pred, average = 'macro', beta = 2)) print(np.mean(accuracy)) print(np.mean(f1)) print(np.mean(f2)) print(np.mean(auc)) print(np.mean(log_loss)) best_clf = bart(co_train_gpop,out_train_hemorrhage_gpop) #cross_val(co_train_gpop,out_train_hemorrhage_gpop) #print() #scores(co_validation_gpop, out_validation_hemorrhage_gpop) scores(co_validation_gpop, out_validation_hemorrhage_gpop) best_clf = bart(co_train_gpop,out_train_hemorrhage_gpop) best_clf.predict(co_train_gpop) scores(co_validation_gpop, out_validation_hemorrhage_gpop) import datetime begin_time = datetime.datetime.now() best_clf = bart(co_train_low,out_train_hemorrhage_low) 
cross_val(co_train_low,out_train_hemorrhage_low) print() scores(co_validation_low,out_validation_hemorrhage_low) print(datetime.datetime.now() - begin_time) import datetime begin_time = datetime.datetime.now() best_clf = bart(co_train_high,out_train_hemorrhage_high) cross_val(co_train_high,out_train_hemorrhage_high) print() scores(co_validation_high,out_validation_hemorrhage_high) print(datetime.datetime.now() - begin_time) ```
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/labmlai/labml/blob/master/samples/wandb/cifar10.ipynb) ## CIFAR-10 Sample This notebook trains a VGG model on CIFAR-10 dataset. [W&B dashboard for a sample run](https://wandb.ai/vpj/labml/runs/f2u6ip41?workspace=user-vpj) [labml.ai monitoring](https://app.labml.ai/run/451082b89e7f11ebbc450242ac1c0002) Install `labml` and `wandb` packages for monitoring and organizing experiments. ``` !pip install labml wandb ``` Imports ``` import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import torch.utils.data from torchvision import datasets, transforms from labml import lab, tracker, experiment, monit, logger from labml.logger import Text ``` VGG model ``` class Net(nn.Module): def __init__(self): super().__init__() layers = [] in_channels = 3 for block in [[64, 64], [128, 128], [256, 256, 256], [512, 512, 512], [512, 512, 512]]: for channels in block: layers += [nn.Conv2d(in_channels, channels, kernel_size=3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(inplace=True)] in_channels = channels layers += [nn.MaxPool2d(kernel_size=2, stride=2)] layers += [nn.AvgPool2d(kernel_size=1, stride=1)] self.layers = nn.Sequential(*layers) self.fc = nn.Linear(512, 10) def forward(self, x): x = self.layers(x) x = x.view(x.shape[0], -1) return self.fc(x) ``` A simple class to create the training and validation data loaders. ``` class DataLoaderFactory: def __init__(self): data_transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) ]) self.dataset = [ datasets.CIFAR10(str(lab.get_data_path()), train=False, download=True, transform=data_transform), datasets.CIFAR10(str(lab.get_data_path()), train=True, download=True, transform=data_transform), ] def __call__(self, train, batch_size): return torch.utils.data.DataLoader(self.dataset[train], batch_size=batch_size, shuffle=True) ``` Model training function for a single epoch. ``` def train(model, optimizer, train_loader, device): model.train() for batch_idx, (data, target) in monit.enum("Train", train_loader): data, target = data.to(device), target.to(device) optimizer.zero_grad() output = model(data) loss = F.cross_entropy(output, target) loss.backward() optimizer.step() tracker.add_global_step(data.shape[0]) tracker.save({'loss.train': loss}) ``` Function to test the model on validation data. ``` def validate(model, valid_loader, device): model.eval() valid_loss = 0 correct = 0 with torch.no_grad(): for data, target in monit.iterate("valid", valid_loader): data, target = data.to(device), target.to(device) output = model(data) valid_loss += F.cross_entropy(output, target, reduction='sum').item() pred = output.argmax(dim=1, keepdim=True) correct += pred.eq(target.view_as(pred)).sum().item() valid_loss /= len(valid_loader.dataset) valid_accuracy = 100. 
* correct / len(valid_loader.dataset) tracker.save({'loss.valid': valid_loss, 'accuracy.valid': valid_accuracy}) ``` Main function ``` def main(): configs = { 'epochs': 50, 'learning_rate': 2.5e-4, 'device': "cuda:0" if torch.cuda.is_available() else "cpu", 'batch_size': 1024, } device = torch.device(configs['device']) dl_factory = DataLoaderFactory() train_loader = dl_factory(True, configs['batch_size']) valid_loader = dl_factory(False, configs['batch_size']) model = Net().to(device) optimizer = optim.Adam(model.parameters(), lr=configs['learning_rate']) experiment.create(name='cifar10') experiment.configs(configs) experiment.add_pytorch_models(dict(model=model)) with experiment.start(): for _ in monit.loop(range(1, configs['epochs'] + 1)): torch.cuda.empty_cache() train(model, optimizer, train_loader, device) validate(model, valid_loader, device) logger.log() experiment.save_checkpoint() main() ```
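The five max-pool stages in the VGG configuration above halve the spatial resolution each time, so 32×32 CIFAR images reach the final linear layer as 1×1×512 feature maps. A quick shape check (not part of the original notebook, just a sketch using the `Net` class defined above) makes this concrete:

```
model = Net()
dummy = torch.zeros(2, 3, 32, 32)   # a fake batch of two CIFAR-sized images
print(model.layers(dummy).shape)    # expected: torch.Size([2, 512, 1, 1])
print(model(dummy).shape)           # expected: torch.Size([2, 10]) class logits
```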
## Multiple Traveling Salesman and the Problem of routing vehicles

Imagine that instead of one salesman traveling to all the sites, the workload is shared among many salesmen. This generalization of the traveling salesman problem is called the multiple traveling salesman problem, or mTSP. In much of the literature it is studied under the name Vehicle Routing Problem or [VRP](https://en.wikipedia.org/wiki/Vehicle_routing_problem), but the two are equivalent. The problem goes back to the early 1960s, when it was applied to oil delivery problems [1]. This is another NP-hard problem, so for large numbers of locations a solution might take a long time to find. We can solve small instances with PuLP though.

[1] : https://andresjaquep.files.wordpress.com/2008/10/2627477-clasico-dantzig.pdf

```
from pulp import *
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sn
```

### 1. First let's make some fake data

```
#a handful of sites
sites = ['org','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P']
print(len(sites)-1)

#make some positions (so we can plot this)
positions = dict( ( a, (np.random.rand()-.5, np.random.rand()-.5)) for a in sites)
positions['org']=(0,0)

for s in positions:
    p = positions[s]
    plt.plot(p[0],p[1],'o')
    plt.text(p[0]+.01,p[1],s,horizontalalignment='left',verticalalignment='center')

plt.gca().axis('off');

#straight line distance for simplicity
d = lambda p1,p2: np.sqrt( (p1[0]-p2[0])**2+ (p1[1]-p2[1])**2)

#calculate all the pairs
distances=dict( ((s1,s2), d(positions[s1],positions[s2])) for s1 in positions for s2 in positions if s1!=s2)
```

### 2. The model

With a few modifications, the original traveling salesman problem can support multiple salesmen. Each site is still visited exactly once, except the origin, which is now visited multiple times: if we have two salesmen the origin is visited exactly twice, and so on.

For $K$ vehicles or salespeople:

#### Variables:

indicators:
$$x_{i,j} = \begin{cases} 1, & \text{if site i comes exactly before j in the tour} \\ 0, & \text{otherwise} \end{cases} $$

order dummy variables:
$$u_{i} : \text{order site i is visited}$$

#### Minimize:

$$\sum_{i \neq j} x_{i,j} \, Distance(i,j)$$

#### Subject to:

$$\sum_{i \neq j} x_{i,j} = 1 \space \forall j \text{ except the origin}$$

$$\sum_{i \neq \text{origin}} x_{i,\text{origin}} = K$$

$$\sum_{j \neq i} x_{i,j} = 1 \space \forall i \text{ except the origin}$$

$$\sum_{j \neq \text{origin}} x_{\text{origin},j} = K$$

$$u_{i}-u_{j} \leq N(1-x_{i,j}) - 1 \space \forall i,j \text{ except the origin, where } N = |\text{sites}| / K$$

```
K = 4 #the number of sales people

#create the problem
prob=LpProblem("vehicle",LpMinimize)

#indicator variable if site i is connected to site j in the tour
x = LpVariable.dicts('x',distances, 0,1,LpBinary)

#dummy vars to eliminate subtours
u = LpVariable.dicts('u', sites, 0, len(sites)-1, LpInteger)

#the objective
cost = lpSum([x[(i,j)]*distances[(i,j)] for (i,j) in distances])
prob+=cost

#constraints
for k in sites:
    cap = 1 if k != 'org' else K
    #inbound connection
    prob+= lpSum([ x[(i,k)] for i in sites if (i,k) in x]) ==cap
    #outbound connection
    prob+=lpSum([ x[(k,i)] for i in sites if (k,i) in x]) ==cap

#subtour elimination
N=len(sites)/K
for i in sites:
    for j in sites:
        if i != j and (i != 'org' and j!= 'org') and (i,j) in x:
            prob += u[i] - u[j] <= (N)*(1-x[(i,j)]) - 1
```

### Solve it!
``` %time prob.solve() #prob.solve(GLPK_CMD(options=['--simplex'])) print(LpStatus[prob.status]) ``` And the result: ``` non_zero_edges = [ e for e in x if value(x[e]) != 0 ] def get_next_site(parent): '''helper function to get the next edge''' edges = [e for e in non_zero_edges if e[0]==parent] for e in edges: non_zero_edges.remove(e) return edges tours = get_next_site('org') tours = [ [e] for e in tours ] for t in tours: while t[-1][1] !='org': t.append(get_next_site(t[-1][1])[-1]) ``` The optimal tours: ``` for t in tours: print(' -> '.join([ a for a,b in t]+['org'])) #draw the tours colors = [np.random.rand(3) for i in range(len(tours))] for t,c in zip(tours,colors): for a,b in t: p1,p2 = positions[a], positions[b] plt.plot([p1[0],p2[0]],[p1[1],p2[1]], color=c) #draw the map again for s in positions: p = positions[s] plt.plot(p[0],p[1],'o') plt.text(p[0]+.01,p[1],s,horizontalalignment='left',verticalalignment='center') plt.gca().axis('off'); print(value(prob.objective)) ``` ### Questions: 1. If we wanted the vehicles/salesman to start and end in different locations, how would we modify this? 2. How can we limit the miles/km each vehicle/salesman drives? 3. How can we incorporate some sort of site priority? 4. How can we incorporate ordering? ie site A must be visited before site B 5. How can we modify the sites per person to be flexible, ie maybe each person must visist between 3 and 6 sites a day
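Before digging into those questions, a quick sanity check (not part of the original notebook) on the routes extracted above, assuming the solver reported an optimal solution: there should be exactly `K` routes, each starting and ending at `'org'`, and together they should visit every other site exactly once.

```
assert len(tours) == K
for t in tours:
    assert t[0][0] == 'org' and t[-1][1] == 'org'

# every non-origin site should appear as the tail of exactly one chosen edge
visited = [a for t in tours for a, b in t if a != 'org']
assert sorted(visited) == sorted(s for s in sites if s != 'org')
print('all', len(tours), 'routes are valid')
```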
# A Note on this Notebook

This can be run as either a python/ipython script or as a notebook. It generates a firedrake `DumbCheckpoint` file called `true-fields.h5` containing the values of $u_{true}$ and $q_{true}$ in `Function`s named `u_true` and `q_true` respectively. The investigation continues in another notebook which uses these fields.

# Problem Description

We want to find out how the solution of our inverse problem converges as we increase the number of points for both the new and traditional methods of data interpolation. If we have what is known as **"posterior consistency"** then we expect that the error in our solution, when compared to the true solution, will always decrease as we increase the number of points we are assimilating.

## Posterior Consistency

From a Bayesian point of view, the regularisation we choose and the weighting we give it encode information about our assumed prior probability distribution of $q$ before we start assimilating data (adding observations). Take, for example, the regularisation used in this problem

$$ \frac{\alpha^2}{2}\int_\Omega|\nabla q|^2dx $$

which asserts a prior that the solution $q$ which minimises $J$ should be smooth, and gives a weighting $\alpha$ to that assertion. If we have posterior consistency, the contribution of increasing numbers of measurements $u_{obs}$ should increase the weighting of our data relative to our prior and we should converge towards the true solution.

## Hypothesis

Our two methods minimise two different functionals. The first minimises $J$

$$J[u, q] = \underbrace{\frac{1}{2}\int_{\Omega_v}\left(\frac{u_{obs} - I(u, \text{P0DG}(\Omega_v))}{\sigma}\right)^2dx}_{\text{model-data misfit}} + \underbrace{\frac{\alpha^2}{2}\int_\Omega|\nabla q|^2dx}_{\text{regularization}}$$

whilst the second minimises $J''$

$$J''[u, q] = \underbrace{\frac{1}{2}\int_{\Omega}\left(\frac{u_{interpolated} - u}{\hat{\sigma}}\right)^2dx}_{\text{model-data misfit}} + \underbrace{\frac{\alpha^2}{2}\int_\Omega|\nabla q|^2dx}_{\text{regularization}}.$$

**where $\hat{\sigma}$ is an appropriate value for the given regularisation weight $\alpha$ found with an l-curve analysis.**

As set up here, increasing the number of points to assimilate increases the size of the misfit term in $J$ (with each measurement weighted by $1/\sigma$), so we expect to converge to $q_{true}$ as the number of measurements increases. As we increase the number of measurements in $J''$ we hope that our calculated $u_{interpolated}$ approaches $u$ (and therefore minimises the misfit). There is, however, no mechanism to cause the misfit term to grow relative to the regularisation term. We therefore predict that minimising $J$ will display posterior consistency and that minimising the various $J''$ for each $u_{interpolated}$ will not.

## Hypothesis Amendment! A note on finite element method error

Note that our solutions all exist in finite element spaces, which are usually approximations of a true solution with some error that (hopefully) decreases as mesh density and solution space order increase. Since I am comparing to a solution $u_{true}$ in CG2 space I expect, at best, that we will converge to $u_{true}$ when we have, on average, enough points per cell to fully specify the Lagrange polynomials in that cell. Were we in CG1 this would be 3 points per cell (I can't remember how many we would need for CG2!) to give convergence if those measurements had no noise.
Since our measurements are noisy I do not expect actual convergence, but I anticipate some slowing in convergence. # Setup ``` from scipy.interpolate import ( LinearNDInterpolator, NearestNDInterpolator, CloughTocher2DInterpolator, Rbf, ) import matplotlib.pyplot as plt import firedrake import firedrake_adjoint from firedrake import Constant, cos, sin import numpy as np from numpy import pi as π from numpy import random import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable import os currentdir = os.path.dirname(os.path.realpath('__file__')) mesh = firedrake.UnitSquareMesh(32, 32) # Solution Space V = firedrake.FunctionSpace(mesh, family='CG', degree=2) # q (Control) Space Q = firedrake.FunctionSpace(mesh, family='CG', degree=2) ``` ## Fake $q_{true}$ ``` seed = 1729 generator = random.default_rng(seed) degree = 5 x = firedrake.SpatialCoordinate(mesh) q_true = firedrake.Function(Q, name='q_true') for k in range(degree): for l in range(int(np.sqrt(degree**2 - k**2))): Z = np.sqrt(1 + k**2 + l**2) ϕ = 2 * π * (k * x[0] + l * x[1]) A_kl = generator.standard_normal() / Z B_kl = generator.standard_normal() / Z expr = Constant(A_kl) * cos(ϕ) + Constant(B_kl) * sin(ϕ) mode = firedrake.interpolate(expr, Q) q_true += mode print('Made fake q_true') ``` ## Fake $u_{true}$ ``` from firedrake import exp, inner, grad, dx u_true = firedrake.Function(V, name='u_true') v = firedrake.TestFunction(V) f = Constant(1.0) k0 = Constant(0.5) bc = firedrake.DirichletBC(V, 0, 'on_boundary') F = (k0 * exp(q_true) * inner(grad(u_true), grad(v)) - f * v) * dx firedrake.solve(F == 0, u_true, bc) print('Made fake u_true') # Clear tape since don't need to have taped above tape = firedrake_adjoint.get_working_tape() tape.clear_tape() ``` ## Data Output We save our fields to a firedrake checkpoint file. ``` filename = os.path.join(currentdir, 'true-fields') with firedrake.DumbCheckpoint(filename, mode=firedrake.FILE_CREATE) as chk: chk.store(q_true) chk.store(u_true) ``` Make sure they have saved... ``` with firedrake.DumbCheckpoint(filename, mode=firedrake.FILE_READ) as chk: chk.load(q_true, name='q_true') chk.load(u_true, name='u_true') fig, axes = plt.subplots() axes.set_aspect('equal') colors = firedrake.tripcolor(q_true, axes=axes, shading='gouraud') fig.colorbar(colors); fig, axes = plt.subplots() axes.set_aspect('equal') colors = firedrake.tripcolor(u_true, axes=axes, shading='gouraud') fig.colorbar(colors); ```
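As a small illustration of the regularisation term $\frac{\alpha^2}{2}\int_\Omega|\nabla q|^2dx$ discussed above, it can be assembled directly for the `q_true` field we just saved. This is a sketch only, not part of the original notebook, and the value of `alpha` here is an assumption rather than the weight used in the actual inversion:

```
alpha = Constant(0.1)  # illustrative regularisation weight, chosen for this sketch
regularisation = firedrake.assemble(0.5 * alpha**2 * inner(grad(q_true), grad(q_true)) * dx)
print('Regularisation term for q_true:', regularisation)
```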
# MaterialsCoord benchmarking – timing analysis Calculate the time taken to perform the MaterialsCoord benchmark and use the scores to plot a Pareto front. *Written using:* - MaterialsCoord==0.2.0 *Authors: Alex Ganose (03/30/20)* --- First, lets initialize the near neighbor methods we are interested in. ``` from pymatgen.analysis.local_env import BrunnerNN_reciprocal, EconNN, JmolNN, \ MinimumDistanceNN, MinimumOKeeffeNN, MinimumVIRENN, \ VoronoiNN, CrystalNN nn_methods = [ BrunnerNN_reciprocal(), EconNN(tol=0.5), JmolNN(), CrystalNN(), VoronoiNN(tol=0.5), MinimumDistanceNN(), MinimumOKeeffeNN(), MinimumVIRENN() ] ``` Next, import the benchmark and choose which structures we are interested in. ``` from materialscoord.core import Benchmark structure_groups = ["common_binaries", "elemental", "A2BX4", "ABX3", "ABX4"] bm = Benchmark.from_structure_group(structure_groups) ``` Calculate the timings for each algorithm by calculating the bonded structures for all structures in the benchmark. ``` import time timings = [] for nn_method in nn_methods: t0 = time.perf_counter() for structure in bm.structures.values(): nn_method.get_bonded_structure(structure) timings.append(time.perf_counter() - t0) ``` Calculate the benchmark scores for each algorithm. ``` df = bm.score(nn_methods, site_type="all") scores = df.loc["Total"].values ``` Define a function to calculate the Pareto front. ``` def pareto_frontier(x, y, maximize_x=False, maximize_y=False): sort = sorted([[x[i], y[i]] for i in range(len(x))], reverse=maximize_x) # Start the Pareto frontier with the first value in the sorted list p_front = [sort[0]] # Loop through the sorted list for pair in sort[1:]: if maximize_y: if pair[1] >= p_front[-1][1]: # Look for higher values of Y… p_front.append(pair) # … and add them to the Pareto frontier else: if pair[1] <= p_front[-1][1]: # Look for lower values of Y… p_front.append(pair) # … and add them to the Pareto frontier # Turn resulting pairs back into a list of Xs and Ys p_front_x = [pair[0] for pair in p_front] p_front_y = [pair[1] for pair in p_front] return p_front_x, p_front_y ``` Finally, plot the results. 
``` %matplotlib inline import matplotlib.pyplot as plt import os import numpy as np p_front = pareto_frontier(scores, timings) # define matplotlib style settings style = { "font.sans-serif": ["Helvetica", "Arial"], "axes.labelsize": 16, "xtick.labelsize": 16, "ytick.labelsize": 16, "xtick.direction": "in", "ytick.direction": "in", "xtick.major.size": 8, "xtick.minor.size": 4, "ytick.major.size": 8, "ytick.minor.size": 4, "lines.linewidth": 2.5, "lines.markersize": 10, "axes.linewidth": 1.2, "xtick.major.width": 1.2, "xtick.minor.width": 1.2, "ytick.major.width": 1.2, "ytick.minor.width": 1.2, "pdf.fonttype": 42 } # use these shifts to fine tune the positions of the labels shifts = { "BrunnerNN_reciprocal": {"x": 4, "y": 0}, "EconNN": {"x": 3, "y": 1}, "JmolNN": {"x": -14, "y": 1.2}, "CrystalNN": {"x": 4, "y": 1}, "VoronoiNN": {"x": -8, "y": -70}, "MinimumDistanceNN": {"x": -10, "y": -2.1}, "MinimumOKeeffeNN": {"x": -20, "y": 4}, "MinimumVIRENN": {"x": 5, "y": -10}, } highlight = "#003366" plt.style.use(style) fig = plt.figure(figsize=(6, 6)) ax = plt.gca() nn_method_mapping = {"BrunnerNN_reciprocal": "BrunnerNN"} for nn_method, (x, y) in zip(nn_methods, zip(scores, timings)): algo = nn_method.__class__.__name__ xshift = shifts[algo]["x"] yshift = shifts[algo]["y"] algo = nn_method_mapping.get(algo, algo) # color the points depending on whether on_front = x in p_front[0] and y in p_front[1] color = highlight if on_front else "gray" ax.scatter(x, y / len(bm.structures), c=color) plt.text(x + xshift, (y + yshift) / len(bm.structures), algo, fontsize=14, color=color) # plot the Pareto front line ax.plot(p_front[0], np.array(p_front[1]) / len(bm.structures), zorder=-1, color=highlight) ax.set_ylim((3/ len(bm.structures), 4e2/ len(bm.structures))) ax.set(xlabel="Benchmark score", ylabel="Runtime per structure (s)") ax.semilogy() plt.savefig(os.path.join("plots", "timing.pdf"), bbox_inches="tight") plt.show() ```
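As a quick illustration of the `pareto_frontier` helper (a toy example, not part of the original notebook): with the default settings both axes are minimised, so only points that are not beaten on both score and runtime survive.

```
xs = [1, 2, 3, 4]
ys = [4, 3, 5, 1]
print(pareto_frontier(xs, ys))  # expected: ([1, 2, 4], [4, 3, 1])
```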
# Sentiment Analysis with an RNN

In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the *sequence* of words. Here we'll use a dataset of movie reviews, accompanied by labels.

The architecture for this network is shown below.

<img src="assets/network_diagram.png" width=400px>

Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.

From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.

We don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.

```
import numpy as np
import tensorflow as tf

with open('../sentiment-network/reviews.txt', 'r') as f:
    reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
    labels = f.read()

reviews[:2000]
```

## Data preprocessing

The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.

You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines `\n`. To deal with those, I'm going to split the text into each review using `\n` as the delimiter. Then I can combine all the reviews back together into one big string.

First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.

```
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')

all_text = ' '.join(reviews)
words = all_text.split()

all_text[:2000]
words[:100]
```

### Encoding the words

The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.

> **Exercise:** Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers **start at 1, not 0**.
> Also, convert the reviews to integers and store the reviews in a new list called `reviews_ints`.
```
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}

reviews_ints = []
for each in reviews:
    reviews_ints.append([vocab_to_int[word] for word in each.split()])
```

### Encoding the labels

Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.

> **Exercise:** Convert labels from `positive` and `negative` to 1 and 0, respectively.

```
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])

review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
```

Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.

> **Exercise:** First, remove the review with zero length from the `reviews_ints` list.

```
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
len(non_zero_idx)
reviews_ints[-1]
```

Turns out it's the final review that has zero length. But that might not always be the case, so let's make it more general.

```
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
```

> **Exercise:** Now, create an array `features` that contains the data we'll pass to the network. The data should come from `reviews_ints`, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is `['best', 'movie', 'ever']`, `[117, 18, 128]` as integers, the row will look like `[0, 0, 0, ..., 0, 117, 18, 128]`. For reviews longer than 200, use only the first 200 words as the feature vector. This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.

```
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
    features[i, -len(row):] = np.array(row)[:seq_len]

features[:10,:100]
```

## Training, Validation, Test

With our data in nice shape, we'll split it into training, validation, and test sets.

> **Exercise:** Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, `train_x` and `train_y` for example. Define a split fraction, `split_frac` as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
``` split_frac = 0.8 split_idx = int(len(features)*0.8) train_x, val_x = features[:split_idx], features[split_idx:] train_y, val_y = labels[:split_idx], labels[split_idx:] test_idx = int(len(val_x)*0.5) val_x, test_x = val_x[:test_idx], val_x[test_idx:] val_y, test_y = val_y[:test_idx], val_y[test_idx:] print("\t\t\tFeature Shapes:") print("Train set: \t\t{}".format(train_x.shape), "\nValidation set: \t{}".format(val_x.shape), "\nTest set: \t\t{}".format(test_x.shape)) ``` With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like: ``` Feature Shapes: Train set: (20000, 200) Validation set: (2500, 200) Test set: (2500, 200) ``` ## Build the graph Here, we'll build the graph. First up, defining the hyperparameters. * `lstm_size`: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc. * `lstm_layers`: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting. * `batch_size`: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory. * `learning_rate`: Learning rate ``` lstm_size = 256 lstm_layers = 1 batch_size = 500 learning_rate = 0.001 ``` For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be `batch_size` vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability. > **Exercise:** Create the `inputs_`, `labels_`, and drop out `keep_prob` placeholders using `tf.placeholder`. `labels_` needs to be two-dimensional to work with some functions later. Since `keep_prob` is a scalar (a 0-dimensional tensor), you shouldn't provide a size to `tf.placeholder`. ``` n_words = len(vocab_to_int) + 1 # Adding 1 because we use 0's for padding, dictionary started at 1 # Create the graph object graph = tf.Graph() # Add nodes to the graph with graph.as_default(): inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs') labels_ = tf.placeholder(tf.int32, [None, None], name='labels') keep_prob = tf.placeholder(tf.float32, name='keep_prob') ``` ### Embedding Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights. > **Exercise:** Create the embedding lookup matrix as a `tf.Variable`. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with [`tf.nn.embedding_lookup`](https://www.tensorflow.org/api_docs/python/tf/nn/embedding_lookup). This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer as 200 units, the function will return a tensor with size [batch_size, 200]. 
``` # Size of the embedding vectors (number of units in the embedding layer) embed_size = 300 with graph.as_default(): embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1)) embed = tf.nn.embedding_lookup(embedding, inputs_) ``` ### LSTM cell <img src="assets/network_diagram.png" width=400px> Next, we'll create our LSTM cells to use in the recurrent network ([TensorFlow documentation](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn)). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph. To create a basic LSTM cell for the graph, you'll want to use `tf.contrib.rnn.BasicLSTMCell`. Looking at the function documentation: ``` tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>) ``` you can see it takes a parameter called `num_units`, the number of units in the cell, called `lstm_size` in this code. So then, you can write something like ``` lstm = tf.contrib.rnn.BasicLSTMCell(num_units) ``` to create an LSTM cell with `num_units`. Next, you can add dropout to the cell with `tf.contrib.rnn.DropoutWrapper`. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like ``` drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) ``` Most of the time, you're network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with `tf.contrib.rnn.MultiRNNCell`: ``` cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers) ``` Here, `[drop] * lstm_layers` creates a list of cells (`drop`) that is `lstm_layers` long. The `MultiRNNCell` wrapper builds this into multiple layers of RNN cells, one for each cell in the list. So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an achitectural viewpoint, just a more complicated graph in the cell. > **Exercise:** Below, use `tf.contrib.rnn.BasicLSTMCell` to create an LSTM cell. Then, add drop out to it with `tf.contrib.rnn.DropoutWrapper`. Finally, create multiple LSTM layers with `tf.contrib.rnn.MultiRNNCell`. Here is [a tutorial on building RNNs](https://www.tensorflow.org/tutorials/recurrent) that will help you out. ``` with graph.as_default(): # Your basic LSTM cell lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) # Add dropout to the cell drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) # Stack up multiple LSTM layers, for deep learning cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers) # Getting an initial state of all zeros initial_state = cell.zero_state(batch_size, tf.float32) ``` ### RNN forward pass <img src="assets/network_diagram.png" width=400px> Now we need to actually run the data through the RNN nodes. You can use [`tf.nn.dynamic_rnn`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) to do this. You'd pass in the RNN cell you created (our multiple layered LSTM `cell` for instance), and the inputs to the network. ``` outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state) ``` Above I created an initial state, `initial_state`, to pass to the RNN. 
This is the cell state that is passed between the hidden layers in successive time steps. `tf.nn.dynamic_rnn` takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer. > **Exercise:** Use `tf.nn.dynamic_rnn` to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, `embed`. ``` with graph.as_default(): outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state) ``` ### Output We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with `outputs[:, -1]`, the calculate the cost from that and `labels_`. ``` with graph.as_default(): predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid) cost = tf.losses.mean_squared_error(labels_, predictions) optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost) ``` ### Validation accuracy Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass. ``` with graph.as_default(): correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) ``` ### Batching This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the `x` and `y` arrays and returns slices out of those arrays with size `[batch_size]`. ``` def get_batches(x, y, batch_size=100): n_batches = len(x)//batch_size x, y = x[:n_batches*batch_size], y[:n_batches*batch_size] for ii in range(0, len(x), batch_size): yield x[ii:ii+batch_size], y[ii:ii+batch_size] ``` ## Training Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the `checkpoints` directory exists. ``` epochs = 10 with graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=graph) as sess: sess.run(tf.global_variables_initializer()) iteration = 1 for e in range(epochs): state = sess.run(initial_state) for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1): feed = {inputs_: x, labels_: y[:, None], keep_prob: 0.5, initial_state: state} loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed) if iteration%5==0: print("Epoch: {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Train loss: {:.3f}".format(loss)) if iteration%25==0: val_acc = [] val_state = sess.run(cell.zero_state(batch_size, tf.float32)) for x, y in get_batches(val_x, val_y, batch_size): feed = {inputs_: x, labels_: y[:, None], keep_prob: 1, initial_state: val_state} batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed) val_acc.append(batch_acc) print("Val acc: {:.3f}".format(np.mean(val_acc))) iteration +=1 saver.save(sess, "checkpoints/sentiment.ckpt") ``` ## Testing ``` test_acc = [] with tf.Session(graph=graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) test_state = sess.run(cell.zero_state(batch_size, tf.float32)) for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1): feed = {inputs_: x, labels_: y[:, None], keep_prob: 1, initial_state: test_state} batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed) test_acc.append(batch_acc) print("Test accuracy: {:.3f}".format(np.mean(test_acc))) ```
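To see exactly what `get_batches` does with a partial batch (a toy illustration, not part of the original notebook): with 7 examples and a batch size of 3, the last incomplete batch is dropped and two full batches are yielded.

```
xs = np.arange(7)
ys = np.arange(7) * 10
for bx, by in get_batches(xs, ys, batch_size=3):
    print(bx, by)
# [0 1 2] [ 0 10 20]
# [3 4 5] [30 40 50]
```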
``` import numpy as np import pandas as pd import nflgame np.arange(2009, 2010, 1) games=nflgame.games(year=np.arange(2009,2010,1)) plays=nflgame.combine_plays(games) results=[] for play in plays: if 'punt' in play.desc.lower() and 'no play' not in play.desc.lower(): surr_index = calc_field_pos_score(play) * calc_yds_to_go_multiplier(play) *\ calc_score_multiplier(play) * calc_clock_multiplier(play) results.append((play, round(surr_index, 2))) results.sort(key = lambda x: x[1], reverse=True) df = pd.DataFrame(results, columns=('Play', 'surr_index')) df['Year'] = df['Play'].apply(lambda x: int(x.drive.game.schedule['year'])) df['Week'] = df['Play'].apply(lambda x: int(x.drive.game.schedule['week'])) df['HomeTeam'] = df['Play'].apply(lambda x: x.drive.game.home) df['AwayTeam'] = df['Play'].apply(lambda x: x.drive.game.away) df['Qtr'] = df['Play'].apply(lambda x: x.data['qtr']) df['Time'] = df['Play'].apply(lambda x: x.data['time']) df['PosTeam'] = df['Play'].apply(lambda x: x.data['posteam']) df['YrdLine'] = df['Play'].apply(lambda x: calc_yd_line_int(x)) df=df[['Year', 'Week', 'AwayTeam', 'HomeTeam', 'Qtr', 'Time', 'PosTeam', 'YrdLine', 'Play', 'surr_index']] for play, surr_index in results[:10]: print('{:} Week {:>2}'.format(play.drive.game.schedule['year'], play.drive.game.schedule['week']), '{:>3} @ {:>3}'.format(play.drive.game.away, play.drive.game.home), f" - Q{play.data['qtr']}", play.data['time'], '{:>3}'.format(play.data['posteam']), '{:>7.2f}'.format(surr_index)) print() def calc_field_pos_score(play): yrdln=play.data['yrdln'] yrdlnInt=int(yrdln.split(' ')[-1]) if '50' in play.data['yrdln']: return (1.1) ** 10 elif play.data['posteam'] in play.data['yrdln']: return max(1., (1.1) ** (yrdlnInt - 40)) else: return (1.2) ** (50-yrdlnInt) * (1.1) ** 10 def calc_yds_to_go_multiplier(play): if play.data['ydstogo'] >= 10: return 0.2 elif play.data['ydstogo'] >= 7: return 0.4 elif play.data['ydstogo'] >= 4: return 0.6 elif play.data['ydstogo'] >= 2: return 0.8 else: return 1 def calc_score_multiplier(play): # 1x if winning # 2x if tied # 3x if down by >8pts # 4x if down by <=8pts points_diff = calc_score_diff(play) if points_diff > 0: return 1. elif points_diff == 0: return 2. elif points_diff >=-8: return 4. else: return 3. def calc_clock_multiplier(play): if play.data['qtr'] > 2 and calc_score_diff(play)<=0: num_seconds=calc_sec_since_half(play) return ((num_seconds * 0.001) ** 3) + 1 else: return 1. def process_touchdown(drive): total_points = 6 pat=list(drive.plays)[-1] if 'extra point is GOOD' in pat.desc: total_points += 1 elif 'ATTEMPT SUCCEEDS' in pat.desc: total_points += 2 elif drive.result in ('Fumble', 'Interception'): total_points += 1 if 'TOUCHDOWN NULLIFIED' in pat.desc: total_points = 0 return total_points def calc_score_diff(play): """ Calculates the score difference at the point in the game when the given play occurred. Parameters: play(nflgame.game.Play): The play in question Returns: int: The score differential of the team with possession. 
Positive == winning, negative == losing """ home_team = play.drive.game.home away_team = play.drive.game.away score = {away_team: 0, home_team: 0} drives=[d for d in play.drive.game.drives if d.drive_num < play.drive.drive_num] for drive in drives: if drive.result == 'Field Goal': score[drive.team] += 3 elif 'Safety' in drive.result: if drive.team == home_team: score[away_team] += 2 else: score[home_team] += 2 elif drive.result == 'Touchdown': score[drive.team] += process_touchdown(drive) elif drive.result in ('Fumble', 'Interception') and any([p.touchdown for p in drive.plays]): if drive.team == home_team: score[away_team] += process_touchdown(drive) else: score[home_team] += process_touchdown(drive) points_diff=score[home_team] - score[away_team] if play.data['posteam'] == home_team: return int(points_diff) else: return int(-points_diff) def calc_sec_from_str(time_str: str): """ Calculates the integer number of seconds from a given time string of the format MM:SS """ mn,sc = map(int, time_str.split(':')) return mn * 60 + sc def calc_sec_since_half(play): if play.data['qtr'] <= 2: return 0. if play.drive.game.schedule['year'] >= 2018: ot_len = 10 else: ot_len = 15 if play.drive.game.schedule['season_type'] != 'POST' and play.data['qtr'] == 5: sec_in_qtr = (ot_len * 60) - calc_sec_from_str(play.data['time']) else: sec_in_qtr = (15 * 60) - calc_sec_from_str(play.data['time']) return max(sec_in_qtr + (15 * 60) * (play.data['qtr'] - 3), 0) def calc_yd_line_int(play): """ Calculates the yard line as an integer b/w 0 - 100, where 0 - 50 represents the opponent's side of the field, and 50 - 100 represents the possessing team's side. """ if play.data['yrdln'] == '': return None if play.data['yrdln'] == '50': return 50 side, yrdln = play.data['yrdln'].split(' ') yrdln = int(yrdln) if play.data['posteam'] == side: return yrdln else: return 100 - yrdln import matplotlib.pyplot as plt import matplotlib.ticker as ticker %matplotlib inline import seaborn as sb blue,red,amber=sb.xkcd_palette(['windows blue','pale red','amber']) fig=plt.figure(figsize=(16,6)) subplt=plt.subplot(111) subplt.plot(df['Year'] + (df['Week'] - 8)/40, df['surr_index'], 'ko', ms=3,alpha=0.7) subplt.set_xlabel('Year', fontsize=30) subplt.set_ylabel('Surrender Index', fontsize=30) subplt.set_xticks(np.arange(2009,2018,1)) plt.tick_params(which='both',labelsize=20) plt.ticklabel_format(useOffset=False) subplt.xaxis.set_minor_locator(ticker.FixedLocator(np.arange(2008,2018,1)+0.5)) plt.grid(b=False,which='major', axis='x') plt.grid(b=True,which='minor', axis='x', lw=4) fig=plt.figure(figsize=(16,8.5)) subplt=plt.subplot(111) ys=np.random.rand(len(df)) subplt.plot(df['YrdLine'], ys, 'ko', ms=2,alpha=0.4) subplt.plot(df['YrdLine'].iloc[0:10], ys[0:10], linestyle='None', marker='o', color=red, ms=10) subplt.set_xlim(0,100) subplt.set_xticks(np.arange(0,101,5)) subplt.set_xticklabels([0,'',10,'',20,'',30,'',40,'',50, '',40,'',30,'',20,'',10,'',0]) subplt.xaxis.set_minor_locator(ticker.MultipleLocator(1)) subplt.set_yticks([]) subplt.set_yticklabels([]) plt.tick_params(axis='both', labelsize=20) subplt.grid(which='major', axis='x', color='k', lw=2) subplt.grid(which='minor', axis='x', color='0.1',alpha=0.2) subplt.set_ylabel('Own Endzone',fontsize=30) subplt.set_xlabel('Field Position',fontsize=30) np.arange(0,101,5) def live_callback(active, completed, diffs): for diff in diffs: print(diffs) for play in diff.plays: if 'punt' in play.desc.lower() and 'no play' not in play.desc.lower(): print(play.desc) nflgame.live.run(live_callback, 
active_interval=15, inactive_interval=15, stop=None) print(f"Q{play.data['qtr']}", play.data['time']) print(results[0][0].desc) play.playid scores={} play.drive.game.gamekey scores[play.drive.game.gamekey] = set(play) game = set() game.add(play) ```
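As a worked example of how the four multipliers defined above combine (hypothetical numbers, not a play from the data set): a punt at midfield on 4th-and-2, trailing by 7, with 10:00 left in the 4th quarter.

```
field_pos = 1.1 ** 10                              # ball at the 50
yds_to_go_mult = 0.8                               # 4th-and-2
score_mult = 4.0                                   # trailing by 8 points or fewer
sec_since_half = (15 * 60 - 10 * 60) + 15 * 60     # 1200 s elapsed since halftime
clock_mult = (sec_since_half * 0.001) ** 3 + 1     # ~2.73
print(round(field_pos * yds_to_go_mult * score_mult * clock_mult, 2))  # ~22.64
```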
... ***CURRENTLY UNDER DEVELOPMENT*** ...

## Obtain synthetic DWTs timeseries

Inputs required:

* Historical DWTs
* Historical AWT and IWT
* Synthetic timeseries of AWT and IWT

In this notebook:

* Fit the ALR model of DWTs based on seasonality, and the AWT and IWT timeseries
* Generate *n* simulations of 1000 years of DWTs timeseries

### Workflow:

<div>
<img src="resources/nb01_11.png" width="400px">
</div>

Simulating the sequencing and persistence of synthetic DWTs is accomplished with an autoregressive logistic model (ALR). ALR models are simultaneously able to account for covariates varying at different timescales as well as the autocorrelation of those covariates at different orders (Guanche et al., 2013; Antolinez et al., 2015). In this sense, the AWT, seasonality, IWT, as well as the ordering (transitions between DWTs) and duration (persistence within a DWT) can all be accounted for within a single framework to make a categorical decision of what the weather pattern should be on any given day. Mathematically, the model is represented as:

$$\text{Prob}(Y_t=i\,|\,Y_{t-1},...,Y_{t-e},X_t) = \frac{\exp{\large(}\beta_{0,i} + \beta_{1,i}\cos\omega t + \beta_{2,i}\sin\omega t + \sum\limits_{j=1}^{3}\beta_{j,i}^{awt} APC_j(t) + \sum\limits_{j=1}^{2}\beta_{j,i}^{iwt} IPC_j(t) + \sum\limits_{j=1}^{e}\gamma_{j,i} Y_{t-j}{\large)}}{\sum\limits_{k=1}^{n_{DWT}} \exp{\large(}\beta_{0,k} + \beta_{1,k}\cos\omega t + \beta_{2,k}\sin\omega t + \sum\limits_{j=1}^{3}\beta_{j,k}^{awt} APC_j(t) + \sum\limits_{j=1}^{2}\beta_{j,k}^{iwt} IPC_j(t) + \sum\limits_{j=1}^{e}\gamma_{j,k} Y_{t-j}{\large)}}, \qquad \forall i = 1,...,n_{DWT}$$

where the $\beta_{1,i}$ and $\beta_{2,i}$ covariates account for the seasonal probabilities of each DWT. Covariates $\beta_{j,k}^{awt} APC_j(t)$ account for each weather type's probability associated with the leading three principal components used to create the AWTs, covariates $\beta_{j,k}^{iwt} IPC_j(t)$ account for the leading two principal components of the MJO, $Y_{t-j}$ represents the DWT of the previous $j$ states, $\gamma_{j,i}$ is the parameter associated with the previous $j$-th state, and the order $e$ corresponds to the number of previous states that influence the actual DWT.

Each of these covariates was found to be statistically significant by the likelihood ratio (Guanche et al. 2014), where inclusion of a covariate required an improvement in prediction beyond a penalty associated with the added degrees of freedom. An iterative method began with the best univariate model (seasonality) and added each covariate in a pair-wise fashion to determine the next best model (seasonality + $APC_1$), continuing this process until all covariates were added.

The model performance is evaluated at the end of the notebook by comparing historical and simulated probabilities of occurrence of the 42 DWTs during a perpetual year, the transition probabilities between DWTs, and finally the seasonal and conditional probabilities of occurrence of the DWTs with respect to the AWTs and IWTs.
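As a complement to the equation above, the following is a minimal, schematic sketch of how those probabilities reduce to a softmax over linear predictors built from seasonal harmonics, the AWT and IWT principal components, and the previous DWT states. This is *not* teslakit's implementation: the coefficient values are random placeholders and the covariate handling is simplified.

```
# Schematic ALR probability evaluation for one day (made-up coefficients).
import numpy as np

n_dwt = 42        # number of Daily Weather Types
order = 3         # autoregressive order e
rng = np.random.default_rng(0)

beta0 = rng.normal(size=n_dwt)                   # intercepts
beta_season = rng.normal(size=(2, n_dwt))        # cos/sin annual harmonic terms
beta_awt = rng.normal(size=(3, n_dwt))           # AWT PCs (APC1..APC3)
beta_iwt = rng.normal(size=(2, n_dwt))           # MJO PCs (IPC1..IPC2)
gamma = rng.normal(size=(order, n_dwt, n_dwt))   # previous-state terms

def alr_probs(day_of_year, apc, ipc, prev_dwts):
    """Return P(Y_t = i | covariates, previous DWTs) for i = 1..n_dwt."""
    omega_t = 2 * np.pi * day_of_year / 365.25
    eta = (beta0
           + beta_season[0] * np.cos(omega_t) + beta_season[1] * np.sin(omega_t)
           + apc @ beta_awt       # (3,) @ (3, n_dwt)
           + ipc @ beta_iwt)      # (2,) @ (2, n_dwt)
    for j, y in enumerate(prev_dwts[:order]):   # y is a 0-based DWT index
        eta += gamma[j, y]
    eta -= eta.max()              # numerical stability before the softmax
    p = np.exp(eta)
    return p / p.sum()

p = alr_probs(day_of_year=200, apc=np.array([0.5, -0.2, 0.1]),
              ipc=np.array([1.0, 0.3]), prev_dwts=[7, 7, 12])
print(p.argmax() + 1, p.max())    # most likely DWT (1-based) and its probability
```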
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-

# common
import os
import os.path as op

# pip
import numpy as np
import xarray as xr

# DEV: override installed teslakit
import sys
sys.path.insert(0, op.join(os.path.abspath(''), '..', '..', '..'))

# teslakit
from teslakit.database import Database
from teslakit.alr import ALR_WRP
from teslakit.util.time_operations import xds_reindex_daily, xds_common_dates_daily
from teslakit.plotting.estela import Plot_DWTs_Probs
from teslakit.plotting.wts import Plot_Probs_WT_WT, Plot_Probs_WT_WT_anomaly
from teslakit.plotting.waves import Plot_Waves_DWTs
```

## Database and Site parameters

```
# --------------------------------------
# Teslakit database
p_data = r'C:\Users\lcag075\Dropbox\MAJURO-teslakit\teslakit\DATA'
db = Database(p_data)

# set site
db.SetSite('MAJURO')

# --------------------------------------
# load data and set parameters
MJO_fit = db.Load_MJO_hist()           # historical MJO
KMA_fit = db.Load_ESTELA_KMA()         # ESTELA + TCs Predictor
PCs_all = db.Load_SST_PCA()            # SST PCs (annual)

MJO_sim_all = db.Load_MJO_sim()        # MJO simulations (daily)
PCs_sim_all = db.Load_SST_PCs_sim_d()  # SST PCs simulations (daily)

# ALR fit parameters
alr_num_clusters = 42
alr_markov_order = 3
alr_seasonality = [2, 4]

# ALR simulation
num_sims = 100  # one simulation for each simulated MJO, SST
```

## ESTELA Predictor - Autoregressive Logistic Regression Fitting

```
# --------------------------------------
# Data used to FIT ALR model and preprocess:

# KMA: bmus (daily) (use sorted_bmus_storms, add 1 to get 1-42 bmus set)
BMUS_fit = xr.Dataset(
    {
        'bmus': (('time',), KMA_fit['sorted_bmus_storms'].values[:] + 1),
    },
    coords={'time': KMA_fit.time.values[:]}
)

# MJO: rmm1, rmm2 (daily)
print(MJO_fit)

# SST: PCs (annual)
sst_PCs = PCs_all.PCs.values[:]
PCs_fit = xr.Dataset(
    {
        'PC1': (('time',), sst_PCs[:, 0]),
        'PC2': (('time',), sst_PCs[:, 1]),
        'PC3': (('time',), sst_PCs[:, 2]),
    },
    coords={'time': PCs_all.time.values[:]}
)

# reindex annual data to daily data
PCs_fit = xds_reindex_daily(PCs_fit)
print(PCs_fit)

# --------------------------------------
# Mount covariates matrix (model fit: BMUS_fit, MJO_fit, PCs_fit)

# covariates_fit dates
d_fit = xds_common_dates_daily([MJO_fit, PCs_fit, BMUS_fit])

# KMA dates
BMUS_fit = BMUS_fit.sel(time=slice(d_fit[0], d_fit[-1]))

# PCs covars
cov_PCs = PCs_fit.sel(time=slice(d_fit[0], d_fit[-1]))
cov_1 = cov_PCs.PC1.values.reshape(-1, 1)
cov_2 = cov_PCs.PC2.values.reshape(-1, 1)
cov_3 = cov_PCs.PC3.values.reshape(-1, 1)

# MJO covars
cov_MJO = MJO_fit.sel(time=slice(d_fit[0], d_fit[-1]))
cov_4 = cov_MJO.rmm1.values.reshape(-1, 1)
cov_5 = cov_MJO.rmm2.values.reshape(-1, 1)

# join covars
cov_T = np.hstack((cov_1, cov_2, cov_3, cov_4, cov_5))

# normalize
cov_norm_fit = (cov_T - cov_T.mean(axis=0)) / cov_T.std(axis=0)
cov_fit = xr.Dataset(
    {
        'cov_norm': (('time', 'n_covariates'), cov_norm_fit),
        'cov_names': (('n_covariates',), ['PC1', 'PC2', 'PC3', 'MJO1', 'MJO2']),
    },
    coords={'time': d_fit}
)
print(cov_fit)

# --------------------------------------
# Autoregressive Logistic Regression
# model fit: BMUS_fit, cov_fit, num_clusters
# model sim: cov_sim, sim_num, sim_years

# ALR terms
d_terms_settings = {
    'mk_order': alr_markov_order,
    'constant': True,
    'long_term': False,
    'seasonality': (True, alr_seasonality),
    'covariates': (True, cov_fit),
}

# ALR wrapper
ALRW = ALR_WRP(db.paths.site.ESTELA.alrw)
ALRW.SetFitData(alr_num_clusters, BMUS_fit, d_terms_settings)

# ALR model fitting
ALRW.FitModel(max_iter=50000)

# Plot model p-values and params
ALRW.Report_Fit()
```

## ESTELA Predictor - Autoregressive Logistic Regression Simulation

```
# --------------------------------------
# Prepare Covariates for ALR simulations

# simulation dates
d_sim = xds_common_dates_daily([MJO_sim_all, PCs_sim_all])

# join covariates for all MJO, PCs simulations
l_cov_sims = []
for i in MJO_sim_all.n_sim:

    # select simulation
    MJO_sim = MJO_sim_all.sel(n_sim=i)
    PCs_sim = PCs_sim_all.sel(n_sim=i)

    # PCs covar
    cov_PCs = PCs_sim.sel(time=slice(d_sim[0], d_sim[-1]))
    cov_1 = cov_PCs.PC1.values.reshape(-1, 1)
    cov_2 = cov_PCs.PC2.values.reshape(-1, 1)
    cov_3 = cov_PCs.PC3.values.reshape(-1, 1)

    # MJO covars
    cov_MJO = MJO_sim.sel(time=slice(d_sim[0], d_sim[-1]))
    cov_4 = cov_MJO.rmm1.values.reshape(-1, 1)
    cov_5 = cov_MJO.rmm2.values.reshape(-1, 1)

    # join covars (do not normalize simulation covariates)
    cov_T_sim = np.hstack((cov_1, cov_2, cov_3, cov_4, cov_5))
    cov_sim = xr.Dataset(
        {
            'cov_values': (('time', 'n_covariates'), cov_T_sim),
        },
        coords={'time': d_sim}
    )
    l_cov_sims.append(cov_sim)

# use "n_sim" name to join covariates (ALR.Simulate() will recognize it)
cov_sims = xr.concat(l_cov_sims, dim='n_sim')
cov_sims = cov_sims.squeeze()
print(cov_sims)

# --------------------------------------
# Autoregressive Logistic Regression - simulate

# launch simulation
xds_alr = ALRW.Simulate(num_sims, d_sim, cov_sims)

# Store Daily Weather Types
DWT_sim = xds_alr.evbmus_sims.to_dataset()
db.Save_ESTELA_DWT_sim(DWT_sim)
print(DWT_sim)

# show sim report
ALRW.Report_Sim(py_month_ini=6);

# Plot Historical DWTs probabilities (with TCs DWTs)
bmus_fit = KMA_fit.sorted_bmus_storms.values[:] + 1
dbmus_fit = KMA_fit.time.values[:]
Plot_DWTs_Probs(bmus_fit, dbmus_fit, alr_num_clusters);

# Plot Simulated DWTs probabilities (with TCs DWTs)
bmus_sim = DWT_sim.isel(n_sim=0).evbmus_sims.values[:]
dbmus_sim = DWT_sim.time.values[:]
Plot_DWTs_Probs(bmus_sim, dbmus_sim, alr_num_clusters);

# Plot AWTs/DWTs Probabilities

# clusters to plot (no TCs)
n_clusters_AWT = 6
n_clusters_DWT = 42
n_sim = 0  # simulation to plot

# Plot AWTs/DWTs Probs - historical
AWT_hist, DWT_hist = db.Load_AWTs_DWTs_Plots_hist()
AWT_bmus = AWT_hist.bmus.values[:]
DWT_bmus = DWT_hist.bmus.values[:]
Plot_Probs_WT_WT(
    AWT_bmus, DWT_bmus, n_clusters_AWT, n_clusters_DWT,
    wt_colors=True, ttl='DWTs Probabilities by AWTs - Historical'
);

# Plot AWTs/DWTs sim - simulated
AWT_sim, DWT_sim = db.Load_AWTs_DWTs_Plots_sim(n_sim=0)
AWT_bmus = AWT_sim.bmus.values[:]
DWT_bmus = DWT_sim.bmus.values[:]
Plot_Probs_WT_WT(
    AWT_bmus, DWT_bmus, n_clusters_AWT, n_clusters_DWT,
    wt_colors=True, ttl='DWTs Probabilities by AWTs - Simulation'
);

# plot DWTs conditional probabilities to each AWT, minus mean probabilities

# Plot AWTs/DWTs Probs - historical
AWT_hist, DWT_hist = db.Load_AWTs_DWTs_Plots_hist()
AWT_bmus = AWT_hist.bmus.values[:]
DWT_bmus = DWT_hist.bmus.values[:]
Plot_Probs_WT_WT_anomaly(
    AWT_bmus, DWT_bmus, n_clusters_AWT, n_clusters_DWT,
    wt_colors=True, ttl='DWTs anomaly Probabilities by AWTs - Historical'
);

# Plot AWTs/DWTs sim - simulated
AWT_sim, DWT_sim = db.Load_AWTs_DWTs_Plots_sim(n_sim=0)
AWT_bmus = AWT_sim.bmus.values[:]
DWT_bmus = DWT_sim.bmus.values[:]
Plot_Probs_WT_WT_anomaly(
    AWT_bmus, DWT_bmus, n_clusters_AWT, n_clusters_DWT,
    wt_colors=True, ttl='DWTs anomaly Probabilities by AWTs - Simulation'
);
```
# Lab 1

CMSC 457 Spring 2022

Prepared by Yufan Zheng

Special thanks to Ethan Hickman: This note is based on Ethan's Lab 1 (https://github.com/eth-n/457lab1).

## Content

* Preliminary for programming in Qiskit
* A glance at quantum programming languages
* How to use Jupyter
* Qiskit basics
* Implementing an Elitzur–Vaidman bomb tester in Qiskit

## Preliminary

* Anaconda: A cross-platform Python distribution
    * Qiskit is still in development
    * Easy package management to avoid compatibility issues via virtual environments
* Installation
    * Install from the package downloaded from https://www.anaconda.com/download/
    * Run Anaconda Prompt
    * Create a new virtual environment: `conda create -n ENV_NAME python=3`, where `ENV_NAME` will be the name of this environment
    * Activate the new environment: `conda activate ENV_NAME`
    * Install Qiskit: `pip install qiskit[visualization]`
* Jupyter: Notebook interface for coding
    * Included in Anaconda
    * Run Anaconda Navigator (GUI)
    * Select the virtual environment created above and launch Jupyter notebook

## Quantum Programming Languages

See 'Quantum Programming' on Wikipedia (https://en.wikipedia.org/wiki/Quantum_programming) for a complete list.

* Too many languages!
* Classify them by type

### Quantum Assembly

* Assembly can be understood by hardware
* Comparable to classical assembly like x86:

```
push ebp     ; save calling function's stack frame (ebp)
mov ebp, esp ; make a new stack frame on top of our caller's stack
sub esp, 4   ; allocate 4 bytes of stack space for this function's local variables
jmp eax      ; jump to the address pointed by the eax register
```

* An example for Quil, a quantum assembly:

```
# QUANTUM TELEPORTATION
# Declare classical memory
DECLARE ro BIT[2]
# Create Bell Pair
H 0
CNOT 0 1
# Teleport
CNOT 2 0
H 2
MEASURE 2 ro[0]
MEASURE 0 ro[1]
# Classically communicate measurements
JUMP-UNLESS @SKIP ro[1]
X 1
LABEL @SKIP
JUMP-UNLESS @END ro[0]
Z 1
LABEL @END
```

### High-Level Quantum Programming Languages

* Comparable to classical high-level programming languages like Python, C++ and Java.
* An example for QCL, a high-level quantum programming language:

```
operator diffuse (qureg q) {
    H(q);          // Hadamard Transform
    Not(q);        // Invert q
    CPhase(pi, q); // Rotate if q=1111..
    !Not(q);       // undo inversion
    !H(q);         // undo Hadamard Transform
}

qureg x1[2]; // 2-qubit quantum register x1
qureg x2[2]; // 2-qubit quantum register x2
diffuse(x1);
H(x1);       // Hadamard operation on x1
H(x2[1]);    // Hadamard operation on the first qubit of the register x2
```

### Mid-Level Languages, or SDKs (software development kits)

* Designed to do quantum computing on prototype quantum devices
* Language design depends on what the device can do
* D-Wave Ocean: 0-1 quadratic formula optimization
    * e.g., $\max_{x_1,x_2,x_3 \in \{0,1\}}\{x_1x_2 + x_3 - x_2x_3\}$
    * The device uses the quantum adiabatic algorithm to solve it
    * Performance not guaranteed
* IBM **Qiskit**: Circuit model
    * Classical simulation supported
    * Circuits can be submitted to IBM quantum devices
    * Circuits will be rewritten to match the topology of an actual device

<img src="https://qiskit.org/documentation/_images/transpiling_core_steps.png">

## Jupyter with Python

* Same as running Python in Command Prompt or Terminal
* Press Ctrl + Enter to run the code block
* Each individual run will not do any initialization

```
x = 0
print(x)

x += 7
print(x)

def add3(x):
    return x + 3
print("add3() defined.")

print(x, add3(x))
```

* Click "Kernel > Restart" if you want to start over

## Qiskit

Documentation: https://qiskit.org/documentation/index.html

* Google-based programming

### Implementing Circuits

See https://qiskit.org/documentation/stubs/qiskit.circuit.QuantumCircuit.html for all possible operations (e.g. adding different gates) on `QuantumCircuit`.

```
from qiskit import QuantumCircuit

# Create a Quantum Circuit with 3 qubits and 2 classical bits
# Classical bits are for measurement results
circuit = QuantumCircuit(3, 2)

# Add a H gate on qubit 0
circuit.h(0)

# Add a CX (CNOT) gate on control qubit 0 and target qubit 1
circuit.cx(0, 1)

# Add a CSWAP gate on control qubit 2 and target qubits 0 and 1
circuit.cswap(2, 0, 1)

# Map the quantum measurement to the classical bits
circuit.measure(1, 0)
circuit.measure(2, 1)

# Draw the circuit
circuit.draw(output='mpl')
```

### Running Circuits

* On IBMQ devices
    * Create an account on https://quantum-computing.ibm.com/
    * Get your API token on the webpage
    * Run the following:

```
from qiskit import IBMQ
IBMQ.save_account(YOUR_API_TOKEN)
IBMQ.load_account()
```

* There will be multiple devices to choose from
    * Refer to https://www.youtube.com/watch?v=RrUTwq5jKM4 (starts from 9:00) for how to choose a device
* Local simulation: Qiskit Aer

## Generate a Bell State

```
from qiskit import QuantumCircuit, transpile

# Create a Quantum Circuit acting on the q register
circuit = QuantumCircuit(2)

# Add a H gate on qubit 0
circuit.h(0)

# Add a CX (CNOT) gate on control qubit 0 and target qubit 1
circuit.cx(0, 1)

# Draw the circuit
circuit.draw(output='mpl')
```

The result should be $\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$.

```
from qiskit.providers.aer import StatevectorSimulator

# Use Aer's statevector simulator
simulator = StatevectorSimulator()

# Job execution and getting the result as an object
job = simulator.run(circuit)
result = job.result()

# Get the state vector from the result object
print(result.get_statevector(circuit, decimals=3))
```

We can plot the unitary for the whole circuit.

```
from qiskit.providers.aer import UnitarySimulator

# Use Aer's unitary simulator
simulator = UnitarySimulator()

# Job execution and getting the result as an object
job = simulator.run(circuit)
result = job.result()

# Get the unitary matrix from the result object
print(result.get_unitary(circuit, decimals=3))
```

For simulating an experiment, we use `QasmSimulator` for efficiency.

```
from qiskit import QuantumCircuit, transpile

# Create a Quantum Circuit acting on the q register
circuit = QuantumCircuit(2, 2)

# Add a H gate on qubit 0
circuit.h(0)

# Add a CX (CNOT) gate on control qubit 0 and target qubit 1
circuit.cx(0, 1)

# Map the quantum measurement to the classical bits
circuit.measure(0, 0)
circuit.measure(1, 1)

# Draw the circuit
circuit.draw(output='mpl')

from qiskit.providers.aer import QasmSimulator
from qiskit.visualization import plot_histogram

# Use Aer's qasm_simulator
simulator = QasmSimulator()

# compile the circuit down to low-level QASM instructions
# supported by the backend (not needed for simple circuits)
compiled_circuit = transpile(circuit, simulator)

# Execute the circuit on the qasm simulator
job = simulator.run(compiled_circuit, shots=1000)

# Grab results from the job
result = job.result()

# Returns counts
counts = result.get_counts(compiled_circuit)
print("\nTotal count for 00 and 11 are:", counts)

# Draw Histogram
plot_histogram(counts)
```

## Elitzur–Vaidman Bomb Tester

* A bomb which may or may not be functional
* Functional
    * Let $B$ act on one qubit
        * ${\rm y}$: trigger the bomb
        * ${\rm n}$: do nothing
    * $B$ can be modeled by a measurement in $\{|\rm{y}\rangle,|\rm{n}\rangle\}$
        * Observing $|\rm{y}\rangle$: we are dead
        * Observing $|\rm{n}\rangle$: all good
* Not functional
    * $B = I$.
* Tester circuit
    * $0$ for ${\rm n}$ and $1$ for ${\rm y}$
    * Alternating between the rotation gate $R_\varepsilon$ on the first qubit and $B$
    * $R_\varepsilon = \left(\begin{array}{cc} \cos \varepsilon & -\sin \varepsilon\\ \sin \varepsilon & \cos \varepsilon \end{array}\right)$; basically rotating from $|0\rangle$ toward $|1\rangle$ by $\varepsilon$ radians
    * Repeat $N = \frac{\pi/2}{\varepsilon}$ times
    * Not functional: $|\rm{n}\rangle$/$|0\rangle$ will be gradually rotated to $|\rm{y}\rangle$/$|1\rangle$
    * Functional
        * We are dead if one out of the $N$ measurements has the result $|{\rm y}\rangle$
        * Otherwise the final state must be $|{\rm n}\rangle$

```
from math import pi, cos, sin
from qiskit import QuantumCircuit, transpile
from qiskit.quantum_info.operators import Operator

N = 2
eps = pi / 2 / N
is_bomb_functional = True

# We need one qubit and N classical bits for N measurements
circuit = QuantumCircuit(1, N)

# Define the gate R_epsilon
R_eps = Operator([
    [cos(eps), -sin(eps)],
    [sin(eps), cos(eps)]
])

for i in range(N):
    circuit.unitary(R_eps, 0, label='R_eps')
    if is_bomb_functional or i == N - 1:
        circuit.measure(0, i)

# Draw the circuit
circuit.draw(output='mpl')

from qiskit.providers.aer import QasmSimulator
from qiskit.visualization import plot_histogram

# Use Aer's qasm_simulator
simulator = QasmSimulator()

# compile the circuit down to low-level QASM instructions
# supported by the backend (not needed for simple circuits)
compiled_circuit = transpile(circuit, simulator)

# Execute the circuit on the qasm simulator
job = simulator.run(compiled_circuit, shots=100000)

# Grab results from the job
result = job.result()

# Returns counts
counts = result.get_counts(compiled_circuit)

# Draw histogram
plot_histogram(counts)

from math import log2

# lsb(x) will return the position of the first "1" in the binary representation of x
# lsb('110100') = '2'
# lsb('110010') = '1'
# lsb('011000') = '3'
def lsb(x):
    x = int(x)
    if x == 0:
        return 'None'
    return str(int(log2(x & (x ^ (x - 1)))))

new_counts = {'None': 0}
for i in range(N):
    new_counts[str(i)] = 0
for res in counts:
    new_counts[lsb(res)] += counts[res]

print(new_counts)
plot_histogram(new_counts)
```
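A quick sanity check on the histogram above, under the idealised (noiseless) analysis of the tester: with a functional bomb, each round survives with probability $\cos^2\varepsilon$, so the probability that the bomb never explodes — the all-zero outcome counted as `None` above — is $\cos^{2N}(\pi/2N)$, which tends to 1 as $N$ grows.

```
from math import pi, cos

for n in (2, 5, 10, 50, 100):
    eps = pi / 2 / n
    p_no_explosion = cos(eps) ** (2 * n)   # survive all n measurements
    print(f"N = {n:3d}: P(functional bomb never explodes) = {p_no_explosion:.4f}")

# For N = 2 this gives 0.25, which should match new_counts['None'] / 100000
# from the simulation above, up to statistical fluctuations.
```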
``` import os import json import pickle import random from collections import defaultdict, Counter from indra.literature.adeft_tools import universal_extract_text from indra.databases.hgnc_client import get_hgnc_name, get_hgnc_id from adeft.discover import AdeftMiner from adeft.gui import ground_with_gui from adeft.modeling.label import AdeftLabeler from adeft.modeling.classify import AdeftClassifier from adeft.disambiguate import AdeftDisambiguator from adeft_indra.ground.ground import AdeftGrounder from adeft_indra.model_building.s3 import model_to_s3 from adeft_indra.model_building.escape import escape_filename from adeft_indra.db.content import get_pmids_for_agent_text, get_pmids_for_entity, \ get_plaintexts_for_pmids adeft_grounder = AdeftGrounder() shortforms = ['CAP', 'CAPs'] model_name = ':'.join(sorted(escape_filename(shortform) for shortform in shortforms)) results_path = os.path.abspath(os.path.join('../..', 'results', model_name)) miners = dict() all_texts = {} for shortform in shortforms: pmids = get_pmids_for_agent_text(shortform) text_dict = get_plaintexts_for_pmids(pmids, contains=shortforms) text_dict = {pmid: text for pmid, text in text_dict.items() if len(text) > 5} miners[shortform] = AdeftMiner(shortform) miners[shortform].process_texts(text_dict.values()) all_texts.update(text_dict) longform_dict = {} for shortform in shortforms: longforms = miners[shortform].get_longforms() longforms = [(longform, count, score) for longform, count, score in longforms if count*score > 2] longform_dict[shortform] = longforms combined_longforms = Counter() for longform_rows in longform_dict.values(): combined_longforms.update({longform: count for longform, count, score in longform_rows}) grounding_map = {} names = {} for longform in combined_longforms: groundings = adeft_grounder.ground(longform) if groundings: grounding = groundings[0]['grounding'] grounding_map[longform] = grounding names[grounding] = groundings[0]['name'] longforms, counts = zip(*combined_longforms.most_common()) grounding_map, names, pos_labels = ground_with_gui(longforms, counts, grounding_map=grounding_map, names=names, no_browser=True, port=8890) result = [grounding_map, names, pos_labels] result grounding_map, names, pos_labels = [{'adherent proliferating cells': 'ungrounded', 'analysis of principal coordinates': 'ungrounded', 'calcium phosphate': 'CHEBI:CHEBI:77635', 'camp receptor protein': 'MESH:D002373', 'candidatus accumulibacter phosphatis': 'ungrounded', 'cap': 'ungrounded', 'capacitance': 'ungrounded', 'capecitabine': 'CHEBI:CHEBI:31348', 'capreomycin': 'CHEBI:CHEBI:3371', 'capsaicin': 'CHEBI:CHEBI:3374', 'capsaicinoids': 'CHEBI:CHEBI:46931', 'captopril': 'CHEBI:CHEBI:3380', 'carcinoma of the prostate': 'DOID:DOID:10286', 'cardiac action potential': 'GO:GO:0001508', 'cardiac activator period': 'ungrounded', 'carotid artery plaque': 'MESH:D016893', 'carprofen': 'CHEBI:CHEBI:364453', 'catabolite activator protein': 'MESH:D002373', 'catabolite gene activator protein': 'MESH:D002373', 'cationic antimicrobial peptide': 'MESH:D023181', 'cbl associated protein': 'HGNC:14926', 'cellulose acetate phthalate': 'CHEBI:CHEBI:31386', 'cementum attachment protein': 'HGNC:9639', 'cercarial antigen preparation': 'ungrounded', 'channel activating proteases': 'ungrounded', 'chemotherapy after pd 1 inhibitor treatment': 'ungrounded', 'child and adolescent psychiatry': 'MESH:D000296', 'chloramphenicol': 'CHEBI:CHEBI:17698', 'chlorantraniliprole': 'CHEBI:CHEBI:67113', 'cholinergic anti inflammatory pathway': 'MESH:D015213', 
'chronic abdominal pain': 'MESH:D015746', 'chronic apical periodontitis': 'DOID:DOID:11269', 'clathrin associated proteins': 'FPLX:Adaptor_protein', 'clinical assessment protocols': 'ungrounded', 'cochlear action potential': 'ungrounded', 'cold atmospheric plasma': 'cold_atmospheric_plasma', 'cold atmospheric plasmas': 'ungrounded', 'cold atmospheric pressure plasma': 'ungrounded', 'college of american pathologists': 'ungrounded', 'community acquired pneumonia': 'MESH:D011014', 'complement alternating pathway': 'MESH:D003170', 'compound action potential': 'GO:GO:0001508', 'compound action potential of the auditory nerve': 'GO:GO:0001508', 'compound action potentials': 'GO:GO:0001508', 'computer assisted planning': 'ungrounded', 'concentrated air particles': 'MESH:D052638', 'concentrated ambient particles': 'MESH:D052638', 'concentrated ambient particulate matter pm 2 5': 'MESH:D052638', 'concentrated ambient particulates': 'MESH:D052638', 'concentrated ambient pm2 5': 'MESH:D052638', 'concentrated fine particulate matter': 'MESH:D052638', 'controlled attenuation parameter': 'NCIT:C174123', 'coronary artery perforation': 'ungrounded', 'coronary atherosclerotic plaque': 'MESH:D058226', 'critical acromial point': 'ungrounded', 'cyclase associated protein': 'FPLX:CAP', 'cyclase associated proteins': 'FPLX:CAP', 'cyclic alternating pattern': 'cyclic_alternating_pattern', 'cyclic amp receptor protein': 'MESH:D002373', 'prostate cancer': 'MESH:D011471'}, {'CHEBI:CHEBI:77635': 'calcium phosphate', 'MESH:D002373': 'Cyclic AMP Receptor Protein', 'CHEBI:CHEBI:31348': 'capecitabine', 'CHEBI:CHEBI:3371': 'capreomycin', 'CHEBI:CHEBI:3374': 'capsaicin', 'CHEBI:CHEBI:46931': 'capsaicinoid', 'CHEBI:CHEBI:3380': 'captopril', 'DOID:DOID:10286': 'prostate carcinoma', 'GO:GO:0001508': 'action potential', 'MESH:D016893': 'Carotid Stenosis', 'CHEBI:CHEBI:364453': 'carprofen', 'MESH:D023181': 'Antimicrobial Cationic Peptides', 'HGNC:14926': 'SORBS1', 'CHEBI:CHEBI:31386': 'Cellulose acetate phthalate', 'HGNC:9639': 'HACD1', 'MESH:D000296': 'Adolescent Psychiatry', 'CHEBI:CHEBI:17698': 'chloramphenicol', 'CHEBI:CHEBI:67113': 'chlorantraniliprole', 'MESH:D015213': 'Neuroimmunomodulation', 'MESH:D015746': 'Abdominal Pain', 'DOID:DOID:11269': 'chronic apical periodontitis', 'FPLX:Adaptor_protein': 'Adaptor_protein', 'cold_atmospheric_plasma': 'cold_atmospheric_plasma', 'MESH:D011014': 'Pneumonia', 'MESH:D003170': 'Complement Pathway, Alternative', 'MESH:D052638': 'Particulate Matter', 'NCIT:C174123': 'Controlled Attenuation Parameter', 'MESH:D058226': 'Plaque, Atherosclerotic', 'FPLX:CAP': 'CAP', 'cyclic_alternating_pattern': 'cyclic_alternating_pattern', 'MESH:D011471': 'Prostatic Neoplasms'}, ['CHEBI:CHEBI:17698', 'CHEBI:CHEBI:31348', 'CHEBI:CHEBI:31386', 'CHEBI:CHEBI:3371', 'CHEBI:CHEBI:3374', 'CHEBI:CHEBI:3380', 'CHEBI:CHEBI:364453', 'CHEBI:CHEBI:46931', 'CHEBI:CHEBI:67113', 'CHEBI:CHEBI:77635', 'DOID:DOID:10286', 'DOID:DOID:11269', 'FPLX:Adaptor_protein', 'FPLX:CAP', 'GO:GO:0001508', 'HGNC:14926', 'HGNC:9639', 'MESH:D000296', 'MESH:D002373', 'MESH:D003170', 'MESH:D011014', 'MESH:D011471', 'MESH:D015213', 'MESH:D015746', 'MESH:D016893', 'MESH:D023181', 'MESH:D052638', 'MESH:D058226', 'NCIT:C174123']] excluded_longforms = ['cap'] grounding_dict = {shortform: {longform: grounding_map[longform] for longform, _, _ in longforms if longform in grounding_map and longform not in excluded_longforms} for shortform, longforms in longform_dict.items()} result = [grounding_dict, names, pos_labels] if not 
os.path.exists(results_path): os.mkdir(results_path) with open(os.path.join(results_path, f'{model_name}_preliminary_grounding_info.json'), 'w') as f: json.dump(result, f) additional_entities = {} unambiguous_agent_texts = {} labeler = AdeftLabeler(grounding_dict) corpus = labeler.build_from_texts((text, pmid) for pmid, text in all_texts.items()) agent_text_pmid_map = defaultdict(list) for text, label, id_ in corpus: agent_text_pmid_map[label].append(id_) entity_pmid_map = {entity: set(get_pmids_for_entity(*entity.split(':', maxsplit=1), major_topic=True))for entity in additional_entities} intersection1 = [] for entity1, pmids1 in entity_pmid_map.items(): for entity2, pmids2 in entity_pmid_map.items(): intersection1.append((entity1, entity2, len(pmids1 & pmids2))) intersection2 = [] for entity1, pmids1 in agent_text_pmid_map.items(): for entity2, pmids2 in entity_pmid_map.items(): intersection2.append((entity1, entity2, len(set(pmids1) & pmids2))) list(zip(longforms, counts)) intersection1 intersection2 all_used_pmids = set() for entity, agent_texts in unambiguous_agent_texts.items(): used_pmids = set() for agent_text in agent_texts: pmids = set(get_pmids_for_agent_text(agent_text)) new_pmids = list(pmids - all_texts.keys() - used_pmids) text_dict = get_plaintexts_for_pmids(new_pmids, contains=agent_texts) corpus.extend([(text, entity, pmid) for pmid, text in text_dict.items()]) used_pmids.update(new_pmids) all_used_pmids.update(used_pmids) for entity, pmids in entity_pmid_map.items(): new_pmids = list(set(pmids) - all_texts.keys() - all_used_pmids) if len(new_pmids) > 10000: new_pmids = random.choices(new_pmids, k=10000) text_dict = get_plaintexts_for_pmids(new_pmids) corpus.extend([(text, entity, pmid) for pmid, text in text_dict.items()]) %%capture classifier = AdeftClassifier(shortforms, pos_labels=pos_labels, random_state=1729) param_grid = {'C': [100.0], 'max_features': [10000]} texts, labels, pmids = zip(*corpus) classifier.cv(texts, labels, param_grid, cv=5, n_jobs=5) classifier.stats disamb = AdeftDisambiguator(classifier, grounding_dict, names) disamb.dump(model_name, results_path) print(disamb.info()) model_to_s3(disamb) ```
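Once the model has been dumped (and made available to adeft, e.g. via S3), it could be loaded and applied roughly as below. This is a hypothetical usage sketch: the `load_disambiguator`/`disambiguate` calls follow adeft's published examples and should be treated as assumptions here, not as part of this notebook's pipeline.

```
from adeft.disambiguate import load_disambiguator

ad = load_disambiguator('CAP')  # assumes the trained model can be found locally or on S3
text = ('The compound action potential (CAP) of the auditory nerve '
        'was recorded before and after noise exposure.')
grounding, name, scores = ad.disambiguate(text)
print(grounding, name)          # for this text one would expect GO:GO:0001508 (action potential)
```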
# TP 15 - Sorting collections

For each exercise, enter your answer in the associated Python editor. Save your changes, preferably in ipynb format, when you are done.

## Exercise 1 - Test Driven Development

Imagine you have a function `f` that takes a list as input and sorts it. For example:

```py
L = [2, 3, 1]
f(L)
print(L)
```

will display:

```
[1, 2, 3]
```

Write unit tests that take this function as a parameter and check that its behaviour is correct. You may reuse the functions developed in TP 14.

```
def test_1(f):
    """f is a function that sorts a list."""
    pass

def test_2(f):
    """f is a function that sorts a list."""
    pass

# Write as many unit tests as you think necessary
```

## Exercise 2 - Bubble Sort

Using the course slides as a guide, implement the *Bubble Sort* algorithm in a function named `tri_bulle` that takes a list `a` as input. Then call each of the unit tests on the `tri_bulle` function.

```
```

## Exercise 3 - Selection Sort

Using the course slides as a guide, implement the *Selection Sort* algorithm in a function named `tri_selection` that takes a list `a` as input. Then call each of the unit tests on the `tri_selection` function.

```
```

## Exercise 4 - Partition

Using the course slides as a guide, implement the *Partition* algorithm in a function named `partition` that takes as input a list `a` and two bounds `debut` and `fin`. The partition is done with respect to the element `L[debut]` and is carried out within `L[debut:fin+1]`.

```
```

## Exercise 5 - Quick Sort

Using the `partition` function from the previous exercise and the course slides, implement the *Quick Sort* algorithm in a function named `tri_rapide` that takes a list `a` as input. Then call each of the unit tests on the `tri_rapide` function.

```
```

## Exercise 6 - Merge

Using the course slides as a guide, implement the *Merge* algorithm in a function named `fusion`. This function takes 4 arguments:

* the list `a`,
* the index `debut`,
* the index `milieu`,
* the index `fin`.

The list `a` must be such that:

* `a[debut:milieu]` is sorted,
* `a[milieu+1:fin]` is sorted.

The `fusion` function modifies `a` so that it is sorted over the whole interval `[debut:fin]`.

```
```

## Exercise 7 - Merge Sort

Using the `fusion` function from the previous exercise and the course slides, implement the *Merge Sort* algorithm in a function named `tri_fusion` that takes a list `a` as input. Then call each of the unit tests on the `tri_fusion` function.

```
```

## Exercise 8 - Comparing the algorithms

Reuse your `genere_liste_aleatoire` function from TP 11.

```
```

Implement a function `gen_data_set` that returns a list of lists of random numbers. Each of these lists of random numbers is generated with `genere_liste_aleatoire` so as to produce respectively $10^0$, $10^1$, $10^2$, ..., $10^4$ numbers. Thus:

```py
data = gen_data_set()
for element in data:
    print(len(element))
```

must display:

```
1
10
100
1000
10000
```

```
```

Implement a function `bench_data_set` that takes as input a function `f` and a data list `data`, and returns a list of the times taken to apply `f` to each element of `data`. Use `timeit` to measure the execution time (a minimal `timeit` sketch is given at the end of this notebook). For example:

```py
def python_sort(L):
    L.sort()

data = gen_data_set()
result = bench_data_set(python_sort, data)
print(result)
```

might display:

```
[2.3989996407181025e-06, 1.963002432603389e-06, 1.1650001397356391e-05, 0.0001281039985769894, 0.0016064369992818683]
```

```
```

Call `bench_data_set` on `tri_bulle`, `tri_selection`, `tri_rapide` and `tri_fusion`. Compare them with each other and with `python_sort`. What do you conclude?

```
```
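The following is a minimal sketch, not the expected solution to the exercises, of how `timeit.default_timer` can be used to time a single call; it may help when writing `bench_data_set`.

```
# Minimal timing sketch using the timeit module's clock.
from timeit import default_timer

def time_call(f, arg):
    """Return the time in seconds taken by one call f(arg)."""
    start = default_timer()
    f(arg)
    return default_timer() - start

# Example: time Python's built-in sort on a copy of a reversed list.
data = list(range(1000, 0, -1))
print(time_call(lambda L: L.sort(), data.copy()))
```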
# Exercise: Current induced domain wall motion

## Problem description

In this tutorial we show how the Zhang-Li spin transfer torque (STT) can be included in micromagnetic simulations. To illustrate that, we will try to move a domain wall pair using spin-polarised current.

Let us simulate a two-dimensional sample with length $L = 500 \,\text{nm}$, width $w = 20 \,\text{nm}$ and discretisation cell $(2.5 \,\text{nm}, 2.5 \,\text{nm}, 2.5 \,\text{nm})$. The material parameters are:

- saturation magnetisation $M_\text{s} = 5.8 \times 10^{5} \,\text{A}\,\text{m}^{-1}$,
- exchange energy constant $A = 15 \,\text{pJ}\,\text{m}^{-1}$,
- Dzyaloshinskii-Moriya energy constant $D = 3 \,\text{mJ}\,\text{m}^{-2}$,
- uniaxial anisotropy constant $K = 0.5 \,\text{MJ}\,\text{m}^{-3}$ with easy axis $\mathbf{u}$ in the out-of-plane direction $(0, 0, 1)$,
- gyromagnetic ratio $\gamma = 2.211 \times 10^{5} \,\text{m}\,\text{A}^{-1}\,\text{s}^{-1}$, and
- Gilbert damping $\alpha=0.3$.

## Domain-wall pair

```
import oommfc as mc
import discretisedfield as df
import micromagneticmodel as mm

# Definition of parameters
L = 500e-9  # sample length (m)
w = 20e-9  # sample width (m)
d = 2.5e-9  # discretisation cell size (m)
Ms = 5.8e5  # saturation magnetisation (A/m)
A = 15e-12  # exchange energy constant (J/m)
D = 3e-3  # Dzyaloshinskii-Moriya energy constant (J/m**2)
K = 0.5e6  # uniaxial anisotropy constant (J/m**3)
u = (0, 0, 1)  # easy axis
gamma0 = 2.211e5  # gyromagnetic ratio (m/As)
alpha = 0.3  # Gilbert damping

# Mesh definition
p1 = (0, 0, 0)
p2 = (L, w, d)
cell = (d, d, d)
region = df.Region(p1=p1, p2=p2)
mesh = df.Mesh(region=region, cell=cell)

# Micromagnetic system definition
system = mm.System(name='domain_wall_pair')
system.energy = mm.Exchange(A=A) + \
                mm.DMI(D=D, crystalclass='Cnv_z') + \
                mm.UniaxialAnisotropy(K=K, u=u)
system.dynamics = mm.Precession(gamma0=gamma0) + mm.Damping(alpha=alpha)
```

Because we want to move a DW pair, we need to initialise the magnetisation in an appropriate way before we relax the system.

```
def m_value(pos):
    x, y, z = pos
    if 20e-9 < x < 40e-9:
        return (0, 0, -1)
    else:
        return (0, 0, 1)

system.m = df.Field(mesh, dim=3, value=m_value, norm=Ms)
system.m.z.plane('z').mpl()
```

Now, we can relax the magnetisation.

```
md = mc.MinDriver()
md.drive(system)

system.m.z.plane('z').mpl.scalar()
```

Now we can add the STT term to the dynamics equation.

```
ux = 400  # velocity in x-direction (m/s)
beta = 0.5  # non-adiabatic STT parameter
system.dynamics += mm.ZhangLi(u=ux, beta=beta)  # please note the use of the `+=` operator
```

And drive the system for $0.5 \,\text{ns}$:

```
td = mc.TimeDriver()
td.drive(system, t=0.5e-9, n=100)

system.m.z.plane('z').mpl.scalar()
```

We see that the DW pair has moved in the positive $x$ direction. Now, let us visualise the motion using an interactive plot.

```
import k3d
import micromagneticdata as md

data = md.Data(system.name)

plot = k3d.plot()

@df.interact(n=data[1].slider())
def my_plot(n):
    data[1][n].z.k3d.scalar(plot=plot, interactive_field=system.m)

plot.display()
```

## Single domain wall

Modify the previous code to obtain one domain wall instead of a domain wall pair and move it using the same current.
**Solution**

```
# Definition of parameters
L = 500e-9  # sample length (m)
w = 20e-9  # sample width (m)
d = 2.5e-9  # discretisation cell size (m)
Ms = 5.8e5  # saturation magnetisation (A/m)
A = 15e-12  # exchange energy constant (J/m)
D = 3e-3  # Dzyaloshinskii-Moriya energy constant (J/m**2)
K = 0.5e6  # uniaxial anisotropy constant (J/m**3)
u = (0, 0, 1)  # easy axis
gamma0 = 2.211e5  # gyromagnetic ratio (m/As)
alpha = 0.3  # Gilbert damping

# Mesh definition
p1 = (0, 0, 0)
p2 = (L, w, d)
cell = (d, d, d)
region = df.Region(p1=p1, p2=p2)
mesh = df.Mesh(region=region, cell=cell)

# Micromagnetic system definition
system = mm.System(name='domain_wall')
system.energy = mm.Exchange(A=A) + \
                mm.DMI(D=D, crystalclass='Cnv_z') + \
                mm.UniaxialAnisotropy(K=K, u=u)
system.dynamics = mm.Precession(gamma0=gamma0) + mm.Damping(alpha=alpha)

def m_value(pos):
    x, y, z = pos
    # Modified line: a single transition at x = 20 nm gives one domain wall
    if 20e-9 < x:
        return (0, 0, -1)
    else:
        return (0, 0, 1)

system.m = df.Field(mesh, dim=3, value=m_value, norm=Ms)
system.m.z.plane('z').mpl()

md = mc.MinDriver()
md.drive(system)

system.m.z.plane('z').k3d.scalar()

ux = 400  # velocity in x direction (m/s)
beta = 0.5  # non-adiabatic STT parameter

system.dynamics += mm.ZhangLi(u=ux, beta=beta)

td = mc.TimeDriver()
td.drive(system, t=0.5e-9, n=100)

system.m.z.plane('z').k3d.scalar()
```
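As a rough, hedged sanity check only: in the idealised one-dimensional picture of Zhang-Li-driven wall motion the steady-state wall velocity is commonly approximated by $v \approx (\beta/\alpha)\,u$. Whether this relation holds quantitatively for the simulated strip is not guaranteed, but it gives an order-of-magnitude expectation for the displacement seen above.

```
# Back-of-the-envelope estimate, assuming the 1D steady-state relation v = (beta/alpha) * u.
ux = 400      # m/s
beta = 0.5
alpha = 0.3
t = 0.5e-9    # s

v = (beta / alpha) * ux
print(f"estimated wall velocity: {v:.0f} m/s")
print(f"estimated displacement after {t*1e9:.1f} ns: {v*t*1e9:.0f} nm")
```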
# **2-1. Linear Regression** **Jonathan Choi 2021** **[Deep Learning By Torch] End to End study scripts of Deep Learning by implementing code practice with Pytorch.** If you have an any issue, please PR below. [[Deep Learning By Torch] - Github @JonyChoi](https://github.com/jonychoi/Deep-Learning-By-Torch) ## Theoretical Overview $H(x)$: How to predict for a given $x$ value. $cost(W,b)$: How well $H(x)$ predicted $y$. ## Imports ``` import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim torch.manual_seed(1) ``` ## Data Basically the PyTorch has a NCHW fomat ``` x_train = torch.FloatTensor([[1], [2], [3]]) y_train = torch.FloatTensor([[1], [2], [3]]) print(x_train) print(x_train.shape) print(y_train) print(y_train.shape) ``` ## Weight Initialization ``` W = torch.zeros(1, requires_grad = True) print(W) b = torch.zeros(1, requires_grad = True) print(b) ``` ## Hyptothesis $H(x) = Wx + b$ ``` hypothesis = x_train * W + b print(hypothesis) ``` ## Cost Function $ cost(W, b) = \frac{1}{m} \sum^m_{i=1}( H(x^{(i)}) - y^{(i)})^2 $ ``` print(hypothesis) print(y_train) print(hypothesis - y_train) print((hypothesis - y_train)**2) cost = torch.mean((hypothesis - y_train)**2) print(cost) ``` ## Gradient Descent ``` optimizer = optim.SGD([W, b], lr=0.01) optimizer.zero_grad() cost.backward()#Computes the gradient of current tensor w.r.t. graph leaves. optimizer.step()#Performs a single optimization step print(W) print(b) ``` Now Let's check whether the hypothesis works better ``` hypothesis = x_train * W + b print(hypothesis) cost = torch.mean((hypothesis - y_train) ** 2) print(cost) ``` ## Training with Full Code In reality, we will be training on the dataset for multiple epochs. This can be done simply with loops. ``` #DATA x_train = torch.FloatTensor([[1], [2], [3]]) y_train = torch.FloatTensor([[1], [2], [3]]) #Model Initialization W = torch.zeros(1, requires_grad=True) b = torch.zeros(1, requires_grad=True) #Set Optimizer optimizer = optim.SGD([W, b], lr=0.01) nb_epochs = 1000 for epoch in range(nb_epochs + 1): hypothesis = x_train * W + b cost = torch.mean((hypothesis - y_train)**2) optimizer.zero_grad() cost.backward() optimizer.step() #print logs for 100 epochs if epoch % 100 == 0: print('Epoch {:4d}/{} W: {:.3f}, b: {:.3f} Cost: {:.6f}'.format(epoch, nb_epochs, W.item(), b.item(), cost.item())) print(hypothesis) ``` ## High-level Implementation with ```nn.Module``` ``` x_train = torch.FloatTensor([[1], [2], [3]]) y_train = torch.FloatTensor([[1], [2], [3]]) ``` #### Basically, all the models providing by PyTorch are made of inheriting ```nn.Module```. Now we are going to build linear regression model ``` class LinearRegressionModel(nn.Module): def __init__(self): super().__init__() self.linear = nn.Linear(1, 1) def forward(self, x): return self.linear(x) ``` #### At the model __init__, we are going to define the layers that will be used. Here, we are building the linear regression model, we will use ```nn.Linear```. And at the ```forward```, we will tell it how this model should return the output from the input. ``` model = LinearRegressionModel() ``` ## Hypothesis Now Let's make hypothesis by generating model Basically, The ***Hypothesis*** means the return value of ```forward()``` of ```nn.Module```, which means that it is the forward result of the Linear Layer. ### **Hypothesis === Forward** ``` hypothesis = model(x_train) print(list(model.parameters())) print(hypothesis) ``` ## Cost Now, Let's get cost by MSE(Mean Squared Error). 
The MSE loss function is also provided by PyTorch.

```
print(hypothesis)
print(y_train)

cost = F.mse_loss(hypothesis, y_train)
print(cost)
```

## Gradient Descent

Let's reduce the cost with an optimizer provided by PyTorch. You can use any of the optimizers in `torch.optim`. Here, we will use SGD.

```
optimizer = optim.SGD(model.parameters(), lr=0.01)

optimizer.zero_grad()
cost.backward()
optimizer.step()
```

## Training with Full Code

Now that we understand linear regression, let's fit it.

```
# Data
x_train = torch.FloatTensor([[1], [2], [3]])
y_train = torch.FloatTensor([[1], [2], [3]])

# Initialize the model
model = LinearRegressionModel()

# Set the optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)

nb_epochs = 1000
for epoch in range(nb_epochs + 1):

    # Hypothesis
    pred = model(x_train)

    # Cost
    cost = F.mse_loss(pred, y_train)

    # Optimize the cost
    optimizer.zero_grad()
    cost.backward()
    optimizer.step()

    # Print logs every 100 epochs
    if epoch % 100 == 0:
        params = list(model.parameters())
        W = params[0].item()
        b = params[1].item()
        print('Epoch {:4d}/{} W: {:.3f}, b: {:.3f} Cost: {:.6f}'.format(
            epoch, nb_epochs, W, b, cost.item()
        ))
```
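The notebook ends after training; as a small added illustration (not in the original), the fitted model can be queried on a new input — the value 4 here is arbitrary — and should return a prediction close to 4 once $W \approx 1$ and $b \approx 0$:

```
# Illustrative inference step on an unseen input.
new_x = torch.FloatTensor([[4.0]])

with torch.no_grad():        # no gradients needed for inference
    prediction = model(new_x)

print(prediction)            # expected to be close to 4 after training
```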
``` %matplotlib inline %load_ext autoreload %autoreload 2 ``` # Parsing the notebook ## Finding tags, filter cells ``` #export #default_exp om import re _re_tag = re.compile(r"^\s*#([a-zA-Z_]+).*$") #export def extract_tag(line): """Returns the name of a tag (#name), if it occurs at the beginning of the line, or None.""" m = _re_tag.match(line) if m is not None: return m.group(1) else: return None assert extract_tag("#nbx ") == "nbx" assert extract_tag("#nbx something else ") == "nbx" assert extract_tag("# nbx something else ") == None assert extract_tag("#xarg ") == "xarg" #export def contains_tag(name): return lambda line: extract_tag(line) == name is_nbx = contains_tag("nbx") assert is_nbx("#nbx") assert is_nbx("# nbx") == False assert is_nbx(" #nbx") assert is_nbx(" #nbx") #export def is_nbx_cell(cell): if cell['cell_type'] != 'code': return False if not cell['source']: return False line0 = cell['source'][0] return is_nbx(line0) ``` When we create our python script we need to exclude jupyter's *magic* functions and shell commands that can beused in a code cell. ``` #export _re_magic = re.compile(r"^\s*%{1,2}|^\s*!") #export def is_magic_or_shell(line): m = _re_magic.match(line) return m is not None assert is_magic_or_shell("%pwd ") assert is_magic_or_shell("%%capture ") assert is_magic_or_shell("!ls") ``` ## Parsing "xargs" We need to parse the line below `#xarg`, and decompose it into a variable declaration and the parameter range for the sweep. ``` #export _re_xarg = re.compile(r""" # parses the line below an `xarg` tag: ^ ([^=]+) = ([^;]+) ;? (.*) $""", re.VERBOSE) #export def strip(s): return s.strip() def parse_xarg(line): m = _re_xarg.match(line) name, val, sweep = map(strip, m.groups()) return name, val, sweep parse_xarg("x = 0 ; [0,1,2,3]") ``` ## Extracting imports ``` #export import ast from collections import namedtuple Import = namedtuple("Import", ["module", "name", "alias"]) def get_imports_from_src(src): """Taken from: https://stackoverflow.com/questions/9008451/python-easy-way-to-read-all-import-statements-from-py-module """ root = ast.parse(src) start = None end = None occ = [] imp = [] stm = [] for node in ast.iter_child_nodes(root): occ.append((node.lineno)) if isinstance(node, ast.Import): imp.append(True) module = [] elif isinstance(node, ast.ImportFrom): imp.append(True) module = node.module.split('.') else: imp.append(False) continue for n in node.names: stm.append(Import(".".join(module), n.name, n.asname)) print(occ) ign = [] lines = src.split("\n") occ.append(len(lines)) for i in range(len(occ)-1): if imp[i] == True: ign.extend(range(occ[i]-1,occ[i+1]-1)) return stm, ign #export def create_import_statement(module, name, alias): if module == '': return f"import {name}" + ("" if alias == None else f" as {alias}") else: return f"from {module} import {name}" + ("" if alias == None else f" as {alias}") src = """ import ast x=0; from collections import (namedtuple, b) """ #export def extract_imports_from(src): imps, ign = get_imports_from_src(src) return [create_import_statement(*im) for im in imps], ign extract_imports_from(src) ``` ## Parsing "nbx" cells First let's load the notebook ``` #export import json from argparse import Namespace class Bunch(object): def __init__(self, adict={}): self.__dict__.update(adict) def __repr__(self): return str(self.__dict__.keys()) def load_nb(fname): nbdict = json.load(open(fname,'r',encoding="utf-8")) nb = Bunch(nbdict) nb.name = fname return nb nb = load_nb("om.ipynb") nb #export def parse_src(a, src): if len(src) == 0: 
return a, [] tag = extract_tag(src[0]) if tag is None: if not is_magic_or_shell(src[0]): a['xbody'].append(src[0]) rest = src[1:] elif tag == 'nbx': a['xbody'].append(src[0]) rest = src[1:] elif tag == 'xarg': a['xarg'].append(src[1]) rest = src[2:] else: rest = src[1:] return parse_src(a, rest) def parse_nbx_cell(cell): a = {'xbody': [], 'xarg': [] } a, _ = parse_src(a, cell['source']) return a['xarg'], a['xbody'] ``` Let's see how it works ``` #nbx #xarg x = 0 ; [0,1,2,3,4] #xarg y = 0 ; #xarg task_id = 0 #xarg results_dir = "./" # some comment z = 1 #nbx print("some result") nb = load_nb("om.ipynb") for cell in list(filter(is_nbx_cell, nb.cells)): print("\n*****************\n** Parsed Cell **\n*****************\n") xarg, xbody = parse_nbx_cell(cell) print(xarg) print(xbody) ``` ## Parsing the whole thing ``` #export from functools import reduce def concat(list1, list2): return list1 + list2 def unzip(zipped): return zip(*zipped) def negate(func): return lambda x: not func(x) def is_constarg(a): return len(a[2]) == 0 not_constarg = negate(is_constarg) def get_item(i): return lambda x: x[i] def get_items(*I): return lambda x: tuple([x[i] for i in I]) #export def parse_nb(nb): nbx_cells = filter(is_nbx_cell, nb.cells) xargs = [] xbody = [] for cell in nbx_cells: xa, xb = parse_nbx_cell(cell) xargs += [parse_xarg(line) for line in xa] xbody += xb pnb = Bunch() pnb.name = nb.name pnb.func_body = xbody pnb.args = list(map(get_items(0,1), xargs)) pnb.const_args = list(map(get_items(0,1), filter(is_constarg, xargs))) pnb.sweep_args = list(map(get_items(0,2), filter(not_constarg, xargs))) return pnb nb = load_nb("om.ipynb") pnb = parse_nb(nb) print(pnb.name) print(pnb.args) print(pnb.const_args) print(pnb.sweep_args) print(pnb.func_body) ``` # Creating the file bundle ``` #export def get_arrays(num, m=1000): if num < m: return [[1,num]] arrays = [] for i in range(num//m): arrays.append([i*m+1, (i+1)*m]) last = arrays[-1][1] if last < num: arrays.append([last+1, num]) return arrays get_arrays(1543) #export def init_job(start, end, step): return f"job_0=`sbatch --array={start}-{end}%{step} job.sh | awk '{{ print $4 }}'`" def cont_job(j, start, end, step): return f"job_{j}=`sbatch --array={start}-{end}%{step} --dependency=afterok:$job_{j-1} job.sh | awk '{{ print $4 }}'`" def chain_jobs(arrays, step): s = "" for i, arr in enumerate(arrays): if i ==0: s += init_job(arr[0], arr[1], step) else: s += cont_job(i, arr[0], arr[1], step) s += "\n" return s print(chain_jobs(get_arrays(1543), step=1)) ``` ## NbxBundle Class ``` #export from pathlib import PurePosixPath as Path import pkg_resources import importlib from nbx.templ import * import os def check_parsed_nb(pnb): keys = list(map(get_item(0), pnb.args)) if "task_id" not in keys: raise KeyError("You didn't specify `task_id`!!") if "results_dir" not in keys: raise KeyError("You didn't specify `results_dir`!!") class NbxBundle(): def __init__(self, nbname, mail_user, name=None, linting=True, time=[1,0], ntasks=10, step=5, simg="pytorch.simg", max_arr=900, mem_per_cpu=2000): if name is None: name = Path(nbname).stem self.max_arr = max_arr self.nbname = nbname self.name = name self.path = Path(f"{name}_nbx") nb = load_nb(nbname) nb = parse_nb(nb) self.nb = nb check_parsed_nb(nb) self.create_folders() self.create_script("experiment.tpl", "experiment.py", vars(nb)); p = ".".join((self.path/'experiment').parts) exp = importlib.import_module(p) len(exp.sweep_params) self.num_configs = len(exp.sweep_params) self.create_run_script(len(exp.sweep_params), step, 
max_arr) self.create_script("wrapper.tpl", "wrapper.py", { 'experiment_module': "experiment"}); self.create_script("job.tpl", "job.sh", { 'job_name': name, 'nbx_folder': os.environ['omx'], 'script_to_run': "wrapper.py", 'results_dir': "./results", 'hours': time[0], 'mins': time[1], 'ntasks': ntasks, 'script': 'wrapper.py', 'simg': Path(os.environ['omsimg'])/simg, 'mail_user': mail_user, 'mem_per_cpu': mem_per_cpu }); print(self) if linting: self.check_scripts() def create_run_script(self, num, step, max_arr): path = self.path/'run.sh' with open(path, "w", newline="\n") as f: f.write("#!/bin/sh\n\n") f.write("#SBATCH --out=io/runner_out__%A\n") f.write("#SBATCH --error=io/runner_err__%A\n\n") f.write(chain_jobs(get_arrays(num, max_arr), step)) def create_script(self, tname, fname, vars): tpath = Path(pkg_resources.resource_filename( __name__, f"/templates/{tname}")) create_file_from_template(tpath, self.path/fname, vars) def create_folders(self): if not os.path.exists(self.path): os.makedirs(self.path) os.makedirs(self.path/'io') if os.path.exists('./src'): if not os.path.exists(self.path/'src'): os.makedirs(self.path/'src') os.system(f"cp -r src/* {self.path/'src'}") open(self.path/'__init__.py', 'a').close() def run_experiment(self): self.run() print("check status with `ssh $om squeue -u $omid` or `bundle.status()`") print("pull results with `bundle.pull_results()`") def __str__(self): return render_template_from_string(BUNDLE_SUMMARY, vars(self)) def _run_command(self, cmd): stream = os.popen(cmd) output = stream.read() return output.strip() def push(self): cmd = f"scp -r {self.path} $om:$omx" output = self._run_command(cmd) if len(output) > 0: print(output) def run(self): cmd = f"ssh $om sbatch -D $omx/{self.path} $omx/{self.path}/run.sh" print(self._run_command(cmd)) def status(self): cmd = f"ssh $om squeue -u $omid" print(self._run_command(cmd)) def pull_results(self): cmd = f"scp -r $om:$omx/{self.path}/results ./{self.path}" output = self._run_command(cmd) if len(output) > 0: print(output) print(f"copied to `{self.path}/results/") def check_scripts(self): output = self._run_command(f"pylint -E {self.path/'wrapper.py'}") if len(output) > 0: print(output) raise "Check wrapper script" output = self._run_command(f"pylint -E {self.path/'experiment.py'}") if len(output) > 0: print(output) raise "Check experiment script" print("(pylinting went ok)") BUNDLE_SUMMARY = """ ** nbx bundle created ** Path: {{path}} Source nb: {{nbname}} Parameters (#configs {{num_configs}}): {% for k,v in nb.sweep_args %}* {{k}} = {{v}}{% if not loop.last %} {% endif %}{% endfor %} {% for k,v in nb.const_args %} {{k}} = {{v}}{% if not loop.last %} {% endif %}{% endfor %} Instructions: Copy to remote, run the bash script, and pull the results - `bundle.push()` or `scp -r {{path}} $om:$omx` - `bundle.run()` or `ssh $om sbatch -D $omx/{{path}} $omx/{{path}}/run.sh` - `bundle.pull_results()` or `scp -r $om:$omx/{{path}}/results ./results` """ ```
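The `BUNDLE_SUMMARY` above already prints the manual commands; purely as a hedged usage sketch (the notebook name, e-mail address and resource settings below are made-up placeholders), a typical round trip with `NbxBundle` might look like:

```
# Hypothetical usage sketch; file name, e-mail and resources are placeholders.
bundle = NbxBundle("om.ipynb",
                   mail_user="user@example.com",
                   time=[2, 0],     # 2 hours, 0 minutes wall time
                   ntasks=4,
                   step=5)

bundle.push()             # copy the bundle folder to the cluster
bundle.run_experiment()   # submit the job chain and print follow-up hints
# later on:
bundle.status()           # check the queue
bundle.pull_results()     # copy the results folder back locally
```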
### Note

* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.

```
# Dependencies and Setup
import pandas as pd

# Files to Load (Remember to Change These)
school_data_to_load = "Resources/schools_complete.csv"
student_data_to_load = "Resources/students_complete.csv"

# Read School and Student Data Files and store them in Pandas DataFrames
school_data = pd.read_csv(school_data_to_load)
student_data = pd.read_csv(student_data_to_load)

# Combine the data into a single dataset.
school_data_complete = pd.merge(student_data, school_data, how="left", on="school_name")
```

## District Summary

* Calculate the total number of schools
* Calculate the total number of students
* Calculate the total budget
* Calculate the average math score
* Calculate the average reading score
* Calculate the percentage of students with a passing math score (70 or greater)
* Calculate the percentage of students with a passing reading score (70 or greater)
* Calculate the percentage of students who passed math **and** reading (% Overall Passing)
* Create a dataframe to hold the above results (a sketch of these District Summary calculations is given after the full instruction list below)
* Optional: give the displayed data cleaner formatting

## School Summary

* Create an overview table that summarizes key metrics about each school, including:
  * School Name
  * School Type
  * Total Students
  * Total School Budget
  * Per Student Budget
  * Average Math Score
  * Average Reading Score
  * % Passing Math
  * % Passing Reading
  * % Overall Passing (The percentage of students that passed math **and** reading.)
* Create a dataframe to hold the above results

## Top Performing Schools (By % Overall Passing)

* Sort and display the top five performing schools by % overall passing.

## Bottom Performing Schools (By % Overall Passing)

* Sort and display the five worst-performing schools by % overall passing.

## Math Scores by Grade

* Create a table that lists the average Math Score for students of each grade level (9th, 10th, 11th, 12th) at each school.
* Create a pandas series for each grade. Hint: use a conditional statement.
* Group each series by school
* Combine the series into a dataframe
* Optional: give the displayed data cleaner formatting

## Reading Score by Grade

* Perform the same operations as above for reading scores

## Scores by School Spending

* Create a table that breaks down school performances based on average Spending Ranges (Per Student). Use 4 reasonable bins to group school spending. Include in the table each of the following:
  * Average Math Score
  * Average Reading Score
  * % Passing Math
  * % Passing Reading
  * % Overall Passing (the percentage of students that passed math **and** reading)

## Scores by School Size

* Perform the same operations as above, based on school size.

## Scores by School Type

* Perform the same operations as above, based on school type.
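Not part of the original instructions, but as a hedged starter sketch for the District Summary (assuming the usual column names `budget`, `math_score` and `reading_score` from the two CSVs), the metrics could be computed along these lines:

```
# Possible District Summary sketch; column names are assumptions from the usual dataset.
total_schools = school_data["school_name"].nunique()
total_students = len(school_data_complete)
total_budget = school_data["budget"].sum()

avg_math = school_data_complete["math_score"].mean()
avg_reading = school_data_complete["reading_score"].mean()

pct_math = (school_data_complete["math_score"] >= 70).mean() * 100
pct_reading = (school_data_complete["reading_score"] >= 70).mean() * 100
pct_overall = ((school_data_complete["math_score"] >= 70) &
               (school_data_complete["reading_score"] >= 70)).mean() * 100

district_summary = pd.DataFrame([{
    "Total Schools": total_schools,
    "Total Students": total_students,
    "Total Budget": total_budget,
    "Average Math Score": avg_math,
    "Average Reading Score": avg_reading,
    "% Passing Math": pct_math,
    "% Passing Reading": pct_reading,
    "% Overall Passing": pct_overall,
}])
district_summary
```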
# spotlob Version 0.9.0a --- Python meetup 18.04.2019 Fabian Meyer Fraunhofer ISE # Task: image segmentation * Find features in scientific image data * Precise measurement * Traceable and repeatable evaluation * no training data available * Semi-automatic: tweaking will be needed <img src="presentation-images/usecases.png"> # There is very mature software for this... <br> <table> <tr> <td> <img src="presentation-images/opencv.png" height="200" width="200"> </td> <td> <img src="presentation-images/scikit-image.png" height="200" width="200"> </td> </tr> <tr> <td style="text-align:left; font-size:16pt"> <h3>OpenCV</h3> <ul> <li>C/C++ with Python interface</li> <li>fast</li> <li>also very popular outside Python world</li> <li>not exactly pythonic to use</li> </ul> </td> <td style="text-align:left; font-size:16pt; vertical-align:top"> <h3>Scikit-image</h3> <ul> <li>Python/Cython</li> <li>feature-rich</li> <li>integrates well with other popular python packages</li> <li>great documentation</li> </ul> </td> </tr> </table> # Why spotlob? ## Typical workflow 1. record a new set of images 2. search for an old script to evaluate them 3. doesn't work 4. search for another script to evaluate them 3. doesn't work 4. merge the scripts and make it work by iteratively changing parameters 5. apply script on whole set of images 6. memory full ## Two months later: next set of images start all over # Why spotlob? * finding the right detection quicker * standardize for later use * don't lose flexibility compared to scripting * jupyter compatible * low memory consumption * multithreaded batch processing * combine multiple libraries # Notebook demo ``` %matplotlib widget import sys sys.path.append("../") from spotlob.defaults import * gui = make_gui("presentation-images/demoim.jpg") show_gui(gui) ``` <img src="presentation-images\pipeline.png"> # Detection process steps # Conversion <img src="presentation-images/hsv.png" width="800px" height="600px"> <img src="presentation-images\pipeline.png"> # Detect features - the pipeline 1. Load image as array 2. Extract a single channel 4. Binarization 6. Find connected areas 7. 
Evaluate them # Results as pandas dataframes ``` gui.results() ``` # The pipeline ``` print(gui.pipeline) ``` # The pipeline * the sequence of processes to apply, ProcessSteps * the parameters = everything you need to know to repeat the detection # Store the pipeline ``` gui.pipeline.save("mypipe.pipe") ``` # and restore ``` from spotlob.pipeline import Pipeline mypipe = Pipeline.from_file("mypipe.pipe") print(mypipe) ``` # batch processing ``` from glob import glob all_images = glob("*.JPG") from spotlob.batch import batchprocess all_results = batchprocess("mypipe.pipe", all_images) all_results.groupby("filepath").agg(["mean", "std"]) ``` # Change the pipeline another binarization function ``` from spotlob.process_opencv import BinaryThreshold binarization = BinaryThreshold(threshold = 100) new_pipeline = gui.pipeline.replaced_with(binarization) print(new_pipeline) ``` # Use the new pipeline ``` gui = make_gui("presentation-images/demoim.jpg") gui.pipeline = new_pipeline show_gui(gui) ``` # Extend spotlob: subclassing ``` from spotlob.process_steps import Binarization class MyThreshold(Binarization): def __init__(self, threshold): threshold_parameter = NumericRangeParameter("threshold", threshold, 0, 255) pars = SpotlobParameterSet([threshold_parameter]) super(BinaryThreshold, self).__init__(self.threshold_fn, pars) def my_threshold_function(self, grey_image, threshold): _, im = cv2.threshold(grey_image, threshold, 255, cv2.THRESH_BINARY) return im ``` # Extend spotlob: with a function... ``` import numpy as np def my_threshold(image, lower_threshold, upper_threshold): above_lower = image > lower_threshold below_upper = image < upper_threshold out = np.logical_and(above_lower, below_upper).astype(np.uint8)*255 return out ``` # Extend spotlob: ...using decorators ``` from spotlob.register import PROCESS_REGISTER as register gui = make_gui("presentation-images/demoim.jpg") @use_in(gui) @register.binarization_plugin([("lower_threshold",(0,255,100)), ("upper_threshold",(0,255,200))]) def my_threshold(image, lower_threshold, upper_threshold): above_lower = image > lower_threshold below_upper = image < upper_threshold out = np.logical_and(above_lower, below_upper).astype(np.uint8)*255 return out ``` # `@use_in(gui)` ``` %matplotlib widget show_gui(gui) ``` # jupyter.ise.fhg.de use kernel `Python 3.6.3` # Installation ``` pip install spotlob ``` # Source & docs ``` https://gitlab.cc-asp.fraunhofer.de/fmeyer/spotlob ``` # Thank you ## for your attention # Questions # Suggestions
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load

import numpy as np  # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)

# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory

import os
for dirname, _, filenames in os.walk('/kaggle/input'):
    for filename in filenames:
        print(os.path.join(dirname, filename))

# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session

pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)

train = pd.read_csv('/kaggle/input/costa-rican-household-poverty-prediction/train.csv')
test = pd.read_csv('/kaggle/input/costa-rican-household-poverty-prediction/test.csv')

train.shape, test.shape
train.info()
train.describe()

# False -> 0, True -> 1
train[['Id']].isnull().sum()
train[['hacdor']].isnull().sum()
train.isnull().sum()

train = train.dropna(axis='columns')
train.columns, len(train.columns)
train.isnull().sum()
train.columns
```

## Choose columns: refrig, tamhog, hhsize, public, idhogar, dependency, hogar_total, overcrowding, bedrooms, agesq, Target

```
train_feature = train[['refrig', 'tamhog', 'hhsize', 'public', 'idhogar', 'dependency', 'hogar_total', 'overcrowding', 'bedrooms', 'agesq']]
train_feature.head(3)
train_feature.info()
```

* Continuous: refrig, public, dependency, overcrowding, bedrooms
* Categorical: tamhog, hhsize, hogar_total

```
train_feature['agesq'].value_counts()

train_feature = train[['refrig', 'public', 'dependency', 'overcrowding', 'bedrooms', 'tamhog', 'hhsize', 'hogar_total']]
train_feature.shape
train_feature.head(3)

train_feature['dependency'].value_counts()
train_feature['dependency'] = train_feature['dependency'].replace(to_replace='yes', value=2)
train_feature['dependency'] = train_feature['dependency'].replace(to_replace='no', value=6)
# train_feature['dependency'].describe()

train_feature.isnull().sum()
train_feature.corr()

train_heatmap = pd.concat([train['Target'], train_feature], axis='columns')
# train_heatmap.corr()
train_heatmap.corr()

import seaborn as sns
sns.heatmap(train_heatmap.corr(), annot=True)

train_onehot = pd.get_dummies(train_feature, columns=['tamhog', 'hhsize', 'hogar_total'])
train_onehot.shape

from sklearn import preprocessing
scaler = preprocessing.StandardScaler()
scaler.fit(train_onehot)
X = scaler.transform(train_onehot)
X.shape
```

The train/validation data split should be done here. To be done later!!! (A possible sketch is shown after the next cell.)

```
from lightgbm import LGBMClassifier

lgbm = LGBMClassifier()

Y = train['Target']
Y.shape

lgbm.fit(X, Y)
lgbm.score(X, Y)
```
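The note above defers the train/validation split; as a hedged sketch of that missing step (not part of the original notebook), a stratified hold-out split before fitting could look like this:

```
# Sketch of the deferred split; the 20% hold-out and random_state are arbitrary choices.
from sklearn.model_selection import train_test_split
from lightgbm import LGBMClassifier

X_train, X_valid, y_train, y_valid = train_test_split(
    X, Y, test_size=0.2, random_state=42, stratify=Y)

lgbm = LGBMClassifier()
lgbm.fit(X_train, y_train)

print("train accuracy:     ", lgbm.score(X_train, y_train))
print("validation accuracy:", lgbm.score(X_valid, y_valid))
```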
## Welcome to Notebook 3 of the HELINA Workshop ### This notebook will read the pickle file of Table 6 of the BAIS 2013 report and create a choropleth map. ### First, we need to read in the pickle file ``` import os import pickle import pandas as pd import matplotlib as plt import geopandas as gpd table6 = pd.read_pickle('table6pickle.pkl') table6 # Rename district names to match with the shapefile. table6 = table6.replace({"Central-Bobonong": "Bobonong", "Central-Mahalapye": "Mahalapye", "Central-Serowe": "Serowe", "Central-Tutume": "Tutume", "Kgalagadi North": "Hukunsti", "Kgalagadi South": "Tshabong", "Kweneng East": "Kweneng North", "Kweneng West": "Kweneng South", "Northeast": "Masungu", "Ngamiland North": "Ngamiland East", "Ngamiland South": "Ngamiland West", "Southeast": "South East", "Selebi-Phikwe": "Selibe Phikwe", "Orapa": "Lethlakane", "Southern": "Ngwaketse Central", "Ngwaketse West": "Ngwaketse South"}) table6.head() # save as a csv. table6.to_csv('resources/popdata.csv') # check data types. table6.dtypes # load in a shapefile fp = "resources/gadm36_BWA_2.shp" map_df = gpd.read_file(fp) # check data type so we can see that this is not a normal dataframe, but a GEOdataframe map_df.head() map_df.dtypes #map_df['id'] = map_df.index # rename "NAME_2" columnt to "District" to match with the csv. map_df.rename(columns={'NAME_2':'District'}, inplace=True) map_df.head() map_df.dtypes # save shapefile as GeoJson. map_df.to_file('resources/polygons.json', driver="GeoJSON") map_df.plot() # Check if districts in csv and shapefile match. df_merge = pd.merge(table6, map_df, on='District', how='outer') df_merge import folium import json state_geo = r'resources/polygons.json' table = r'resources/popdata.csv' csvtable = pd.read_csv(table) m = folium.Map(location=[-22.212136, 23.717579], zoom_start=6) folium.Choropleth( geo_data=state_geo, name='choropleth', data=csvtable, columns=['District', 'TotalPop'], key_on='feature.properties.District', fill_color='YlGn', fill_opacity=0.7, line_opacity=0.2, legend_name="HIV Prevalence Total Population", ).add_to(m) m.save(os.path.join('resources', 'map.html')) ``` # HIV Prevalence Total Population (5 disticts have no data and display in dark grey). ``` from IPython.display import IFrame IFrame(src='resources/map.html', width=950, height=600) m = folium.Map(location=[-22.212136, 23.717579], zoom_start=6) folium.Choropleth( geo_data=state_geo, name='choropleth', data=csvtable, columns=['District', 'Male'], key_on='feature.properties.District', fill_color='GnBu', fill_opacity=0.7, line_opacity=0.2, legend_name="HIV Prevalence Male", ).add_to(m) m.save(os.path.join('resources', 'map1.html')) from IPython.display import IFrame IFrame(src='resources/map1.html', width=950, height=600) m = folium.Map(location=[-22.212136, 23.717579], zoom_start=6) folium.Choropleth( geo_data=state_geo, name='choropleth', data=csvtable, columns=['District', 'Female'], key_on='feature.properties.District', fill_color='BuPu', fill_opacity=0.7, line_opacity=0.2, legend_name="HIV Prevalence Female", ).add_to(m) m.save(os.path.join('resources', 'map2.html')) from IPython.display import IFrame IFrame(src='resources/map2.html', width=950, height=600) ``` # There are 26 records in BAIS 2013 table vs 30 records in the Botswana Shapefile so there are 5 District on the map have no data and display as dark grey. ## Congratulations ! You have completed the PLACE lego block ! ![image.png](attachment:image.png) ## Continue to notebook 4 to build the TIME lego block....
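As a follow-up to the district mismatch noted above (26 BAIS records versus 30 shapefile districts), a small hedged sketch using pandas' merge indicator can list exactly which shapefile districts have no data:

```
# Sketch: identify shapefile districts that have no matching BAIS 2013 record.
check = pd.merge(map_df[["District"]], table6, on="District",
                 how="outer", indicator=True)

missing = check.loc[check["_merge"] == "left_only", "District"]
print("Shapefile districts without BAIS 2013 data:")
print(missing.to_list())
```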
# Instructions

Build a whale recognizer deep learning model in PyTorch.

<img src="static/classifier_example.jpg" alt="Drawing" style="width: 300px;"/>

Your model takes an input tensor X and outputs predictions for the whale id.

Read more about PyTorch and model definition in the following resources:

https://towardsdatascience.com/pytorch-tutorial-distilled-95ce8781a89c?gi=ef974c787a5e

http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html

Now the architecture that you need is the following:

<img src="static/classifier_architecture.png" alt="Drawing" style="height: 500px; width: 300px;"/>

Apart from the whale id, this model also uses auxiliary targets. This helps with training and serves the purpose of regularization. The auxiliary output is the callosity pattern. You can read about it on our blog https://blog.deepsense.ai/deep-learning-right-whale-recognition-kaggle/ and investigate `metadata.csv` to look at those target columns.

# Your Solution

Your solution function should be called solution.

CONFIG is a dictionary with all parameters that you want to pass to your solution function.

```
import torch.nn as nn

CONFIG = {'input_size': (3, 256, 256),
          'classes': {'points': 128, 'callosity': 3, 'whale_id': 447}
          }

def solution(input_shape, classes):
    class PyTorchClassifier(nn.Module):
        def __init__(self, input_shape, classes):
            """
            input_shape: tuple representing shape
            classes: dictionary of ints with keys ['points','callosity','whale_id']
            """
            super(PyTorchClassifier, self).__init__()
            self.features = nn.Sequential(
                # Feature extraction part of the neural network.
                # Stack those layers to get architecture defined in the notes above.
            )
            self.flat_features_nr = self._get_flat_features_nr(input_shape, self.features)
            self.whale_id = nn.Sequential(
                # Put your classification layers for whale id.
            )
            self.callosity = nn.Sequential(
                # Put your classification layers for callosity.
            )

        def _get_flat_features_nr(self, in_size, features):
            # Linear layers need to know what is the size of the input.
            # Implement a function that returns the size of flattened features.
            return flattened_features_size

        def forward(self, x):
            # Implement forward pass through the network.
            return [pred_whale_id, pred_callosity]

        def forward_target(self, x):
            # We want to forget about the auxiliary outputs here and only output the target predictions.
            return [pred_whale_id]

    return PyTorchClassifier(input_shape, classes)
```
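Not part of the original task statement, but as a hedged hint for the `_get_flat_features_nr` helper described in the skeleton: one common approach is to push a dummy tensor through the feature extractor and count the elements per sample. The toy feature extractor below is only for demonstration.

```
import torch
import torch.nn as nn

def get_flat_features_nr(in_size, features):
    """Sketch: run a dummy batch through `features` and count elements per sample."""
    dummy = torch.zeros(1, *in_size)      # e.g. (1, 3, 256, 256)
    with torch.no_grad():
        out = features(dummy)
    return int(out.view(1, -1).size(1))

# Demonstration with a toy feature extractor (illustrative only):
toy_features = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3, stride=2), nn.ReLU())
print(get_flat_features_nr((3, 256, 256), toy_features))
```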
# Neuroevolution Neuroevolution is a method of applying evolutionary algorithms to optimise neural networks instead of using backpropagation. Neuroevolution therefore is a non-gradient (or derivation-free) optimisation, which can speed up training as backward passes are not computed. The neural network optimised by neuroevolution can be adapted in terms of parameters, hyperparameters or network architecture. Prominent examples of neuroevolution are NeuroEvolution of Augmenting Topologies (NEAT) and Covariance-Matrix Adaptation Evolution Strategy (CMA-ES). The evolutionary algorithm employed in this notebook is the vanilla genetic algorithm without crossing-over, applying only mutation over neural network parameters (weights). 2nd of a 3-part series on evolutionary computation (Part 1 - [Genetic Algorithm](https://jetnew.io/posts/2020/11/genetic-algorithm/), Part 3 - [Novelty Search](https://jetnew.io/posts/2020/11/novelty-search/)). ``` import numpy as np np.random.seed(0) import matplotlib.pyplot as plt import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torch.autograd import Variable ``` # The Neural Network Model ("Neuro"-evolution) The neural network, or a multi-layer perceptron, is a universal function approximator. The neural network in PyTorch with 2 hidden layers and non-linear activation functions hyperbolic tangent (tanh) and sigmoid is defined. ``` net = nn.Sequential( nn.Linear(in_features=2, out_features=16, bias=True), nn.Tanh(), nn.Linear(in_features=16, out_features=1), nn.Sigmoid() ) class Net(nn.Module): def __init__(self, input_size, output_size, n_hidden=16): super(Net, self).__init__() self.linear1 = nn.Linear(input_size, n_hidden, bias=True) self.tanh1 = nn.Tanh() self.linear2 = nn.Linear(n_hidden, output_size) self.sigmoid = nn.Sigmoid() def forward(self, x): x = self.linear1(x) x = self.tanh1(x) x = self.linear2(x) x = self.sigmoid(x) return x net = Net(2, 1) ``` # The Mutation Function (Neuro-"evolution") As with the genetic algorithm, neuroevolution can be implemented by adding an additive Gaussian noise $\epsilon\sim N(0,\sigma)$ to all neural network weights to introduce variance in the "gene pool" of the population. ``` from torch.nn.utils import parameters_to_vector, vector_to_parameters def get_params(net): return parameters_to_vector(net.parameters()) def mutate_params(net, sigma=0.1): mutated_params = get_params(net) + torch.normal(0, sigma, size=get_params(net).data.shape) vector_to_parameters(mutated_params, net.parameters()) print(f"Before mutation:\n {get_params(net)}\n") mutate_params(net, sigma=0.1) print(f"After mutation:\n {get_params(net)}") ``` # Optimization Problem: Circles Dataset The optimization problem is the Circles dataset from Scikit-Learn, where the neural network model must learn to predict and discriminate between the inner circles (labelled 1) and outer circles (labelled 0). The Circles dataset is the reason that non-linear activation functions in the neural network architecture are needed. $X$ is 2-dimensional while $y$ is 1-dimensional. ``` from sklearn.datasets import make_circles def plot_data(X, y): X = X.detach().numpy() y = y.detach().numpy().flatten() plt.plot(X[y==0,0], X[y==0,1], '.', c='b', label='0') plt.plot(X[y==1,0], X[y==1,1], '.', c='r', label='1') X, y = make_circles(n_samples=100) X = torch.from_numpy(X).float() y = torch.from_numpy(y).float().view(-1, 1) plot_data(X, y) net(X[:5, :]) ``` # Process 1: Generate the initial population of neural networks. 
For illustration purposes, a small population size of 5 and 4 hidden units per neural network layer is used. Inspecting the first 2 neural networks in the population, neural network weights are randomly initialised. The specific initialisation method used for the weights is documented in the [PyTorch documentation](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) for interested readers. ``` population_size = 5 initial_population = np.array([Net(2,1,n_hidden=4) for _ in range(population_size)]) for p in initial_population[:2]: print(get_params(p)) ``` # Process 2: Compute the fitness of the population. The fitness function measures the performance of an individual neural network. Because $y$ is a binary variable of values $\{0,1\}$, the negative binary cross entropy error (BCE) is employed, negated to reflect a higher value as more desirable. ``` def fitness_function(net): return -nn.BCELoss()(net(X), y).detach().numpy().item() def compute_fitness(population): return np.array([fitness_function(individual) for individual in population]) fitness_score = fitness_function(net) fitness_scores = compute_fitness(initial_population) fitness_score, fitness_scores ``` # Process 3: Select the fittest neural networks. Select the top $k$ percentage of neural networks with the highest fitness score to form the parent subpopulation. ``` def solution(individual): return individual(X).view(-1).detach().numpy().round() def get_fittest(population, fitness_scores): return population[fitness_scores.argmax()] def select_fittest(population, fitness_scores, k=0.5): return population[np.argsort(fitness_scores)[-int(len(population) * k):]] parent_subpopulation = select_fittest(initial_population, fitness_scores, k=0.4) compute_fitness(parent_subpopulation) ``` # Process 4: Perform reproduction of the parents to replenish the population. In contrast to common implementations of genetic algorithms, no crossing-over is performed. Parent neural networks are simply uniformly sampled with replacement to create an identical copy as the child. ``` import copy def perform_reproduction(subpopulation): num_children = population_size - len(subpopulation) parents = np.random.choice(subpopulation, num_children) return np.append(subpopulation, [copy.deepcopy(p) for p in parents], axis=0) next_population = perform_reproduction(parent_subpopulation) compute_fitness(next_population) ``` # Process 5: Perform mutation on the population. As explained previously, add a Gaussian noise perturbation to all parameters of the neural network. ``` def get_population_parameter(population): return [get_params(net) for net in population] def perform_mutation(population, sigma=0.1): for individual in population: mutate_params(individual, sigma=0.1) return population print("Before mutation:") print(get_population_parameter(next_population)) perform_mutation(next_population) print("\nAfter mutation:") print(get_population_parameter(next_population)) ``` # The Neuroevolution Algorithm: All 5 Processes Together By combining the 5 processes together, we construct the neuroevolution algorithm and run it to find a neural network solution that models the Circles dataset well. Neuroevolution: 1. Generate the initial population of individuals. 2. Repeat until convergence: 1. Compute fitness of the population. 2. Select the fittest individuals (parent subpopulation). 3. Perform reproduction between parents to produce children. 4. Perform mutation on the population. 3. Select the fittest individual of the population as the solution. 
``` # Neuroevolution hyperparameters population_size = 100 num_generations = 300 top_k = 0.1 mutation_sigma = 0.1 n_hidden = 16 # Process 1: Generate the initial population. population = np.array([Net(2, 1, n_hidden) for _ in range(population_size)]) # Misc: Experimental tracking scores = [] solutions = [] fittests = [] for i in range(num_generations): # Process 2: Compute fitness of the population. fitness_scores = compute_fitness(population) # Process 3: Select the fittest individuals. fittest_subpopulation = select_fittest(population, fitness_scores, k=top_k) # Misc: Experimental tracking fittest = get_fittest(population, fitness_scores) fittests.append(fittest) solutions.append(solution(fittest)) scores.append(fitness_function(fittest)) # Process 4: Perform reproduction between parents. children = perform_reproduction(fittest_subpopulation) # Process 5: Perform mutation on the population. population = perform_mutation(children, sigma=mutation_sigma) # Misc: Experimental tracking plt.plot(np.arange(num_generations), scores) plt.show() ``` # Experiment Result The background colours illustrate the neural network's decision boundary, while the individual data points are the original dataset. Looking at the fittest individual neural network of the final population, the non-linear decision boundary has been correctly and well-learnt by the fittest neural network in the final population. ``` def plot_individual(net): x1 = np.arange(X[:,0].min()*1.2, X[:,0].max()*1.2, 0.01) x2 = np.arange(X[:,1].min()*1.2, X[:,1].max()*1.2, 0.01) X1, X2 = np.meshgrid(x1, x2) Y = np.zeros(X1.shape).flatten() for i, [x1, x2] in enumerate(zip(X1.flatten(), X2.flatten())): Y[i] = np.asarray(net(Variable(torch.Tensor([x1,x2])).float()).data) Y = Y.reshape(X1.shape) plt.xlim(min(X[:,0])*1.2, max(X[:,0])*1.2) plt.ylim(min(X[:,1])*1.2, max(X[:,1])*1.2) plt.contourf(X1, X2, Y, cmap='bwr', alpha=0.8) plt.colorbar() fitness_score = fitness_function(fittest) print(f"Fittest score: {fitness_score}") plot_data(X, y) plot_individual(fittest) ``` By visualising the fittest model at each generation of neuroevolution, notice that the circular decision boundary is eventually found. For an evolutionary strategy based on novelty applied on reinforcement learning, refer to [Part 3](https://jetnew.io/posts/2020/11/novelty-search/) of the Evolutionary Computation series on Novelty Search. For an introductory treatment of the genetic algorithm, refer to [Part 1](https://jetnew.io/posts/2020/11/genetic-algorithm/). ``` %%capture from matplotlib.animation import FuncAnimation fig, ax = plt.subplots() plot_data(X, y) ax.set_xlim(min(X[:,0]*1.2), max(X[:,0])*1.2) ax.set_ylim(min(X[:,1]*1.2), max(X[:,1])*1.2) x1 = np.arange(X[:,0].min()*1.2, X[:,0].max()*1.2, 0.01) x2 = np.arange(X[:,1].min()*1.2, X[:,1].max()*1.2, 0.01) X1, X2 = np.meshgrid(x1, x2) def animate(i): net = fittests[i] Y = net(torch.Tensor(np.stack([X1.flatten(), X2.flatten()], axis=1))).detach().numpy().reshape(X1.shape) ax.contourf(X1, X2, Y, cmap='bwr', alpha=0.8) ax.set_xlabel(f'Gen {i+1}') ani = FuncAnimation(fig, animate, frames=np.arange(0, num_generations), interval=80, repeat=False) ani.save('../images/neuroevolution/neuroevolution.gif') ``` <img src="../images/neuroevolution/neuroevolution.gif">
# Fastpages Notebook Blog Post
> A tutorial of fastpages for Jupyter notebooks.

- toc: true
- badges: true
- comments: true
- categories: [jupyter]
- image: images/chart-preview.png

# About

This notebook is a demonstration of some of the capabilities of [fastpages](https://github.com/fastai/fastpages) with notebooks.

With `fastpages` you can save your Jupyter notebooks into the `_notebooks` folder at the root of your repository, and they will automatically be converted to Jekyll-compliant blog posts!

## Front Matter

The first cell in your Jupyter Notebook or markdown blog post contains front matter. Front matter is metadata that can turn on/off options in your Notebook. It is formatted like this:

```
# "My Title"
> "Awesome summary"

- toc: true
- branch: master
- badges: true
- comments: true
- author: Hamel Husain & Jeremy Howard
- categories: [fastpages, jupyter]
```

- Setting `toc: true` will automatically generate a table of contents
- Setting `badges: true` will automatically include GitHub and Google Colab links to your notebook.
- Setting `comments: true` will enable commenting on your blog post, powered by [utterances](https://github.com/utterance/utterances).

The title and description need to be enclosed in double quotes only if they include special characters such as a colon. More details and options for front matter can be viewed on the [front matter section](https://github.com/fastai/fastpages#front-matter-related-options) of the README.

## Markdown Shortcuts

A `#hide` comment at the top of any code cell will hide **both the input and output** of that cell in your blog post.

A `#hide_input` comment at the top of any code cell will **only hide the input** of that cell.

```
#hide_input
print('The comment #hide_input was used to hide the code that produced this.')
```

Put a `#collapse-hide` flag at the top of any cell if you want to **hide** that cell by default, but give the reader the option to show it:

```
#collapse-hide
import pandas as pd
import altair as alt
```

Put a `#collapse-show` flag at the top of any cell if you want to **show** that cell by default, but give the reader the option to hide it:

```
#collapse-show
cars = 'https://vega.github.io/vega-datasets/data/cars.json'
movies = 'https://vega.github.io/vega-datasets/data/movies.json'
sp500 = 'https://vega.github.io/vega-datasets/data/sp500.csv'
stocks = 'https://vega.github.io/vega-datasets/data/stocks.csv'
flights = 'https://vega.github.io/vega-datasets/data/flights-5k.json'
```

## Interactive Charts With Altair

Charts made with Altair remain interactive. Example charts taken from [this repo](https://github.com/uwdata/visualization-curriculum), specifically [this notebook](https://github.com/uwdata/visualization-curriculum/blob/master/altair_interaction.ipynb).
``` # hide df = pd.read_json(movies) # load movies data genres = df['Major_Genre'].unique() # get unique field values genres = list(filter(lambda d: d is not None, genres)) # filter out None values genres.sort() # sort alphabetically #hide mpaa = ['G', 'PG', 'PG-13', 'R', 'NC-17', 'Not Rated'] ``` ### Example 1: DropDown ``` # single-value selection over [Major_Genre, MPAA_Rating] pairs # use specific hard-wired values as the initial selected values selection = alt.selection_single( name='Select', fields=['Major_Genre', 'MPAA_Rating'], init={'Major_Genre': 'Drama', 'MPAA_Rating': 'R'}, bind={'Major_Genre': alt.binding_select(options=genres), 'MPAA_Rating': alt.binding_radio(options=mpaa)} ) # scatter plot, modify opacity based on selection alt.Chart(movies).mark_circle().add_selection( selection ).encode( x='Rotten_Tomatoes_Rating:Q', y='IMDB_Rating:Q', tooltip='Title:N', opacity=alt.condition(selection, alt.value(0.75), alt.value(0.05)) ) ``` ### Example 2: Tooltips ``` alt.Chart(movies).mark_circle().add_selection( alt.selection_interval(bind='scales', encodings=['x']) ).encode( x='Rotten_Tomatoes_Rating:Q', y=alt.Y('IMDB_Rating:Q', axis=alt.Axis(minExtent=30)), # use min extent to stabilize axis title placement tooltip=['Title:N', 'Release_Date:N', 'IMDB_Rating:Q', 'Rotten_Tomatoes_Rating:Q'] ).properties( width=600, height=400 ) ``` ### Example 3: More Tooltips ``` # select a point for which to provide details-on-demand label = alt.selection_single( encodings=['x'], # limit selection to x-axis value on='mouseover', # select on mouseover events nearest=True, # select data point nearest the cursor empty='none' # empty selection includes no data points ) # define our base line chart of stock prices base = alt.Chart().mark_line().encode( alt.X('date:T'), alt.Y('price:Q', scale=alt.Scale(type='log')), alt.Color('symbol:N') ) alt.layer( base, # base line chart # add a rule mark to serve as a guide line alt.Chart().mark_rule(color='#aaa').encode( x='date:T' ).transform_filter(label), # add circle marks for selected time points, hide unselected points base.mark_circle().encode( opacity=alt.condition(label, alt.value(1), alt.value(0)) ).add_selection(label), # add white stroked text to provide a legible background for labels base.mark_text(align='left', dx=5, dy=-5, stroke='white', strokeWidth=2).encode( text='price:Q' ).transform_filter(label), # add text labels for stock prices base.mark_text(align='left', dx=5, dy=-5).encode( text='price:Q' ).transform_filter(label), data=stocks ).properties( width=700, height=400 ) ``` ## Data Tables You can display tables per the usual way in your blog: ``` movies = 'https://vega.github.io/vega-datasets/data/movies.json' df = pd.read_json(movies) # display table with pandas df[['Title', 'Worldwide_Gross', 'Production_Budget', 'Distributor', 'MPAA_Rating', 'IMDB_Rating', 'Rotten_Tomatoes_Rating']].head() ``` ## Images ### Local Images You can reference local images and they will be copied and rendered on your blog automatically. You can include these with the following markdown syntax: `![](my_icons/fastai_logo.png)` ![](my_icons/fastai_logo.png) ### Remote Images Remote images can be included with the following markdown syntax: `![](https://image.flaticon.com/icons/svg/36/36686.svg)` ![](https://image.flaticon.com/icons/svg/36/36686.svg) ### Animated Gifs Animated Gifs work, too! 
`![](https://upload.wikimedia.org/wikipedia/commons/7/71/ChessPawnSpecialMoves.gif)` ![](https://upload.wikimedia.org/wikipedia/commons/7/71/ChessPawnSpecialMoves.gif) ### Captions You can include captions with markdown images like this: ``` ![](https://www.fast.ai/images/fastai_paper/show_batch.png "Credit: https://www.fast.ai/2020/02/13/fastai-A-Layered-API-for-Deep-Learning/") ``` ![](https://www.fast.ai/images/fastai_paper/show_batch.png "Credit: https://www.fast.ai/2020/02/13/fastai-A-Layered-API-for-Deep-Learning/") # Other Elements ## GitHub Flavored Emojis Typing `I give this post two :+1:!` will render this: I give this post two :+1:! ## Tweetcards Typing `> twitter: https://twitter.com/jakevdp/status/1204765621767901185?s=20` will render this: > twitter: https://twitter.com/jakevdp/status/1204765621767901185?s=20 ## Youtube Videos Typing `> youtube: https://youtu.be/XfoYk_Z5AkI` will render this: > youtube: https://youtu.be/XfoYk_Z5AkI ## Boxes / Callouts Typing `> Warning: There will be no second warning!` will render this: > Warning: There will be no second warning! Typing `> Important: Pay attention! It's important.` will render this: > Important: Pay attention! It's important. Typing `> Tip: This is my tip.` will render this: > Tip: This is my tip. Typing `> Note: Take note of this.` will render this: > Note: Take note of this. Typing `> Note: A doc link to [an example website: fast.ai](https://www.fast.ai/) should also work fine.` will render in the docs: > Note: A doc link to [an example website: fast.ai](https://www.fast.ai/) should also work fine. ## Footnotes You can have footnotes in notebooks, however the syntax is different compared to markdown documents. [This guide provides more detail about this syntax](https://github.com/fastai/fastpages/blob/master/_fastpages_docs/NOTEBOOK_FOOTNOTES.md), which looks like this: ``` {% raw %}For example, here is a footnote {% fn 1 %}. And another {% fn 2 %} {{ 'This is the footnote.' | fndetail: 1 }} {{ 'This is the other footnote. You can even have a [link](www.github.com)!' | fndetail: 2 }}{% endraw %} ``` For example, here is a footnote {% fn 1 %}. And another {% fn 2 %} {{ 'This is the footnote.' | fndetail: 1 }} {{ 'This is the other footnote. You can even have a [link](www.github.com)!' | fndetail: 2 }}
<img src='./IMG/head_b.png' alt='Logo Head' align='center' width='100%'></img>
<br>
<img src='./IMG/header_1.png' alt='Logo UNSPIDER' align='left' width='50%'></img>
<br>
<a href="./04_GEE_JavaScripts_2_Python_NB.ipynb">**<< 04 Converting GEE JavaScripts to Python Notebooks**</a><span style="float:right;"><a href="./06_ÍNDICE_DE_QUEMA_NBR_EN_AUSTRALIA.ipynb"> **06 MODIS NBR BURN INDEX IN AUSTRALIA >>**</a>
<hr>

# 5. Interactive Mapping Using the geemap API in Python 3

For the following exercise you can launch an interactive notebook with **Google Colab**. Note that Google Colab does not currently support `ipyleaflet`, so you must use `import geemap.eefolium` instead of `import geemap`. Alternatively, if you have Anaconda installed, you can run it directly in your Jupyter Notebook or in Jupyter Viewer.

<a href="https://colab.research.google.com/github/Alexanderariza/FOREST-FIRES-ON-GEE-GEEMAP/blob/master/.ipynb_checkpoints/05_MAPEO_INTERACTIVO_USANDO_API_PYTHON_DE_GEE_GEEMAP-checkpoint.ipynb"><img src='./IMG/COLAB.svg' alt='Logo CO' align='left' width='10%'></img>
<br>
<a href="https://nbviewer.jupyter.org/github/Alexanderariza/FOREST-FIRES-ON-GEE-GEEMAP/blob/master/05_MAPEO_INTERACTIVO_USANDO_API_PYTHON_DE_GEE_GEEMAP.ipynb"><img src='./IMG/NT_vie.svg' alt='NT_VW' align='left' width='10%'></img>
<br>

## Install the geemap package:

```
# Install the geemap package
import subprocess

try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
```

* Check whether this notebook is running in `Google Colab`:

```
try:
    import google.colab
    import geemap.eefolium as emap
except:
    import geemap as emap
```

* Authenticate and initialize Earth Engine:

```
import ee

try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
```

## Create an interactive map:

* Create an interactive map of an area of interest (you can use the [BoundingBox](https://boundingbox.klokantech.com/) tool to extract the coordinates):

<img src='./IMG/bbox.png' alt='Logo CO' align='center' width='50%'></img>
<br>

```
Map = emap.Map(center=[-74,11], zoom=4)
```

* Add an Earth Engine dataset (e.g. the SRTM digital elevation model):

```
image = ee.Image('USGS/SRTMGL1_003')
```

* Set the visualization parameters (range and color palette):

```
vis_params = {
  'min': 0,
  'max': 4000,
  'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']}
```

* (e.g.) Print the elevation of the Sierra Nevada de Santa Marta:

```
xy = ee.Geometry.Point([-73.7505149047, 10.7223708103])
elev = image.sample(xy, 30).first().get('elevation').getInfo()
print('Elevation of the Sierra Nevada de Santa Marta (m):', elev)
```

* Add Earth Engine layers to the map:

```
Map.addLayer(image, vis_params, 'SRTM DEM', True, 0.5)
Map.addLayer(xy, {'color': 'red'}, 'Sierra Nevada de Santa Marta')
```

* Center and display the map object:

<img src='./IMG/santa_marta.jpg' alt='Logo CO' align='center' width='50%'></img>
<br>

```
Map.setCenter(-74,11, 9)
Map.addLayerControl()
Map
```

<a href="./04_GEE_JavaScripts_2_Python_NB.ipynb">**<< 04 Converting GEE JavaScripts to Python Notebooks**</a><span style="float:right;"><a href="./06_ÍNDICE_DE_QUEMA_NBR_EN_AUSTRALIA.ipynb"> **06 MODIS NBR BURN INDEX IN AUSTRALIA >>**</a>
<hr>
<img src='./IMG/UNicon.png' alt='Logo down' align='right' width='8%'></img>
<i><p style="text-align:right;">Material organized by [**Alexander Ariza**](https://www.researchgate.net/profile/Alexander_Ariza2) in support of the knowledge portal of the [**UNSPIDER**](https://www.un-spider.org) office (2020).
### Using PyTorch to implement ScatterNet (Python 3)

Currently only the main functionality is supported.

### Import data

```
from scatter_net_core import *

data = 'data/8_layer_tio2'
percent_val = .2
train_X, train_Y , test_X, test_Y, val_X, val_Y , x_mean, x_std = get_data(data,percentTest=percent_val)

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.nn import Linear, Module, MSELoss
from torch.utils.data import TensorDataset, Dataset, DataLoader
import numpy as np
from torch.autograd import Variable

BATCH_SIZE = 200
EPOCHS = 5000
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x_data = torch.from_numpy(train_X)
y_data = torch.from_numpy(train_Y)
deal_dataset = TensorDataset(x_data, y_data)
train_loader = DataLoader(dataset = deal_dataset, batch_size = BATCH_SIZE, shuffle = True, num_workers = 0)

# Network: each fully connected layer is initialised with a small Gaussian.
class MLP(nn.Module):
    def __init__(self):
        super(MLP,self).__init__()
        self.fc1 = nn.Linear(8,500)
        torch.nn.init.normal_(self.fc1.weight, mean=0, std=.1)
        torch.nn.init.normal_(self.fc1.bias, mean=0, std=.1)
        self.fc2 = nn.Linear(500,500)
        torch.nn.init.normal_(self.fc2.weight, mean=0, std=.1)
        torch.nn.init.normal_(self.fc2.bias, mean=0, std=.1)
        self.fc3 = nn.Linear(500,500)
        torch.nn.init.normal_(self.fc3.weight, mean=0, std=.1)
        torch.nn.init.normal_(self.fc3.bias, mean=0, std=.1)
        self.fc4 = nn.Linear(500,500)
        torch.nn.init.normal_(self.fc4.weight, mean=0, std=.1)
        torch.nn.init.normal_(self.fc4.bias, mean=0, std=.1)
        self.fc5 = nn.Linear(500,200)
        torch.nn.init.normal_(self.fc5.weight, mean=0, std=.1)
        torch.nn.init.normal_(self.fc5.bias, mean=0, std=.1)

    def forward(self,din):
        din = din.view(-1,8)
        din = F.dropout(self.fc1(din), p = 0.1)
        dout = F.relu(din)
        dout = F.relu(self.fc2(dout))
        dout = F.relu(self.fc3(dout))
        dout = F.relu(self.fc4(dout))
        dout = self.fc5(dout)
        return dout

model = MLP().to(DEVICE)
criterion = MSELoss()
#optimizer = optim.Adam(model.parameters(), lr = 0.001, weight_decay=.99)
optimizer = optim.RMSprop(model.parameters(), lr = 0.001, alpha=0.99)

def train(model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (inputs, labels) in enumerate(train_loader,0):
        inputs, labels = inputs.to(device), labels.to(device)
        inputs, labels = Variable(inputs),Variable(labels)
        optimizer.zero_grad()
        outputs = model(inputs.float())
        loss = criterion(outputs, labels.float())
        loss.backward()
        optimizer.step()
    if (epoch%10==0):
        print('epoch {}, loss {:1.4f}'.format(epoch,loss.item()))

# Training
for epoch in range(1, EPOCHS + 1):
    train(model, DEVICE, train_loader, optimizer, epoch)

torch.save(model, 'model.pth')
```

### Test

```
import matplotlib.pyplot as plt

N = 2000
test = model(x_data[N].float()).detach().numpy()[0]
plt.plot(range(400,800,2),test,range(400,800,2),train_Y[N])
```
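The held-out split loaded at the top (`test_X`, `test_Y`) is never evaluated. A rough sketch of how that check might look, assuming the test arrays have the same layout as the training data:

```
# Sketch only: evaluate the trained model on the held-out test set.
model.eval()
with torch.no_grad():
    test_inputs = torch.from_numpy(test_X).float().to(DEVICE)
    test_targets = torch.from_numpy(test_Y).float().to(DEVICE)
    test_mse = criterion(model(test_inputs), test_targets).item()
print('Test MSE: {:.4f}'.format(test_mse))
```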
# Lab 5 ## Some remarks 1. Repositories and directories for this course: Most course materials are in [the course repo](https://github.com/berkeley-stat159-f17/stat159-f17). We recomend that you clone the repo into a directory called `stat159-f17-reference`. First, move into your directory for this course (e.g. `stat159`) and clone the repo: ``` git clone https://github.com/berkeley-stat159-f17/stat159-f17.git stat159-f17-reference ``` Then copy the contents of the repo into a new directory called `stat159-f17-work`: ``` cp -r stat159-f17-reference stat159-f17-work ``` Now in the `stat159-f17-work` directory you can make changes on notebooks etc. When we add course materials you can pull in the `stat159-f17-reference` directory, and then copy again. 2. Absolute paths vs relative paths: Repositories are meant to be shared. If you have a path to data that looks like `Users/username/repo/data.csv`, will it be able to run on another computer? How can we change the path so that it can run from inside the `repo` directory? ## File IO This section was slightly modified from the [Python docs](https://docs.python.org/3.6/tutorial/inputoutput.html) ### Opening a file The function `open` returns a file object, and is most commonly used with two arguments: `open(filename, mode)`. ``` f = open('workfile', 'w') ``` The first argument is a string containing the filename. The second argument is another string containing a few characters describing the way in which the file will be used. *mode* can be `'r'` when the file will only be read, `'w'` for only writing (an existing file with the same name will be erased), and `'a'` opens the file for appending; any data written to the file is automatically added to the end. `'r+'` opens the file for both reading and writing. The *mode* argument is optional; `'r'` will be assumed if it's omitted. Normally, files are opened in text mode, that means, you read and write strings from and to the file, which are encoded in a specific encoding. If encoding is not specified, the default is platform dependent (see open). `'b'` appended to the mode opens the file in binary mode: now the data is read and written in the form of bytes objects. This mode should be used for all files that don't contain text. In text mode, the default when reading is to convert platform-specific line endings (`\n` on Unix, `\r\n` on Windows) to just `\n`. When writing in text mode, the default is to convert occurrences of `\n` back to platform-specific line endings. This behind-the-scenes modification to file data is fine for text files, but will corrupt binary data like that in JPEG or EXE files. Be very careful to use binary mode when reading and writing such files. It is good practice to use the with keyword when dealing with file objects. The advantage is that the file is properly closed after its suite finishes, even if an exception is raised at some point. Using with is also much shorter than writing equivalent try-finally blocks: ``` with open('workfile') as f: read_data = f.read() f.closed ``` If you're not using the with keyword, then you should call `f.close()` to close the file and immediately free up any system resources used by it. If you don't explicitly close a file, Python's garbage collector will eventually destroy the object and close the open file for you, but the file may stay open for a while. Another risk is that different Python implementations will do this clean-up at different times. 
After a file object is closed, either by a with statement or by calling `f.close()`, attempts to use the file object will automatically fail. : ``` f.close() f.read() ``` Exercise: Write the equivalent logic of the `with` statement with try-finally blocks ``` # Your code here ``` ### Methods for file objects First, let's create a file object for `example.txt` ``` f = open("lab5-files/example.txt", "r") ``` #### Reading To read a file's contents, call `f.read(size)`, which reads some quantity of data and returns it as a string (in text mode) or bytes object (in binary mode). `size` is an optional numeric argument. When `size` is omitted or negative, **the entire contents of the file will be read and returned**; it's your problem if the file is twice as large as your machine's memory. Otherwise, at most *size* bytes are read and returned. If the end of the file has been reached, `f.read()` will return an empty string (`''`). : ``` print(f.read()) f.read() ``` `f.readline()` reads a single line from the file; a newline character (`\n`) is left at the end of the string, and is only omitted on the last line of the file if the file doesn't end in a newline. This makes the return value unambiguous; if `f.readline()` returns an empty string, the end of the file has been reached, while a blank line is represented by `'\n'`, a string containing only a single newline. : ``` f = open("example.txt", "r") f.readline() f.readline() f.readline() ``` For reading lines from a file, you can loop over the file object. This is memory efficient, fast, and leads to simple code: ``` f = open("example.txt", "r") for line in f: print(line, end='') ``` If you want to read all the lines of a file in a list you can also use `list(f)` or `f.readlines()`. #### Writing Now let's create a new file to write to ``` f = open("our_file.txt", "w") ``` `f.write(string)` writes the contents of *string* to the file, returning the number of characters written. : ``` f.write('This is a test\n') ``` Other types of objects need to be converted -- either to a string (in text mode) or a bytes object (in binary mode) -- before writing them: ``` value = ('the answer', 42) s = str(value) # convert the tuple to string f.write(s) f.close() ``` `f.tell()` returns an integer giving the file object's current position in the file represented as number of bytes from the beginning of the file when in binary mode and an opaque number when in text mode. To change the file object's position, use `f.seek(offset, from_what)`. The position is computed from adding *offset* to a reference point; the reference point is selected by the *from\_what* argument. A *from\_what* value of 0 measures from the beginning of the file, 1 uses the current file position, and 2 uses the end of the file as the reference point. *from\_what* can be omitted and defaults to 0, using the beginning of the file as the reference point. : ``` f = open('our_file.txt', 'rb+') f.write(b'0123456789abcdef') f.seek(5) # Go to the 6th byte in the file f.read(1) f.seek(-3, 2) # Go to the 3rd byte before the end f.read(1) ``` In text files (those opened without a `b` in the mode string), only seeks relative to the beginning of the file are allowed (the exception being seeking to the very file end with `seek(0, 2)`) and the only valid *offset* values are those returned from the `f.tell()`, or zero. Any other *offset* value produces undefined behaviour. #### Other file types Many times you'll encounter data not stored in text files. In particular, data is oftentimes compressed. 
For the second homework you'll need to read a file which has been compressed with [GNU Gzip](https://www.gnu.org/software/gzip/). You can open files like this in Python in much the same way as ordinary text files (a short sketch is included at the end of this lab).

## Calisthenics

### Exception handling

Using a try-catch-finally block, write a function which takes in a list of numbers and returns a list of all the elements up until the first negative number.

```
def no_negatives(n):
    no_neg = []
    for i in n:
        try:
            if i<0:
                raise ValueError
            else:
                no_neg.append(i)
        except ValueError:
            return(no_neg)
    # If there is no negative number, return the whole list.
    return no_neg

print(no_negatives([1,2,3,-4,-5,6]))
```

### Quantiles

Write a function to compute the median of a list of numbers

```
def median(l):
    l = sorted(l)
    length = len(l)
    if length%2 == 1:
        med = l[int(length/2)]
    else:
        med = (l[int(length/2)-1]+l[int(length/2)])/2
    return med

print(median([1,2,3,4,5]))
```

Now write a function to compute the $p^\text{th}$ percentile

```
def percentile(scores, p):
    scores = sorted(scores)
    # Interpolate between the two closest ranks (p given as a fraction, e.g. 0.75).
    k = (len(scores) - 1) * p
    lower, upper = int(k), min(int(k) + 1, len(scores) - 1)
    return scores[lower] + (scores[upper] - scores[lower]) * (k - lower)

percentile([0.34, 0.35, 0.67, 0.96], 0.75)
```

### File I/O

Write a function which creates a file with $n$ lines numbered

```
# Type your code here
```

Write a function which appends to that file an extra $m$ lines

```
# Type your code here
```
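As referenced above, here is a minimal sketch of reading a gzip-compressed text file; `data.txt.gz` is just a placeholder filename.

```
import gzip

# 'rt' opens the compressed file in text mode, so it can be read like a normal file.
with gzip.open('data.txt.gz', 'rt') as f:
    for line in f:
        print(line, end='')
```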
``` from PIL import Image import glob from keras.applications.inception_v3 import InceptionV3 from keras.applications.inception_v3 import preprocess_input, decode_predictions from keras.preprocessing import image import numpy as np import json import sqlite3 import pandas as pd import numpy as np import matplotlib.pyplot as plt # plotting import io from datetime import datetime from PIL import Image, ImageFile ImageFile.LOAD_TRUNCATED_IMAGES = True from tqdm.auto import tqdm # progress bars tqdm.pandas() %%time con = sqlite3.connect('D2019.11.29_S00522_I3171_P.pdb') df = pd.read_sql_query("SELECT * from IMAGES WHERE Focal=0", con) df ``` Time appears to be represented as fractional days since Jan 1, 1900 (43798 days is ~119.9 years) ``` cur = con.cursor() cur.execute("SELECT Val FROM GENERAL WHERE Par = 'Fertilization'") fert = float(cur.fetchone()[0]) fert df["datetime"] = pd.to_datetime(df.Time, origin=datetime(1900,1,1), unit="D").dt.floor("s") df["datetime"].describe() df["Time since fertilisation (minutes)"] = (df.Time - fert) * 1440 df["Time since fertilisation (minutes)"].describe() def images_to_sprite(data): """ Creates the sprite image along with any necessary padding Source : https://github.com/tensorflow/tensorflow/issues/6322 Args: data: NxHxW[x3] tensor containing the images. Returns: data: Properly shaped HxWx3 image with any necessary padding. """ if len(data.shape) == 3: data = np.tile(data[...,np.newaxis], (1,1,1,3)) data = data.astype(np.float32) min = np.min(data.reshape((data.shape[0], -1)), axis=1) data = (data.transpose(1,2,3,0) - min).transpose(3,0,1,2) max = np.max(data.reshape((data.shape[0], -1)), axis=1) data = (data.transpose(1,2,3,0) / max).transpose(3,0,1,2) n = int(np.ceil(np.sqrt(data.shape[0]))) padding = ((0, n ** 2 - data.shape[0]), (0, 0), (0, 0)) + ((0, 0),) * (data.ndim - 3) data = np.pad(data, padding, mode='constant', constant_values=0) # Tile the individual thumbnails into an image. 
data = data.reshape((n, n) + data.shape[1:]).transpose((0, 2, 1, 3) + tuple(range(4, data.ndim + 1))) data = data.reshape((n * data.shape[1], n * data.shape[3]) + data.shape[4:]) data = (data * 255).astype(np.uint8) return data def populate_img_arr(images, target_size=(100,100),should_preprocess= False): """ Get an array of images for a list of image paths Args: target_size: the size of image , in pixels should_preprocess: if the images should be processed (according to InceptionV3 requirements) Returns: arr: An array of the loaded images """ arr = [] for i,img_bytes in enumerate(images): img = Image.open(io.BytesIO(img_bytes)) img = img.convert('RGB') img = img.resize(target_size, Image.NEAREST) x = image.img_to_array(img) arr.append(x) arr = np.array(arr) if should_preprocess: arr = preprocess_input(arr) return arr ``` ## Model Definition ### If you want to use another model, you can change it here ``` model = InceptionV3(include_top=False,pooling='avg') model.summary() sample = df.sample(1000) sample %%time img_arr = populate_img_arr(sample.Image,target_size=(100,100),should_preprocess=True) preds = model.predict(img_arr,batch_size=64) preds.tofile("./oss_data/tensor.bytes") del img_arr,preds %%time raw_imgs = populate_img_arr(sample.Image, target_size=(100,100),should_preprocess=False) sprite = Image.fromarray(images_to_sprite(raw_imgs).astype(np.uint8)) sprite.save('./oss_data/sprites.png') del raw_imgs sample["label"] = "unknown" sample[["Well", "Run", "datetime", "Time since fertilisation (minutes)", "label"]].to_csv('./oss_data/metadata.tsv',sep='\t',index_label="index") ```
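As a quick sanity check (a sketch, assuming the default float32 Keras output), the saved feature file should reshape cleanly into one 2048-dimensional InceptionV3 embedding per sampled image:

```
# Sketch: reload the saved embeddings and confirm their shape.
embeddings = np.fromfile('./oss_data/tensor.bytes', dtype=np.float32).reshape(-1, 2048)
print(embeddings.shape)  # expected: (1000, 2048) for the 1000-image sample
```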
# Embeddings

<img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/logo.png" width=150>

So far, we've represented text in a bagged one-hot encoded form, which is an n-dimensional array where each index corresponds to a token. The value at that index corresponds to the number of times the word appears in the sentence. This method forces us to completely lose the structural information in our inputs.

```python
[0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
```

We've also represented our input in a one-hot encoded form where each token is represented by an n-dimensional array.

```python
[[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 1. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]]
```

This allows us to preserve the structural information, but there are two major disadvantages here. If we have a large vocabulary, the representation length for each token will be massive, leading to large computations. And though we preserve the structure within the text, the actual representation for each token does not preserve any relationship with respect to other tokens.

In this notebook, we're going to learn about embeddings and how they address all the shortcomings of the representation methods we've seen so far.

# Overview

* **Objective:** Represent tokens in text in a way that captures the intrinsic semantic relationships.
* **Advantages:**
  * Low dimensionality while capturing relationships.
  * Interpretable token representations.
* **Disadvantages:** None
* **Miscellaneous:** There are lots of pretrained embeddings to choose from, but you can also train your own from scratch.

# Learning embeddings

The main idea of embeddings is to have fixed-length representations for the tokens in a text, regardless of the number of tokens in the vocabulary. So instead of each token representation having the shape [1 X V], where V is the vocab size, each token now has the shape [1 X D], where D is the embedding size (usually 50, 100, 200 or 300). The numbers in the representation will no longer be 0s and 1s but rather floats that represent that token in a D-dimensional latent space. If the embeddings really did capture the relationships between tokens, then we should be able to inspect this latent space and confirm known relationships (we'll do this soon).

But how do we learn the embeddings in the first place? The intuition behind embeddings is that the definition of a token doesn't depend on the token itself but on its context. There are several different ways of learning from that context:

1. Given the words in the context, predict the target word (CBOW - continuous bag of words).
2. Given the target word, predict the context words (skip-gram).
3. Given a sequence of words, predict the next word (LM - language modeling).

All of these approaches involve creating data to train our model on. Every word in a sentence becomes the target word and the context words are determined by a window. In the image below (skip-gram), the window size is 2. We repeat this for every sentence in our corpus and this results in our training data for the unsupervised task. This is an unsupervised learning technique, since we don't have official labels for the contexts. The idea is that similar target words will appear with similar contexts and we can learn this relationship by repeatedly training our model with (context, target) pairs.

<img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/skipgram.png" width=600>
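To make the windowing concrete, here is a minimal sketch (not part of the original notebook; plain whitespace tokenization and a window size of 2 are assumed) that builds the (target, context) pairs described above:

```python
# Minimal sketch: generate skip-gram style (target, context) training pairs
# from a single tokenized sentence using a fixed-size window.
def skipgram_pairs(sentence, window=2):
    tokens = sentence.split(" ")
    pairs = []
    for i, target in enumerate(tokens):
        # every token within `window` positions of the target is a context word
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

print(skipgram_pairs("the sorting hat made its decision", window=2))
# [('the', 'sorting'), ('the', 'hat'), ('sorting', 'the'), ('sorting', 'hat'), ...]
```

Each pair becomes one training example; CBOW would instead group the context words together and predict the target from them.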
We can learn embeddings using any of these approaches, and some work better than others. You can inspect the learned embeddings, but the best way to choose an approach is to empirically validate the performance on a supervised task. We could learn embeddings by creating our own models in PyTorch, but instead we're going to use a library that specializes in embeddings and topic modeling called [Gensim](https://radimrehurek.com/gensim/).

```
!pip install gensim
import os
from argparse import Namespace
import copy
import gensim
from gensim.models import Word2Vec
import json
import nltk; nltk.download('punkt')
import numpy as np
import pandas as pd
import re
import urllib.request
import warnings
warnings.filterwarnings('ignore')

args = Namespace(
    seed=1234,
    data_file="harrypotter.txt",
    embedding_dim=100,
    window=5,
    min_count=3,
    skip_gram=1,  # 0 = CBOW
    negative_sampling=20,
)

# Upload data from GitHub to notebook's local drive
url = "https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/data/harrypotter.txt"
response = urllib.request.urlopen(url)
html = response.read()
with open(args.data_file, 'wb') as fp:
    fp.write(html)

# Split text into sentences
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
with open(args.data_file, encoding='cp1252') as fp:
    book = fp.read()
sentences = tokenizer.tokenize(book)
print (len(sentences))
print (sentences[11])

# Preprocessing
def preprocess_text(text):
    text = ' '.join(word.lower() for word in text.split(" "))
    text = re.sub(r"([.,!?])", r" \1 ", text)
    text = re.sub(r"[^a-zA-Z.,!?]+", r" ", text)
    text = text.strip()
    return text

# Clean sentences
sentences = [preprocess_text(sentence) for sentence in sentences]
print (sentences[11])

# Process sentences for gensim
sentences = [sentence.split(" ") for sentence in sentences]
print (sentences[11])
```

When we have large vocabularies to learn embeddings for, things can get complex very quickly. Recall that backpropagation with softmax updates both the correct and incorrect class weights. This becomes a massive computation for every backward pass we do, so a workaround is to use [negative sampling](http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/), which only updates the correct class and a few arbitrary incorrect classes (negative_sampling=20). We're able to do this because of the large amount of training data, where we'll see the same word as the target class multiple times.

```
# Super fast because of optimized C code under the hood
model = Word2Vec(sentences=sentences, size=args.embedding_dim,
                 window=args.window, min_count=args.min_count,
                 sg=args.skip_gram, negative=args.negative_sampling)
print (model)

# Vector for each word
model.wv.get_vector("potter")

# Get nearest neighbors (excluding itself)
model.wv.most_similar(positive="scar", topn=5)

# Save the weights
model.wv.save_word2vec_format('model.txt', binary=False)
```
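As a quick sanity check (a minimal sketch, assuming the cell above ran and wrote `model.txt`), the saved weights can be reloaded with gensim's `KeyedVectors` and queried just like the live model:

```python
from gensim.models import KeyedVectors

# Reload the word2vec-format file written by save_word2vec_format above
wv = KeyedVectors.load_word2vec_format('model.txt', binary=False)
print(wv.most_similar(positive="scar", topn=5))  # should mirror the results above
```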
# Pretrained embeddings

We can learn embeddings from scratch using one of the approaches above, but we can also leverage pretrained embeddings that have been trained on millions of documents. Popular ones include Word2Vec (skip-gram) and GloVe (global word-word co-occurrence). We can validate that these embeddings captured meaningful semantic relationships by confirming some known ones.

```
from gensim.scripts.glove2word2vec import glove2word2vec
from gensim.models import KeyedVectors
from io import BytesIO
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from zipfile import ZipFile
from urllib.request import urlopen

# Download and unzip the file (may take ~3 minutes)
resp = urlopen('http://nlp.stanford.edu/data/glove.6B.zip')
zipfile = ZipFile(BytesIO(resp.read()))
zipfile.namelist()

# Write embeddings
embeddings_file = 'glove.6B.{0}d.txt'.format(args.embedding_dim)
zipfile.extract(embeddings_file)

# Save GloVe embeddings to local directory in word2vec format
word2vec_output_file = '{0}.word2vec'.format(embeddings_file)
glove2word2vec(embeddings_file, word2vec_output_file)

# Load embeddings (may take a minute)
glove = KeyedVectors.load_word2vec_format(word2vec_output_file, binary=False)

# (king - man) + woman = ?
glove.most_similar(positive=['woman', 'king'], negative=['man'], topn=5)

# Get nearest neighbors (excluding itself)
glove.wv.most_similar(positive="goku", topn=5)

# Reduce dimensionality for plotting
X = glove[glove.wv.vocab]
pca = PCA(n_components=2)
pca_results = pca.fit_transform(X)

def plot_embeddings(words, embeddings, pca_results):
    for word in words:
        index = embeddings.index2word.index(word)
        plt.scatter(pca_results[index, 0], pca_results[index, 1])
        plt.annotate(word, xy=(pca_results[index, 0], pca_results[index, 1]))
    plt.show()

plot_embeddings(words=["king", "queen", "man", "woman"], embeddings=glove,
                pca_results=pca_results)

# Bias in embeddings
glove.most_similar(positive=['woman', 'doctor'], negative=['man'], topn=5)
```

# Using Embeddings

There are several different ways to use embeddings:

1. Use your own trained embeddings (trained on an unsupervised dataset).
2. Use pretrained embeddings (GloVe, word2vec, etc.).
3. Use randomly initialized embeddings.

Once you have chosen embeddings, you can choose to freeze them or continue to train them using the supervised data (this could lead to overfitting). In this example, we're going to use GloVe embeddings and freeze them during training. Our task will be to predict an article's category given its title.
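Before the full pipeline below, here is a minimal PyTorch sketch (hypothetical sizes; not the notebook's `NewsModel`, which wires this up through its `pretrained_embeddings` and `freeze_embeddings` arguments) of the three options listed above:

```python
import numpy as np
import torch
import torch.nn as nn

vocab_size, embedding_dim = 5000, 100                    # hypothetical sizes
glove_rows = np.random.randn(vocab_size, embedding_dim)  # stand-in for real GloVe vectors

# 1. Randomly initialized embeddings, learned entirely on the supervised task
emb_random = nn.Embedding(vocab_size, embedding_dim)

# 2. Pretrained embeddings, frozen (never updated during training)
emb_frozen = nn.Embedding.from_pretrained(torch.from_numpy(glove_rows).float(), freeze=True)

# 3. Pretrained embeddings, fine-tuned on the supervised data (can overfit)
emb_finetuned = nn.Embedding.from_pretrained(torch.from_numpy(glove_rows).float(), freeze=False)
```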
## Set up ``` # Load PyTorch library !pip3 install torch import os from argparse import Namespace import collections import json import matplotlib.pyplot as plt import numpy as np import pandas as pd import re import torch # Set Numpy and PyTorch seeds def set_seeds(seed, cuda): np.random.seed(seed) torch.manual_seed(seed) if cuda: torch.cuda.manual_seed_all(seed) # Creating directories def create_dirs(dirpath): if not os.path.exists(dirpath): os.makedirs(dirpath) # Arguments args = Namespace( seed=1234, cuda=True, shuffle=True, data_file="news.csv", vectorizer_file="vectorizer.json", model_state_file="model.pth", save_dir="news", train_size=0.7, val_size=0.15, test_size=0.15, cutoff=25, # token must appear at least <cutoff> times to be in SequenceVocabulary num_epochs=5, early_stopping_criteria=5, learning_rate=1e-3, batch_size=64, num_filters=100, embedding_dim=100, hidden_dim=100, dropout_p=0.1, ) # Set seeds set_seeds(seed=args.seed, cuda=args.cuda) # Create save dir create_dirs(args.save_dir) # Expand filepaths args.vectorizer_file = os.path.join(args.save_dir, args.vectorizer_file) args.model_state_file = os.path.join(args.save_dir, args.model_state_file) # Check CUDA if not torch.cuda.is_available(): args.cuda = False args.device = torch.device("cuda" if args.cuda else "cpu") print("Using CUDA: {}".format(args.cuda)) ``` ## Data ``` import re import urllib # Upload data from GitHub to notebook's local drive url = "https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/data/news.csv" response = urllib.request.urlopen(url) html = response.read() with open(args.data_file, 'wb') as fp: fp.write(html) # Raw data df = pd.read_csv(args.data_file, header=0) df.head() # Split by category by_category = collections.defaultdict(list) for _, row in df.iterrows(): by_category[row.category].append(row.to_dict()) for category in by_category: print ("{0}: {1}".format(category, len(by_category[category]))) # Create split data final_list = [] for _, item_list in sorted(by_category.items()): if args.shuffle: np.random.shuffle(item_list) n = len(item_list) n_train = int(args.train_size*n) n_val = int(args.val_size*n) n_test = int(args.test_size*n) # Give data point a split attribute for item in item_list[:n_train]: item['split'] = 'train' for item in item_list[n_train:n_train+n_val]: item['split'] = 'val' for item in item_list[n_train+n_val:]: item['split'] = 'test' # Add to final list final_list.extend(item_list) # df with split datasets split_df = pd.DataFrame(final_list) split_df["split"].value_counts() # Preprocessing def preprocess_text(text): text = ' '.join(word.lower() for word in text.split(" ")) text = re.sub(r"([.,!?])", r" \1 ", text) text = re.sub(r"[^a-zA-Z.,!?]+", r" ", text) return text split_df.title = split_df.title.apply(preprocess_text) split_df.head() ``` ## Vocabulary ``` class Vocabulary(object): def __init__(self, token_to_idx=None): # Token to index if token_to_idx is None: token_to_idx = {} self.token_to_idx = token_to_idx # Index to token self.idx_to_token = {idx: token \ for token, idx in self.token_to_idx.items()} def to_serializable(self): return {'token_to_idx': self.token_to_idx} @classmethod def from_serializable(cls, contents): return cls(**contents) def add_token(self, token): if token in self.token_to_idx: index = self.token_to_idx[token] else: index = len(self.token_to_idx) self.token_to_idx[token] = index self.idx_to_token[index] = token return index def add_tokens(self, tokens): return [self.add_token[token] for token in tokens] def lookup_token(self, 
token): return self.token_to_idx[token] def lookup_index(self, index): if index not in self.idx_to_token: raise KeyError("the index (%d) is not in the Vocabulary" % index) return self.idx_to_token[index] def __str__(self): return "<Vocabulary(size=%d)>" % len(self) def __len__(self): return len(self.token_to_idx) # Vocabulary instance category_vocab = Vocabulary() for index, row in df.iterrows(): category_vocab.add_token(row.category) print (category_vocab) # __str__ print (len(category_vocab)) # __len__ index = category_vocab.lookup_token("Business") print (index) print (category_vocab.lookup_index(index)) ``` ## Sequence vocabulary Next, we're going to create our Vocabulary classes for the article's title, which is a sequence of tokens. ``` from collections import Counter import string class SequenceVocabulary(Vocabulary): def __init__(self, token_to_idx=None, unk_token="<UNK>", mask_token="<MASK>", begin_seq_token="<BEGIN>", end_seq_token="<END>"): super(SequenceVocabulary, self).__init__(token_to_idx) self.mask_token = mask_token self.unk_token = unk_token self.begin_seq_token = begin_seq_token self.end_seq_token = end_seq_token self.mask_index = self.add_token(self.mask_token) self.unk_index = self.add_token(self.unk_token) self.begin_seq_index = self.add_token(self.begin_seq_token) self.end_seq_index = self.add_token(self.end_seq_token) # Index to token self.idx_to_token = {idx: token \ for token, idx in self.token_to_idx.items()} def to_serializable(self): contents = super(SequenceVocabulary, self).to_serializable() contents.update({'unk_token': self.unk_token, 'mask_token': self.mask_token, 'begin_seq_token': self.begin_seq_token, 'end_seq_token': self.end_seq_token}) return contents def lookup_token(self, token): return self.token_to_idx.get(token, self.unk_index) def lookup_index(self, index): if index not in self.idx_to_token: raise KeyError("the index (%d) is not in the SequenceVocabulary" % index) return self.idx_to_token[index] def __str__(self): return "<SequenceVocabulary(size=%d)>" % len(self.token_to_idx) def __len__(self): return len(self.token_to_idx) # Get word counts word_counts = Counter() for title in split_df.title: for token in title.split(" "): if token not in string.punctuation: word_counts[token] += 1 # Create SequenceVocabulary instance title_vocab = SequenceVocabulary() for word, word_count in word_counts.items(): if word_count >= args.cutoff: title_vocab.add_token(word) print (title_vocab) # __str__ print (len(title_vocab)) # __len__ index = title_vocab.lookup_token("general") print (index) print (title_vocab.lookup_index(index)) ``` ## Vectorizer ``` class NewsVectorizer(object): def __init__(self, title_vocab, category_vocab): self.title_vocab = title_vocab self.category_vocab = category_vocab def vectorize(self, title): indices = [self.title_vocab.lookup_token(token) for token in title.split(" ")] indices = [self.title_vocab.begin_seq_index] + indices + \ [self.title_vocab.end_seq_index] # Create vector title_length = len(indices) vector = np.zeros(title_length, dtype=np.int64) vector[:len(indices)] = indices return vector def unvectorize(self, vector): tokens = [self.title_vocab.lookup_index(index) for index in vector] title = " ".join(token for token in tokens) return title @classmethod def from_dataframe(cls, df, cutoff): # Create class vocab category_vocab = Vocabulary() for category in sorted(set(df.category)): category_vocab.add_token(category) # Get word counts word_counts = Counter() for title in df.title: for token in title.split(" "): 
word_counts[token] += 1 # Create title vocab title_vocab = SequenceVocabulary() for word, word_count in word_counts.items(): if word_count >= cutoff: title_vocab.add_token(word) return cls(title_vocab, category_vocab) @classmethod def from_serializable(cls, contents): title_vocab = SequenceVocabulary.from_serializable(contents['title_vocab']) category_vocab = Vocabulary.from_serializable(contents['category_vocab']) return cls(title_vocab=title_vocab, category_vocab=category_vocab) def to_serializable(self): return {'title_vocab': self.title_vocab.to_serializable(), 'category_vocab': self.category_vocab.to_serializable()} # Vectorizer instance vectorizer = NewsVectorizer.from_dataframe(split_df, cutoff=args.cutoff) print (vectorizer.title_vocab) print (vectorizer.category_vocab) vectorized_title = vectorizer.vectorize(preprocess_text( "Roger Federer wins the Wimbledon tennis tournament.")) print (np.shape(vectorized_title)) print (vectorized_title) print (vectorizer.unvectorize(vectorized_title)) ``` ## Dataset ``` from torch.utils.data import Dataset, DataLoader class NewsDataset(Dataset): def __init__(self, df, vectorizer): self.df = df self.vectorizer = vectorizer # Max title length get_length = lambda title: len(title.split(" ")) self.max_seq_length = max(map(get_length, df.title)) + 2 # (<BEGIN> + <END>) # Data splits self.train_df = self.df[self.df.split=='train'] self.train_size = len(self.train_df) self.val_df = self.df[self.df.split=='val'] self.val_size = len(self.val_df) self.test_df = self.df[self.df.split=='test'] self.test_size = len(self.test_df) self.lookup_dict = {'train': (self.train_df, self.train_size), 'val': (self.val_df, self.val_size), 'test': (self.test_df, self.test_size)} self.set_split('train') # Class weights (for imbalances) class_counts = df.category.value_counts().to_dict() def sort_key(item): return self.vectorizer.category_vocab.lookup_token(item[0]) sorted_counts = sorted(class_counts.items(), key=sort_key) frequencies = [count for _, count in sorted_counts] self.class_weights = 1.0 / torch.tensor(frequencies, dtype=torch.float32) @classmethod def load_dataset_and_make_vectorizer(cls, df, cutoff): train_df = df[df.split=='train'] return cls(df, NewsVectorizer.from_dataframe(train_df, cutoff)) @classmethod def load_dataset_and_load_vectorizer(cls, df, vectorizer_filepath): vectorizer = cls.load_vectorizer_only(vectorizer_filepath) return cls(df, vectorizer) def load_vectorizer_only(vectorizer_filepath): with open(vectorizer_filepath) as fp: return NewsVectorizer.from_serializable(json.load(fp)) def save_vectorizer(self, vectorizer_filepath): with open(vectorizer_filepath, "w") as fp: json.dump(self.vectorizer.to_serializable(), fp) def set_split(self, split="train"): self.target_split = split self.target_df, self.target_size = self.lookup_dict[split] def __str__(self): return "<Dataset(split={0}, size={1})".format( self.target_split, self.target_size) def __len__(self): return self.target_size def __getitem__(self, index): row = self.target_df.iloc[index] title_vector = self.vectorizer.vectorize(row.title) category_index = self.vectorizer.category_vocab.lookup_token(row.category) return {'title': title_vector, 'category': category_index} def get_num_batches(self, batch_size): return len(self) // batch_size def generate_batches(self, batch_size, collate_fn, shuffle=True, drop_last=False, device="cpu"): dataloader = DataLoader(dataset=self, batch_size=batch_size, collate_fn=collate_fn, shuffle=shuffle, drop_last=drop_last) for data_dict in dataloader: 
out_data_dict = {} for name, tensor in data_dict.items(): out_data_dict[name] = data_dict[name].to(device) yield out_data_dict # Dataset instance dataset = NewsDataset.load_dataset_and_make_vectorizer(df=split_df, cutoff=args.cutoff) print (dataset) # __str__ title_vector = dataset[5]['title'] # __getitem__ print (title_vector) print (dataset.vectorizer.unvectorize(title_vector)) print (dataset.class_weights) ``` ## Model input → embedding → conv → FC We will be using 1d conv operations ([nn.Conv1D](https://pytorch.org/docs/stable/nn.html#torch.nn.Conv1d)) even though our inputs are words because we are not representing them at a character level. The inputs are of shape $\in \mathbb{R}^{NXSXE}$ * where: * N = batchsize * S = max sentence length * E = embedding dim at a word level ``` import torch.nn as nn import torch.nn.functional as F class NewsModel(nn.Module): def __init__(self, embedding_dim, num_embeddings, num_input_channels, num_channels, hidden_dim, num_classes, dropout_p, pretrained_embeddings=None, freeze_embeddings=False, padding_idx=0): super(NewsModel, self).__init__() if pretrained_embeddings is None: self.embeddings = nn.Embedding(embedding_dim=embedding_dim, num_embeddings=num_embeddings, padding_idx=padding_idx) else: pretrained_embeddings = torch.from_numpy(pretrained_embeddings).float() self.embeddings = nn.Embedding(embedding_dim=embedding_dim, num_embeddings=num_embeddings, padding_idx=padding_idx, _weight=pretrained_embeddings) # Conv weights self.conv = nn.ModuleList([nn.Conv1d(num_input_channels, num_channels, kernel_size=f) for f in [2,3,4]]) # FC weights self.dropout = nn.Dropout(dropout_p) self.fc1 = nn.Linear(num_channels*3, hidden_dim) self.fc2 = nn.Linear(hidden_dim, num_classes) if freeze_embeddings: self.embeddings.weight.requires_grad = False def forward(self, x_in, channel_first=False, apply_softmax=False): # Embed x_in = self.embeddings(x_in) # Rearrange input so num_channels is in dim 1 (N, C, L) if not channel_first: x_in = x_in.transpose(1, 2) # Conv outputs z1 = self.conv[0](x_in) z1 = F.max_pool1d(z1, z1.size(2)).squeeze(2) z2 = self.conv[1](x_in) z2 = F.max_pool1d(z2, z2.size(2)).squeeze(2) z3 = self.conv[2](x_in) z3 = F.max_pool1d(z3, z3.size(2)).squeeze(2) # Concat conv outputs z = torch.cat([z1, z2, z3], 1) # FC layers z = self.dropout(z) z = self.fc1(z) y_pred = self.fc2(z) if apply_softmax: y_pred = F.softmax(y_pred, dim=1) return y_pred ``` ## Training ``` import torch.optim as optim class Trainer(object): def __init__(self, dataset, model, model_state_file, save_dir, device, shuffle, num_epochs, batch_size, learning_rate, early_stopping_criteria): self.dataset = dataset self.class_weights = dataset.class_weights.to(device) self.model = model.to(device) self.save_dir = save_dir self.device = device self.shuffle = shuffle self.num_epochs = num_epochs self.batch_size = batch_size self.loss_func = nn.CrossEntropyLoss(self.class_weights) self.optimizer = optim.Adam(self.model.parameters(), lr=learning_rate) self.scheduler = optim.lr_scheduler.ReduceLROnPlateau( optimizer=self.optimizer, mode='min', factor=0.5, patience=1) self.train_state = { 'done_training': False, 'stop_early': False, 'early_stopping_step': 0, 'early_stopping_best_val': 1e8, 'early_stopping_criteria': early_stopping_criteria, 'learning_rate': learning_rate, 'epoch_index': 0, 'train_loss': [], 'train_acc': [], 'val_loss': [], 'val_acc': [], 'test_loss': -1, 'test_acc': -1, 'model_filename': model_state_file} def update_train_state(self): # Verbose print ("[EPOCH]: {0} | [LR]: 
{1} | [TRAIN LOSS]: {2:.2f} | [TRAIN ACC]: {3:.1f}% | [VAL LOSS]: {4:.2f} | [VAL ACC]: {5:.1f}%".format( self.train_state['epoch_index'], self.train_state['learning_rate'], self.train_state['train_loss'][-1], self.train_state['train_acc'][-1], self.train_state['val_loss'][-1], self.train_state['val_acc'][-1])) # Save one model at least if self.train_state['epoch_index'] == 0: torch.save(self.model.state_dict(), self.train_state['model_filename']) self.train_state['stop_early'] = False # Save model if performance improved elif self.train_state['epoch_index'] >= 1: loss_tm1, loss_t = self.train_state['val_loss'][-2:] # If loss worsened if loss_t >= self.train_state['early_stopping_best_val']: # Update step self.train_state['early_stopping_step'] += 1 # Loss decreased else: # Save the best model if loss_t < self.train_state['early_stopping_best_val']: torch.save(self.model.state_dict(), self.train_state['model_filename']) # Reset early stopping step self.train_state['early_stopping_step'] = 0 # Stop early ? self.train_state['stop_early'] = self.train_state['early_stopping_step'] \ >= self.train_state['early_stopping_criteria'] return self.train_state def compute_accuracy(self, y_pred, y_target): _, y_pred_indices = y_pred.max(dim=1) n_correct = torch.eq(y_pred_indices, y_target).sum().item() return n_correct / len(y_pred_indices) * 100 def pad_seq(self, seq, length): vector = np.zeros(length, dtype=np.int64) vector[:len(seq)] = seq vector[len(seq):] = self.dataset.vectorizer.title_vocab.mask_index return vector def collate_fn(self, batch): # Make a deep copy batch_copy = copy.deepcopy(batch) processed_batch = {"title": [], "category": []} # Get max sequence length max_seq_len = max([len(sample["title"]) for sample in batch_copy]) # Pad for i, sample in enumerate(batch_copy): seq = sample["title"] category = sample["category"] padded_seq = self.pad_seq(seq, max_seq_len) processed_batch["title"].append(padded_seq) processed_batch["category"].append(category) # Convert to appropriate tensor types processed_batch["title"] = torch.LongTensor( processed_batch["title"]) processed_batch["category"] = torch.LongTensor( processed_batch["category"]) return processed_batch def run_train_loop(self): for epoch_index in range(self.num_epochs): self.train_state['epoch_index'] = epoch_index # Iterate over train dataset # initialize batch generator, set loss and acc to 0, set train mode on self.dataset.set_split('train') batch_generator = self.dataset.generate_batches( batch_size=self.batch_size, collate_fn=self.collate_fn, shuffle=self.shuffle, device=self.device) running_loss = 0.0 running_acc = 0.0 self.model.train() for batch_index, batch_dict in enumerate(batch_generator): # zero the gradients self.optimizer.zero_grad() # compute the output y_pred = self.model(batch_dict['title']) # compute the loss loss = self.loss_func(y_pred, batch_dict['category']) loss_t = loss.item() running_loss += (loss_t - running_loss) / (batch_index + 1) # compute gradients using loss loss.backward() # use optimizer to take a gradient step self.optimizer.step() # compute the accuracy acc_t = self.compute_accuracy(y_pred, batch_dict['category']) running_acc += (acc_t - running_acc) / (batch_index + 1) self.train_state['train_loss'].append(running_loss) self.train_state['train_acc'].append(running_acc) # Iterate over val dataset # initialize batch generator, set loss and acc to 0; set eval mode on self.dataset.set_split('val') batch_generator = self.dataset.generate_batches( batch_size=self.batch_size, collate_fn=self.collate_fn, 
shuffle=self.shuffle, device=self.device) running_loss = 0. running_acc = 0. self.model.eval() for batch_index, batch_dict in enumerate(batch_generator): # compute the output y_pred = self.model(batch_dict['title']) # compute the loss loss = self.loss_func(y_pred, batch_dict['category']) loss_t = loss.to("cpu").item() running_loss += (loss_t - running_loss) / (batch_index + 1) # compute the accuracy acc_t = self.compute_accuracy(y_pred, batch_dict['category']) running_acc += (acc_t - running_acc) / (batch_index + 1) self.train_state['val_loss'].append(running_loss) self.train_state['val_acc'].append(running_acc) self.train_state = self.update_train_state() self.scheduler.step(self.train_state['val_loss'][-1]) if self.train_state['stop_early']: break def run_test_loop(self): # initialize batch generator, set loss and acc to 0; set eval mode on self.dataset.set_split('test') batch_generator = self.dataset.generate_batches( batch_size=self.batch_size, collate_fn=self.collate_fn, shuffle=self.shuffle, device=self.device) running_loss = 0.0 running_acc = 0.0 self.model.eval() for batch_index, batch_dict in enumerate(batch_generator): # compute the output y_pred = self.model(batch_dict['title']) # compute the loss loss = self.loss_func(y_pred, batch_dict['category']) loss_t = loss.item() running_loss += (loss_t - running_loss) / (batch_index + 1) # compute the accuracy acc_t = self.compute_accuracy(y_pred, batch_dict['category']) running_acc += (acc_t - running_acc) / (batch_index + 1) self.train_state['test_loss'] = running_loss self.train_state['test_acc'] = running_acc def plot_performance(self): # Figure size plt.figure(figsize=(15,5)) # Plot Loss plt.subplot(1, 2, 1) plt.title("Loss") plt.plot(trainer.train_state["train_loss"], label="train") plt.plot(trainer.train_state["val_loss"], label="val") plt.legend(loc='upper right') # Plot Accuracy plt.subplot(1, 2, 2) plt.title("Accuracy") plt.plot(trainer.train_state["train_acc"], label="train") plt.plot(trainer.train_state["val_acc"], label="val") plt.legend(loc='lower right') # Save figure plt.savefig(os.path.join(self.save_dir, "performance.png")) # Show plots plt.show() def save_train_state(self): self.train_state["done_training"] = True with open(os.path.join(self.save_dir, "train_state.json"), "w") as fp: json.dump(self.train_state, fp) # Initialization dataset = NewsDataset.load_dataset_and_make_vectorizer(df=split_df, cutoff=args.cutoff) dataset.save_vectorizer(args.vectorizer_file) vectorizer = dataset.vectorizer model = NewsModel(embedding_dim=args.embedding_dim, num_embeddings=len(vectorizer.title_vocab), num_input_channels=args.embedding_dim, num_channels=args.num_filters, hidden_dim=args.hidden_dim, num_classes=len(vectorizer.category_vocab), dropout_p=args.dropout_p, pretrained_embeddings=None, padding_idx=vectorizer.title_vocab.mask_index) print (model.named_modules) # Train trainer = Trainer(dataset=dataset, model=model, model_state_file=args.model_state_file, save_dir=args.save_dir, device=args.device, shuffle=args.shuffle, num_epochs=args.num_epochs, batch_size=args.batch_size, learning_rate=args.learning_rate, early_stopping_criteria=args.early_stopping_criteria) trainer.run_train_loop() # Plot performance trainer.plot_performance() # Test performance trainer.run_test_loop() print("Test loss: {0:.2f}".format(trainer.train_state['test_loss'])) print("Test Accuracy: {0:.1f}%".format(trainer.train_state['test_acc'])) # Save all results trainer.save_train_state() ``` ## Using GloVe embeddings We just used some randomly initialized 
embeddings and we were able to receive decent performance. Keep in mind that this may not always be the case and we may overfit on other datasets with this approach. We're now going to use pretrained GloVe embeddings to initialize our embeddings. We will train our model on the supervised task and assess the performance by first freezing these embeddings (so they don't change during training) and then not freezing them and allowing them to be trained. ```python pretrained_embeddings = torch.from_numpy(pretrained_embeddings).float() self.embeddings = nn.Embedding(embedding_dim=embedding_dim, num_embeddings=num_embeddings, padding_idx=padding_idx, _weight=pretrained_embeddings) ``` ``` def load_glove_embeddings(embeddings_file): word_to_idx = {} embeddings = [] with open(embeddings_file, "r") as fp: for index, line in enumerate(fp): line = line.split(" ") word = line[0] word_to_idx[word] = index embedding_i = np.array([float(val) for val in line[1:]]) embeddings.append(embedding_i) return word_to_idx, np.stack(embeddings) def make_embeddings_matrix(words): word_to_idx, glove_embeddings = load_glove_embeddings(embeddings_file) embedding_dim = glove_embeddings.shape[1] embeddings = np.zeros((len(words), embedding_dim)) for i, word in enumerate(words): if word in word_to_idx: embeddings[i, :] = glove_embeddings[word_to_idx[word]] else: embedding_i = torch.zeros(1, embedding_dim) nn.init.xavier_uniform_(embedding_i) embeddings[i, :] = embedding_i return embeddings args.use_glove_embeddings = True # Initialization dataset = NewsDataset.load_dataset_and_make_vectorizer(df=split_df, cutoff=args.cutoff) dataset.save_vectorizer(args.vectorizer_file) vectorizer = dataset.vectorizer # Create embeddings embeddings = None if args.use_glove_embeddings: embeddings_file = 'glove.6B.{0}d.txt'.format(args.embedding_dim) words = vectorizer.title_vocab.token_to_idx.keys() embeddings = make_embeddings_matrix(words=words) print ("<Embeddings(words={0}, dim={1})>".format( np.shape(embeddings)[0], np.shape(embeddings)[1])) # Initialize model model = NewsModel(embedding_dim=args.embedding_dim, num_embeddings=len(vectorizer.title_vocab), num_input_channels=args.embedding_dim, num_channels=args.num_filters, hidden_dim=args.hidden_dim, num_classes=len(vectorizer.category_vocab), dropout_p=args.dropout_p, pretrained_embeddings=embeddings, padding_idx=vectorizer.title_vocab.mask_index) print (model.named_modules) # Train trainer = Trainer(dataset=dataset, model=model, model_state_file=args.model_state_file, save_dir=args.save_dir, device=args.device, shuffle=args.shuffle, num_epochs=args.num_epochs, batch_size=args.batch_size, learning_rate=args.learning_rate, early_stopping_criteria=args.early_stopping_criteria) trainer.run_train_loop() # Plot performance trainer.plot_performance() # Test performance trainer.run_test_loop() print("Test loss: {0:.2f}".format(trainer.train_state['test_loss'])) print("Test Accuracy: {0:.1f}%".format(trainer.train_state['test_acc'])) # Save all results trainer.save_train_state() ``` ## Freeze embeddings Now we're going to freeze our GloVe embeddings and train on the supervised task. 
The only modification in the model is to turn on `freeze_embeddings`:

```python
if freeze_embeddings:
    self.embeddings.weight.requires_grad = False
```

```
args.freeze_embeddings = True

# Initialize model
model = NewsModel(embedding_dim=args.embedding_dim,
                  num_embeddings=len(vectorizer.title_vocab),
                  num_input_channels=args.embedding_dim,
                  num_channels=args.num_filters,
                  hidden_dim=args.hidden_dim,
                  num_classes=len(vectorizer.category_vocab),
                  dropout_p=args.dropout_p,
                  pretrained_embeddings=embeddings,
                  freeze_embeddings=args.freeze_embeddings,
                  padding_idx=vectorizer.title_vocab.mask_index)
print (model.named_modules)

# Train
trainer = Trainer(dataset=dataset, model=model,
                  model_state_file=args.model_state_file,
                  save_dir=args.save_dir, device=args.device,
                  shuffle=args.shuffle, num_epochs=args.num_epochs,
                  batch_size=args.batch_size, learning_rate=args.learning_rate,
                  early_stopping_criteria=args.early_stopping_criteria)
trainer.run_train_loop()

# Plot performance
trainer.plot_performance()

# Test performance
trainer.run_test_loop()
print("Test loss: {0:.2f}".format(trainer.train_state['test_loss']))
print("Test Accuracy: {0:.1f}%".format(trainer.train_state['test_acc']))

# Save all results
trainer.save_train_state()
```

So you can see that using GloVe embeddings without freezing them resulted in the best results on the test dataset. Different tasks will yield different results, so you need to choose whether or not to freeze your embeddings based on empirical results.

# TODO

* When to use skip-gram vs. CBOW (see the sketch below).
  * According to Mikolov: skip-gram works well with a small amount of training data and represents even rare words or phrases well; CBOW is several times faster to train than skip-gram, with slightly better accuracy for frequent words.
* Loading word2vec
* Interpretable conv filters, since we apply them on words here
* [Contextualized word embeddings](https://arxiv.org/abs/1607.00578)
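For the first TODO item, a minimal sketch (assuming the tokenized `sentences` list from the Gensim section above is still in memory) showing that the CBOW/skip-gram choice is just the `sg` flag in gensim:

```python
from gensim.models import Word2Vec

shared = dict(size=100, window=5, min_count=3, negative=20)  # mirrors the earlier args
cbow = Word2Vec(sentences=sentences, sg=0, **shared)         # CBOW: faster, favors frequent words
skip_gram = Word2Vec(sentences=sentences, sg=1, **shared)    # skip-gram: better for small data / rare words
```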
{0:.2f}".format(trainer.train_state['test_loss'])) print("Test Accuracy: {0:.1f}%".format(trainer.train_state['test_acc'])) # Save all results trainer.save_train_state()
``` !pip install -qq transformers !pip install -qq sentencepiece !pip install -qq datasets import transformers import datasets from transformers import AutoTokenizer, AutoModelForSequenceClassification, AdamW, get_linear_schedule_with_warmup,Trainer, TrainingArguments from transformers.file_utils import is_tf_available, is_torch_available, is_torch_tpu_available import torch import numpy as np import pandas as pd import seaborn as sns from pylab import rcParams import matplotlib.pyplot as plt from matplotlib import rc from sklearn.metrics import confusion_matrix, classification_report from collections import defaultdict import random from textwrap import wrap from datetime import datetime from datasets import load_from_disk from datasets import Dataset from sklearn.metrics import accuracy_score,classification_report, confusion_matrix from sklearn.metrics import precision_recall_fscore_support from torch import nn # the model we gonna train, base uncased BERT # check text classification models here: https://huggingface.co/models?filter=text-classification MODEL_NAME = "distilbert-base-uncased" # max sequence length for each document/sentence sample MAX_LENGTH = 64 BATCH_SIZE = 16 EPOCHS = 3 LEARNING_RATE= 6.58e-5 WEIGHT_DECAY = 0.289 WARMUP_STEPS = 464 RANDOM_SEED=22 LEARNING_RATE_DECAY_MULTIPLIER = 0.95 REINIT_LAYERS = 2 QA_OUTPUT_PATH= "/content/drive/MyDrive/Dissertation/disbert_hate_ml/results/best_model_squad" device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") def set_seed(seed): """Set all seeds to make results reproducible (deterministic mode). When seed is None, disables deterministic mode. :param seed: an integer to your choosing """ if seed is not None: torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False np.random.seed(seed) random.seed(seed) def compute_metrics(pred): labels = pred.label_ids preds = pred.predictions.argmax(-1) # calculate accuracy using sklearn's function acc = accuracy_score(labels, preds) precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='macro') acc = accuracy_score(labels, preds) confusion_matrix = classification_report(labels, preds, digits=4,output_dict=True) return { 'accuracy': acc, 'f1': f1, 'precision': precision, 'recall': recall, 'hate_f1': confusion_matrix["0"]["f1-score"], 'hate_recall': confusion_matrix["0"]["recall"], 'hate_precision': confusion_matrix["0"]["precision"], 'offensive_f1': confusion_matrix["1"]["f1-score"], 'offensive_recall': confusion_matrix["1"]["recall"], 'offensive_precision': confusion_matrix["1"]["precision"], 'normal_f1': confusion_matrix["2"]["f1-score"], 'normal_recall': confusion_matrix["2"]["recall"], 'normal_precision': confusion_matrix["2"]["precision"], } def model_init(): temp_model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME,num_labels=3).to(device) return temp_model # Code modified from Stabilizer library to handle DistilBERT architecture #https://github.com/flowerpot-ai/stabilizer def get_optimizer_parameters_with_llrd(model, peak_lr, multiplicative_factor): num_encoder_layers = len(model.distilbert.transformer.layer) # Task specific layer gets the peak_lr tsl_parameters = [ { "params": [param for name, param in model.named_parameters() if 'distilbert' not in name], "param_names": [name for name, param in model.named_parameters() if 'distilbert' not in name], "lr": peak_lr, "name": "tsl", } ] # Starting from the last encoder layer each encoder 
layers get a lr defined by # current_layer_lr = prev_layer_lr * multiplicative_factor # the last encoder layer lr = peak_lr * multiplicative_factor encoder_parameters = [ { "params": [param for name, param in model.named_parameters() if f"distilbert.transformer.layer.{layer_num}" in name], "param_names": [name for name, param in model.named_parameters() if f"distilbert.transformer.layer.{layer_num}" in name], "lr": peak_lr * (multiplicative_factor ** (num_encoder_layers - layer_num)), "name": f"layer_{layer_num}", } for layer_num, layer in enumerate(model.distilbert.transformer.layer) ] # Embedding layer gets embedding layer lr = first encoder layer lr * multiplicative_factor embedding_parameters = [ { "params": [param for name, param in model.named_parameters() if 'embeddings' in name], "param_names": [name for name, param in model.named_parameters() if 'embeddings' in name], "lr": peak_lr * (multiplicative_factor ** (num_encoder_layers + 1)), "name": "embedding", } ] return tsl_parameters + encoder_parameters + embedding_parameters def reinit_autoencoder_model(model, reinit_num_layers=0): """reinitialize autoencoder model layers""" if reinit_num_layers: for layer in model.distilbert.transformer.layer[-reinit_num_layers:]: for module in layer.modules(): if isinstance(module, nn.Embedding): if module.weight.requires_grad: module.weight.data.normal_(mean=0.0, std=model.config.initializer_range) if isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=model.config.initializer_range) elif isinstance(module, nn.LayerNorm): module.bias.data.zero_() module.weight.data.fill_(1.0) if isinstance(module, nn.Linear) and module.bias is not None: module.bias.data.zero_() return model def seq_model_init(): temp_model = AutoModelForSequenceClassification.from_pretrained(QA_OUTPUT_PATH,num_labels=3).to(device) return temp_model def timestamp(): dateTimeObj = datetime.now() timestampStr = dateTimeObj.strftime("%d-%b-%Y (%H:%M:%S.%f)") print(timestampStr) set_seed(RANDOM_SEED) hatetwit_dataset_dfs = load_from_disk('/content/drive/MyDrive/Dissertation/datasets/hatetwit_'+str(1)) train_dataset = hatetwit_dataset_dfs ["train"].remove_columns(["input_ids_bert","attention_mask_bert","token_type_ids_bert"]) eval_dataset = hatetwit_dataset_dfs ["validation"].remove_columns(["input_ids_bert","attention_mask_bert","token_type_ids_bert"]) test_dataset = hatetwit_dataset_dfs ["test"].remove_columns(["input_ids_bert","attention_mask_bert","token_type_ids_bert"]) training_args = TrainingArguments( output_dir='/content/drive/MyDrive/Dissertation/disbert_optimal/results', # output directory num_train_epochs=EPOCHS, # total number of training epochs save_strategy ="epoch" , per_device_train_batch_size=BATCH_SIZE, # batch size per device during training per_device_eval_batch_size=BATCH_SIZE, # batch size for evaluation warmup_steps = WARMUP_STEPS, weight_decay= WEIGHT_DECAY, # strength of weight decay learning_rate= LEARNING_RATE, logging_dir='./disbert_optimal/logs', # directory for storing logs load_best_model_at_end=True, # load the best model when finished training (default metric is loss) evaluation_strategy="epoch", ) results = [] #Model to test combination of Intermediate Task Transfer, Weight Reinitialization and LLRD model = seq_model_init() model = reinit_autoencoder_model(model,2) parameters = get_optimizer_parameters_with_llrd(model, LEARNING_RATE, LEARNING_RATE_DECAY_MULTIPLIER) trainer_one = Trainer( model =model, # the instantiated Transformers model to be trained args=training_args, # 
training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=eval_dataset, # evaluation dataset compute_metrics=compute_metrics, # the callback that computes metrics of interest ) trainer_one.create_optimizer() trainer_one.optimizer = AdamW(parameters, lr=LEARNING_RATE,weight_decay=WEIGHT_DECAY) trainer_one.train() timestamp() eval_results = trainer_one.evaluate(test_dataset) results.append(eval_results) eval_results timestamp() #Model to test combination of Intermediate Task Transfer and LLRD model = seq_model_init() parameters = get_optimizer_parameters_with_llrd(model, LEARNING_RATE, LEARNING_RATE_DECAY_MULTIPLIER) trainer_two = Trainer( model =model, # the instantiated Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=eval_dataset, # evaluation dataset compute_metrics=compute_metrics, # the callback that computes metrics of interest ) trainer_two.create_optimizer() trainer_two.optimizer = AdamW(parameters, lr=LEARNING_RATE,weight_decay=WEIGHT_DECAY) trainer_two.train() eval_results = trainer_two.evaluate(test_dataset) results.append(eval_results) eval_results #Model to test combination of Intermediate Task Transfer and WR model = seq_model_init() model = reinit_autoencoder_model(model,2) trainer_three = Trainer( model =model, # the instantiated Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=eval_dataset, # evaluation dataset compute_metrics=compute_metrics, # the callback that computes metrics of interest ) trainer_three.train() eval_results = trainer_three.evaluate(test_dataset) results.append(eval_results) eval_results #Model to test combination of Weight Reinitialization and LLRD model = model_init() model = reinit_autoencoder_model(model,2) parameters = get_optimizer_parameters_with_llrd(model, LEARNING_RATE, LEARNING_RATE_DECAY_MULTIPLIER) trainer_four = Trainer( model =model, # the instantiated Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=eval_dataset, # evaluation dataset compute_metrics=compute_metrics, # the callback that computes metrics of interest ) trainer_four.create_optimizer() trainer_four.optimizer = AdamW(parameters, lr=LEARNING_RATE,weight_decay=WEIGHT_DECAY) trainer_four.train() eval_results = trainer_four.evaluate(test_dataset) results.append(eval_results) eval_results results_df = pd.DataFrame(results) results_df ```
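The `get_optimizer_parameters_with_llrd` helper above gives the task-specific head the peak learning rate and decays each deeper encoder layer geometrically, `peak_lr * multiplicative_factor ** (num_encoder_layers - layer_num)`, with the embeddings one factor lower still. A small sketch of the schedule this produces for the constants used here (DistilBERT's 6 transformer layers, peak LR 6.58e-5, factor 0.95); the printout is purely illustrative:

```
peak_lr = 6.58e-5
factor = 0.95
num_encoder_layers = 6  # DistilBERT has 6 transformer layers

print("task-specific head:", peak_lr)
# Deeper (earlier) layers receive progressively smaller learning rates.
for layer_num in range(num_encoder_layers - 1, -1, -1):
    print("layer", layer_num, ":", peak_lr * factor ** (num_encoder_layers - layer_num))
print("embeddings:", peak_lr * factor ** (num_encoder_layers + 1))
```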
# Specifying boundary velocities in addition to a custom density file

This notebook goes through several detailed examples of how to properly run TARDIS with a custom ejecta profile specified by a custom density file and a custom abundance file.

```
import tardis
import matplotlib.pyplot as plt
import numpy as np
```

__Your custom density file__

First, let's look at an example of a custom density file.

    80 day

    0 9500 9e-16
    1 10500 6e-16
    2 12000 2e-17

* The first line specifies the time in days after the explosion.
* After a skipped line, each row corresponds to a shell with the index given in the first column.
* The second column lists the velocity of the outer boundary of the cell in km/s.
* The third column lists the density of the cell.

__IMPORTANT__ The __default behavior__ of TARDIS is to use the first shell as the inner boundary. This means that v_inner_boundary = 9500, and the corresponding density 9e-16 is ignored because it is within the inner boundary (it can be replaced by an arbitrary number). The outer boundary of the last shell is used as v_outer_boundary, so the default behavior sets v_outer_boundary = 12000.

__Your custom abundance file__

Now let's look at an example of a custom abundance file.

    H He

    0.0 1.0
    0.4 0.6

* The first line indicates which elements (or isotopes) correspond to which columns.
* After a skipped line, each row specifies the chemical abundances of one shell, so the numbers in a given row should sum to 1.0.

__IMPORTANT__ Note that only 2 shells are specified in this abundance file (even though the custom density file has 3 data rows). This is because the custom density file specifies the boundaries of the shells, while the abundance file specifies the abundances within each shell.

__Running TARDIS with the custom files__

Now let's run TARDIS using the example custom files.

```
model = tardis.run_tardis('./test_config.yml')
```

You can check that the model loaded and used by TARDIS during the simulation is consistent with your expectations based on the custom files you provided:

```
print('v_inner_boundary = ',model.model.v_boundary_inner)
print('v_outer_boundary = ',model.model.v_boundary_outer)
print('\n')
print('velocities of shell boundaries: ')
print(model.model.velocity)
print('\n')
print('densities loaded by TARDIS: (NOTE that the density in the first line of the file was ignored! Densities are also rescaled.)')
print(model.model.density)
```

## Specifying boundary velocities in the config file

In addition to specifying custom density and abundance files, the user can set the v_inner_boundary and v_outer_boundary velocities in the YAML config file. This can cause some confusion, so we carefully go through some examples.

__IMPORTANT__ Boundary velocities set in the YAML config file must be __within__ the velocity range specified in the custom density file (if one is provided).

## Example 1) v_inner_boundary lower than first velocity in density file

In this example, the first velocity in the density file is 9500 km/s. If the user sets the velocity of the inner boundary in the config file to a lower value, say v_inner_boundary = 9000 km/s, TARDIS raises an error.

```
model = tardis.run_tardis('./test_config_ex1.yml')
```

## Example 2) v_outer_boundary larger than last velocity in density file

In this example, the last velocity in the density file is 12000 km/s. If the user sets the velocity of the outer boundary in the config file to a larger value, say v_outer_boundary = 13000 km/s, TARDIS raises an error.

```
model = tardis.run_tardis('./test_config_ex2.yml')
```

## Example 3) v_boundaries in config file are within density file velocity range

Here the user sets v_inner_boundary = 9700 and v_outer_boundary = 11500 in the config file. Both values fall within the velocity range specified by the custom density file.

```
model = tardis.run_tardis('./test_config_ex3.yml')

print('v_inner_boundary = ',model.model.v_boundary_inner)
print('v_outer_boundary = ',model.model.v_boundary_outer)
print('\n')
print('velocities of shell boundaries: ')
print(model.model.velocity)
print('\n')
print('densities loaded by TARDIS: (NOTE that the density in the first line of the file was ignored! Densities are also rescaled.)')
print(model.model.density)
```

__IMPORTANT__ Notice that the inner and outer boundary velocities are the ones specifically set by the user.
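As a quick sanity check before launching a simulation, the rule illustrated by Examples 1-3 (config boundary velocities must fall inside the velocity range of the custom density file) can be verified directly from the file itself. A minimal sketch assuming the three-column format shown at the top of this notebook; `my_density.txt` is a hypothetical filename:

```
import numpy as np

def check_boundaries(density_file, v_inner_boundary, v_outer_boundary):
    # Skip the time line and the blank line; columns are index, velocity (km/s), density.
    data = np.loadtxt(density_file, skiprows=2)
    velocities = data[:, 1]
    v_min, v_max = velocities[0], velocities[-1]
    if v_inner_boundary < v_min:
        raise ValueError("v_inner_boundary {} is below the first velocity {}".format(v_inner_boundary, v_min))
    if v_outer_boundary > v_max:
        raise ValueError("v_outer_boundary {} is above the last velocity {}".format(v_outer_boundary, v_max))
    return v_min, v_max

# With the example file above, the Example 3 values pass while the Example 1/2 values raise:
# check_boundaries('my_density.txt', 9700, 11500)
```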
# Loading Libraries ``` library(tidyverse) data <- read.csv("heart.csv") head(data) tail(data) glimpse(data) ncol(data) nrow(data) colnames(data) summary(data) ``` # Data Transformation ``` data2 <- data %>% mutate(sex = if_else(sex == 1, "MALE" , "FEMALE"), fbs = if_else(fbs == 1 , ">120" , "<=120"), exang = if_else(exang == 1, "YES" , "NO"), cp =if_else(cp == 1, "ATYPICAL ANGINA", if_else(cp == 2, "NON-ANGINAL PAIN", "ASYMPTOMATIC")), restecg = if_else(restecg == 0, "NORMAL", if_else(restecg == 1 ,"ABNORMALITY" , "PROBABLE OR DEFINITE ")), slope = as.factor(slope), ca = as.factor(ca), thal = as.factor(thal), target = if_else(target == 1, "YES" ,"NO") ) %>% mutate_if(is.character, as.factor) %>% dplyr::select(target, sex, fbs, exang, cp, restecg, slope, ca, thal, everything()) ``` # Data Visualization ``` ggplot(data2, aes(x=data2$target, fill=data2$target))+ geom_bar() + xlab("Heart Disease") + ylab("count") + ggtitle("Presence & ABsence of Heart Disease") + scale_fill_discrete(name= 'Heart Disease', labels =c("Absense", "Presesnce")) prop.table(table(data2$target)) # count the freuqency of the values of age data2 %>% group_by(ï..age) %>% count() %>% filter(n>10) %>% ggplot()+ geom_col(aes(ï..age , n),fill = 'green')+ ggtitle("Age Analysis")+ xlab("Age")+ ylab("Agecount") # compare blood pressure across the chest pain data2 %>% ggplot(aes(x=sex, y=trestbps))+ geom_boxplot(fill='blue')+ xlab('sex')+ ylab('BP')+ facet_grid(~cp) data %>% ggplot(aes(x=sex, y=trestbps))+ geom_boxplot(fill='#F6BE00')+ xlab('sex')+ ylab('BP')+ facet_grid(~cp) data2 %>% ggplot(aes(x=sex, y=chol))+ geom_boxplot(fill='purple')+ xlab('sex')+ ylab('chol')+ facet_grid(~cp) data2 %>% ggplot(aes(x=sex, y=thal))+ geom_boxplot(fill='orange')+ xlab('sex')+ ylab('thal')+ facet_grid(~cp) data2 %>% ggplot(aes(x=sex, y=trestbps))+ geom_boxplot(fill='#FF0000')+ xlab('sex')+ ylab('trestbps')+ facet_grid(~cp) data2 %>% ggplot(aes(x=sex, y=restecg))+ geom_boxplot(fill='#FF0000')+ xlab('sex')+ ylab('restecg')+ facet_grid(~cp) data2 %>% ggplot(aes(x=sex, y=oldpeak))+ geom_boxplot(fill='#008000')+ xlab('sex')+ ylab('oldpeak')+ facet_grid(~cp) data2 %>% ggplot(aes(x=sex, y=cp))+ geom_boxplot(fill='#FF0000')+ xlab('sex')+ ylab('cp')+ facet_grid(~cp) data2 %>% ggplot(aes(x=sex, y=slope))+ geom_boxplot(fill='#FF0000')+ xlab('sex')+ ylab('slope')+ facet_grid(~cp) data2 %>% ggplot(aes(x=sex, y=target))+ geom_boxplot(fill='#008000')+ xlab('sex')+ ylab('target')+ facet_grid(~cp) data2 %>% ggplot(aes(x=sex, y=sex))+ geom_boxplot(fill='#008000')+ xlab('sex')+ ylab('sex')+ facet_grid(~cp) data %>% ggplot(aes(x=sex, y=sex))+ geom_boxplot(fill='#008000')+ xlab('sex')+ ylab('sex')+ facet_grid(~cp) data2 %>% ggplot(aes(x=sex, y=trestbps))+ geom_boxplot(fill='#033E3E')+ xlab('sex')+ ylab('trestbps')+ facet_grid(~cp) ``` # Correlation ``` install.packages("corrplot") install.packages("ggplot2") library(corrplot) library(ggplot2) cor_heart <- cor(data2[, 10:14]) cor_heart corrplot(cor_heart, method='square', type='upper') cor_heart <- cor(data2[, 10:14]) cor_heart corrplot(cor_heart, method='square', type='lower') cor_heart <- cor(data2[, 10:14]) cor_heart corrplot(cor_heart, method='ellipse', type='upper') cor_heart <- cor(data2[, 10:14]) cor_heart corrplot(cor_heart, method='pie', type='upper') cor_heart <- cor(data2[, 10:14]) cor_heart corrplot(cor_heart, method='number', type='lower') cor_heart <- cor(data2[, 10:14]) cor_heart corrplot(cor_heart, method='number') ??method ```
<a href="https://colab.research.google.com/github/gabilodeau/INF8770/blob/master/DetectiondePeau1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> INF8770 Technologies multimédias Polytechnique Montréal Exemple de détection de la peau par la méthode Gaussienne sur le rouge et le vert. Exécuter les 2 lignes suivantes pour colab. ``` !git clone https://github.com/gabilodeau/INF8770 %cd INF8770 import numpy as np import matplotlib.pyplot as plt from scipy.stats import multivariate_normal ``` Lecture d'une image d'échantillons de peau. ``` plt.figure(figsize = (10,10)) #Vous pouvez aussi essayer avec EchPeau2.jpg. Les peaux foncées sont mieux détectées. image = plt.imread('EchPeau.jpg') plt.imshow(image) plt.show() ``` Calcul et affichage de l'histogramme R,G ``` Taille = len(image)*len(image[0]) Vec1 = np.zeros(Taille) Vec2 = np.zeros(Taille) k=0 for i in range(len(image)): for j in range(len(image[0])): Vec1[k]=image[i][j][0] Vec2[k]=image[i][j][1] k+=1 plt.hist2d(Vec1, Vec2) plt.show() ``` On calcule les paramètres de la distribution Gaussienne sur R et G. ``` sommeR = 0.0 sommeG = 0.0 #Mu, la moyenne des R et G for i in range(len(image)): for j in range(len(image[0])): sommeR=sommeR+image[i][j][0] sommeG=sommeG+image[i][j][1] nbPixels = len(image)*len(image[0]) MoyR= sommeR / nbPixels MoyG= sommeG / nbPixels mu = np.array([MoyR, MoyG]) print('mu:',mu) #La covariance des R et G covRG = np.zeros((2,2), dtype = "double") for i in range(len(image)): for j in range(len(image[0])): vecTemp=[[image[i][j][0] - MoyR], [image[i][j][1]] - MoyG] vecProdTemp = np.dot(vecTemp,np.transpose(vecTemp)) covRG = np.add(covRG,vecProdTemp) covRG = covRG / nbPixels print('covariance:',covRG) ``` Affichage de la distribution ``` x, y = np.mgrid[0:256:1, 0:256:1] pos = np.empty(x.shape + (2,)) pos[:, :, 0] = x; pos[:, :, 1] = y rv = multivariate_normal(mu, covRG) plt.contourf(x, y, rv.pdf(pos)) plt.show() ``` On teste la détection sur des couleurs de peau claires. ``` plt.figure(figsize = (10,10)) image2 = plt.imread('Trudeau.jpg') plt.imshow(image2) plt.show() # Détection res = np.zeros((len(image2),len(image2[0]))) for i in range(len(image2)): for j in range(len(image2[0])): #Calcul de la probabilité (de faire partie de la distribution) de chaque pixel res[i,j] = rv.pdf([image2[i][j][0],image2[i][j][1]]) Peau = res>0.00008 # Seuil trouvé manuellement plt.figure(figsize = (10,10)) plt.imshow(Peau, plt.get_cmap('binary')) plt.show() ``` Testons aussi avec une couleur de peau plus foncée. ``` plt.figure(figsize = (10,10)) image2 = plt.imread('Obama.jpg') plt.imshow(image2) plt.show() res = np.zeros((len(image2),len(image2[0]))) for i in range(len(image2)): for j in range(len(image2[0])): res[i,j] = rv.pdf([image2[i][j][0],image2[i][j][1]]) Peau = res>0.00001 plt.figure(figsize = (10,10)) plt.imshow(Peau, plt.get_cmap('binary')) plt.show() ```
RUT strings can be converted to the following formats via the `output_format` parameter:

* `compact`: only number strings without any separators or whitespace, like "125319092"
* `standard`: RUT strings with proper whitespace in the proper places, like "12.531.909-2"

Invalid parsing is handled with the `errors` parameter:

* `coerce` (default): invalid parsing will be set to NaN
* `ignore`: invalid parsing will return the input
* `raise`: invalid parsing will raise an exception

The following sections demonstrate the functionality of `clean_cl_rut()` and `validate_cl_rut()`.

### An example dataset containing RUT strings

```
import pandas as pd
import numpy as np
df = pd.DataFrame(
    {
        "rut": [
            "125319092",
            "76086A28-5",
            "51824753556",
            "51 824 753 556",
            "hello",
            np.nan,
            "NULL"
        ],
        "address": [
            "123 Pine Ave.",
            "main st",
            "1234 west main heights 57033",
            "apt 1 789 s maple rd manhattan",
            "robie house, 789 north main street",
            "(staples center) 1111 S Figueroa St, Los Angeles",
            "hello",
        ]
    }
)
df
```

## 1. Default `clean_cl_rut`

By default, `clean_cl_rut` will clean RUT strings and output them in the standard format with proper separators.

```
from dataprep.clean import clean_cl_rut
clean_cl_rut(df, column = "rut")
```

## 2. Output formats

This section demonstrates the `output_format` parameter.

### `standard` (default)

```
clean_cl_rut(df, column = "rut", output_format="standard")
```

### `compact`

```
clean_cl_rut(df, column = "rut", output_format="compact")
```

## 3. `inplace` parameter

This deletes the given column from the returned DataFrame. A new column containing cleaned RUT strings is added with a title in the format `"{original title}_clean"`.

```
clean_cl_rut(df, column="rut", inplace=True)
```

## 4. `errors` parameter

### `coerce` (default)

```
clean_cl_rut(df, "rut", errors="coerce")
```

### `ignore`

```
clean_cl_rut(df, "rut", errors="ignore")
```

## 5. `validate_cl_rut()`

`validate_cl_rut()` returns `True` when the input is a valid RUT, and `False` otherwise.

The input of `validate_cl_rut()` can be a string, a pandas Series, a Dask Series, a pandas DataFrame, or a Dask DataFrame. For a string or a Series, there is no column name to specify. For a DataFrame, the user may optionally specify a column name: if a column is given, `validate_cl_rut()` returns the validation result only for that column; otherwise it returns the validation result for the whole DataFrame.

```
from dataprep.clean import validate_cl_rut
print(validate_cl_rut("125319092"))
print(validate_cl_rut("76086A28-5"))
print(validate_cl_rut("51824753556"))
print(validate_cl_rut("51 824 753 556"))
print(validate_cl_rut("hello"))
print(validate_cl_rut(np.nan))
print(validate_cl_rut("NULL"))
```

### Series

```
validate_cl_rut(df["rut"])
```

### DataFrame + Specify Column

```
validate_cl_rut(df, column="rut")
```

### Only DataFrame

```
validate_cl_rut(df)
```
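For reference, the `standard` format shown above simply groups the digits of the number in threes with dots and appends the check digit after a dash. A small helper illustrating that formatting on an already-validated compact string (this is an illustration, not the dataprep implementation):

```
def format_rut_standard(compact):
    # Format a compact RUT such as '125319092' as '12.531.909-2'.
    body, check_digit = compact[:-1], compact[-1]
    groups = []
    while body:                      # group digits in threes from the right
        groups.append(body[-3:])
        body = body[:-3]
    return ".".join(reversed(groups)) + "-" + check_digit

print(format_rut_standard("125319092"))   # 12.531.909-2
```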
**Prerequisite:** OpenAI Baselines has to be installed for this exercise to work. See section 4.4.

Import all the required modules from OpenAI Baselines and TensorFlow to use the PPO algorithm

```
from baselines.ppo2.ppo2 import learn
from baselines.ppo2 import defaults
from baselines.common.vec_env import VecEnv, VecFrameStack
from baselines.common.cmd_util import make_vec_env, make_env
from baselines.common.models import register
import tensorflow as tf
import numpy as np
```

Define and register a custom multi-layer perceptron for the policy network

```
@register("custom_mlp")
def custom_mlp(num_layers=2, num_hidden=64, activation=tf.tanh):
    """
    Stack of fully-connected layers to be used in a policy / q-function approximator

    Parameters:
    ----------
    num_layers: int   number of fully-connected layers (default: 2)
    num_hidden: int   size of fully-connected layers (default: 64)
    activation:       activation function (default: tf.tanh)

    Returns:
    -------
    function that builds a fully connected network with a given input tensor / placeholder
    """
    def network_fn(input_shape):
        print('input shape is {}'.format(input_shape))
        x_input = tf.keras.Input(shape=input_shape)
        h = x_input
        for i in range(num_layers):
            h = tf.keras.layers.Dense(units=num_hidden,
                                      name='custom_mlp_fc{}'.format(i),
                                      activation=activation)(h)
        network = tf.keras.Model(inputs=[x_input], outputs=[h])
        network.summary()
        return network
    return network_fn
```

Create a function to build the environment in the format required by OpenAI Baselines

```
def build_env(env_id, env_type):
    if env_type in {'atari', 'retro'}:
        env = make_vec_env(env_id, env_type, 1, None, gamestate=None, reward_scale=1.0)
        env = VecFrameStack(env, 4)
    else:
        env = make_vec_env(env_id, env_type, 1, None, reward_scale=1.0,
                           flatten_dict_observations=True)
    return env
```

Build the environment, choose the policy network parameters, and train the model

```
env_id = 'CartPole-v0'
env_type = 'classic_control'
print("Env type = ", env_type)
env = build_env(env_id, env_type)

hidden_nodes = 64
hidden_layers = 2

model = learn(network="custom_mlp", env=env, total_timesteps=1e4,
              num_hidden=hidden_nodes, num_layers=hidden_layers)
```

Run the trained agent in the environment and print the cumulative reward

```
obs = env.reset()
if not isinstance(env, VecEnv):
    obs = np.expand_dims(np.array(obs), axis=0)

episode_rew = 0
while True:
    actions, _, state, _ = model.step(obs)
    obs, reward, done, info = env.step(actions.numpy())
    if not isinstance(env, VecEnv):
        obs = np.expand_dims(np.array(obs), axis=0)
    env.render()
    print("Reward = ", reward)
    episode_rew += reward
    if done:
        print('Episode Reward = {}'.format(episode_rew))
        break

env.close()
```

Use the built-in OpenAI Baselines `run` script to train PPO on the `CartPole-v0` environment

```
!python -m baselines.run --alg=ppo2 --env=CartPole-v0 --num_timesteps=1e4 --save_path=./models/CartPole_2M_ppo2 --log_path=./logs/CartPole/
```

Use the built-in OpenAI Baselines `run` script to run the trained model on the `CartPole-v0` environment

```
!python -m baselines.run --alg=ppo2 --env=CartPole-v0 --num_timesteps=0 --load_path=./models/CartPole_2M_ppo2 --play
```

Use the pretrained weights provided to see the trained agent in action

```
!wget -O cartpole_1M_ppo2.tar.gz https://github.com/PacktWorkshops/The-Reinforcement-Learning-Workshop/blob/master/Chapter04/cartpole_1M_ppo2.tar.gz?raw=true
!tar xvzf cartpole_1M_ppo2.tar.gz
!python -m baselines.run --alg=ppo2 --env=CartPole-v0 --num_timesteps=0 --load_path=./cartpole_1M_ppo2 --play
```
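The evaluation cell earlier in this notebook runs a single episode; averaging the cumulative reward over several episodes gives a less noisy estimate of the trained policy. A sketch that reuses only the calls already shown above (`env.reset`, `model.step`, `env.step`); the episode count is arbitrary:

```
def evaluate_policy(model, env, n_episodes=10):
    # Average cumulative reward of the trained policy over several episodes.
    episode_rewards = []
    for _ in range(n_episodes):
        obs = env.reset()
        done, episode_rew = False, 0.0
        while not done:
            actions, _, _, _ = model.step(obs)
            obs, reward, done, _ = env.step(actions.numpy())
            episode_rew += reward
        episode_rewards.append(float(episode_rew))
    return sum(episode_rewards) / len(episode_rewards)

# print("Mean reward over 10 episodes:", evaluate_policy(model, env))
```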
# Parameter sensitivity in the SKA Performance Model

This notebook is based on original work by Tim Cornwell and provides the plots for the TCC memo TCC-SDP-151123-1-1.

```
import sys
import numpy as np  # used below when passing sweep results to the plotting helper

from ipywidgets import interact, interact_manual, fixed

sys.path += ['..']
from sdp_par_model import evaluate
from sdp_par_model import reports as iapi
from sdp_par_model.config import PipelineConfig
from sdp_par_model.parameters.definitions import *

# Do plotting inline since sending to a specific file in jupyter/matplotlib is fiddly
%matplotlib inline

# The lines below define the available telescopes, bands and modes that may be chosen.
available_teles = (Telescopes.SKA1_Low, Telescopes.SKA1_Mid)
available_bands = (Bands.Low, Bands.Mid1, Bands.Mid2, Bands.Mid5a, Bands.Mid5b,)
available_modes = (Pipelines.DPrepA, Pipelines.DPrepC, Pipelines.FastImg)

verbose = False

# Scale the predict step according to the facet size, not the FOV; see TCC-SDP-151123-1-1.
scale_predict_by_facet = True
```

## Evaluate scaling as a function of Q_FOV

```
from IPython.display import display, HTML

parameter = 'Qfov'
param_val_min = 1.0
param_val_max = 3.0
number_steps = 20

for telescope in [Telescopes.SKA1_Low, Telescopes.SKA1_Mid]:
    if telescope == Telescopes.SKA1_Low:
        band = Bands.Low
    else:
        band = Bands.Mid1
    for immode in [Pipelines.DPrepA]:
        for expression in ['Nfacet', 'Rflop', 'Rio', 'Rfacet_vis', 'Rimage', 'Mout']:
            print(telescope, immode, expression)
            display(HTML('<font color="blue">Computing the result -- this may take several (tens of) seconds.</font>'))
            cfg = PipelineConfig(telescope=telescope, pipeline=immode, band=band,
                                 scale_predict_by_facet=scale_predict_by_facet)
            (param_values, results) = cfg.eval_param_sweep_1d(expression, parameter_string=parameter,
                                                              param_val_min=param_val_min,
                                                              param_val_max=param_val_max,
                                                              number_steps=number_steps, verbose=verbose)
            header = 'Plotting %s for %s in %s mode as a function of %s' % (expression, telescope, immode, parameter)
            iapi.plot_line_datapoints(header, param_values, np.array(results),
                                      xlabel=parameter, ylabel=expression)
```

## Evaluate scaling as a function of Bmax

```
from IPython.display import display, HTML

parameter = 'Bmax'
number_steps = 20

for telescope in [Telescopes.SKA1_Mid, Telescopes.SKA1_Low]:
    if telescope == Telescopes.SKA1_Low:
        param_val_min = 10000
        param_val_max = 80000
        band = Bands.Low
    else:
        param_val_min = 15000
        param_val_max = 150000
        band = Bands.Mid1
    for immode in [Pipelines.DPrepA]:
        for expression in ['Rflop', 'Rio', 'Rinterfacet', 'Mw_cache']:
            print(telescope, immode, expression)
            display(HTML('<font color="blue">Computing the result -- this may take several (tens of) seconds.</font>'))
            cfg = PipelineConfig(telescope=telescope, pipeline=immode, band=band,
                                 scale_predict_by_facet=scale_predict_by_facet)
            (param_values, results) = cfg.eval_param_sweep_1d(expression, parameter_string=parameter,
                                                              param_val_min=param_val_min,
                                                              param_val_max=param_val_max,
                                                              number_steps=number_steps, verbose=verbose)
            header = 'Plotting %s for %s in %s mode as a function of %s' % (expression, telescope, immode, parameter)
            iapi.plot_line_datapoints(header, param_values, np.array(results),
                                      xlabel=parameter, ylabel=expression)
```

## Evaluate scaling as a function of Tion

```
from IPython.display import display, HTML

parameter = 'Tion'
number_steps = 9
param_val_min = 1
param_val_max = 10
band = Bands.Low

for telescope in [Telescopes.SKA1_Low]:
    for immode in [Pipelines.DPrepA]:
        for expression in ['Rflop', 'Rio', 'Rinterfacet', 'Mw_cache']:
            print(telescope, immode, expression)
            display(HTML('<font color="blue">Computing the result -- this may take several (tens of) seconds.</font>'))
            cfg = PipelineConfig(telescope=telescope, pipeline=immode, band=band,
                                 scale_predict_by_facet=scale_predict_by_facet)
            (param_values, results) = cfg.eval_param_sweep_1d(expression, parameter_string=parameter,
                                                              param_val_min=param_val_min,
                                                              param_val_max=param_val_max,
                                                              number_steps=number_steps, verbose=verbose)
            header = 'Plotting %s for %s in %s mode as a function of %s' % (expression, telescope, immode, parameter)
            iapi.plot_line_datapoints(header, param_values, np.array(results),
                                      xlabel=parameter, ylabel=expression)
```
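The sweeps above depend on the `sdp_par_model` package being importable. As a minimal, self-contained sketch of the same pattern — linearly spaced parameter values, an expression evaluated at each value, and a line plot of the results — the `toy_cost` function below stands in for `eval_param_sweep_1d`; the function and its constants are illustrative assumptions, not part of the real performance model.

```
import numpy as np
import matplotlib.pyplot as plt

def toy_cost(qfov):
    # Stand-in for an expression such as Rflop: an arbitrary polynomial in Qfov.
    return 1.0e15 * qfov**2 + 5.0e14 * qfov

# Linearly spaced parameter values between the chosen minimum and maximum.
param_val_min, param_val_max, number_steps = 1.0, 3.0, 20
param_values = np.linspace(param_val_min, param_val_max, number_steps + 1)

results = [toy_cost(q) for q in param_values]

plt.plot(param_values, results, marker='o')
plt.xlabel('Qfov')
plt.ylabel('toy cost (arbitrary units)')
plt.title('Sketch of a 1-D parameter sweep')
plt.show()
```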
```
# Required Python Machine learning Packages
import pandas as pd
import numpy as np

# For preprocessing the data
from sklearn.preprocessing import Imputer
from sklearn import preprocessing

# To split the dataset into train and test datasets
from sklearn.cross_validation import train_test_split

# To model the Gaussian Naive Bayes classifier
from sklearn.naive_bayes import GaussianNB

# To calculate the accuracy score of the model
from sklearn.metrics import accuracy_score

%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (12, 8)
import seaborn as sns

df = pd.read_csv("Replaced.csv", encoding="ISO-8859-1")
df.head()
```

#### Drop the columns which are not required and not useful for predictions

```
drop_cols = ['Unnamed: 0', 'brand', 'categories', 'dateAdded', 'dateUpdated', 'keys',
             'manufacturer', 'name', 'reviewsdate', 'dateSeen', 'sourceURLs', 'text',
             'title', 'userCity', 'upc', 'userProvince']
df = df.drop(drop_cols, axis=1)
df.head()
```

#### Check whether there are any null values in the dataset

```
df.isnull().sum()
```

#### The above output shows that there are null values in our dataset.

#### Fill the NaNs with suitable values

```
df['didPurchase'].fillna(True, inplace=True)
df['doRecommend'].fillna(True, inplace=True)
```

#### Convert boolean values to binary values, i.e. True to 1 and False to 0

```
df.didPurchase = (df.didPurchase) * 1
df.doRecommend = (df.doRecommend) * 1
df.fillna(0, inplace=True)
df.head()
```

#### Convert string values to integer values by hashing the column values

```
def get_hash(x):
    return abs(hash(x)) % 10**9

df['username'] = df['username'].apply(get_hash)
df['id'] = df['id'].apply(get_hash)
df.head()
df.dtypes
all_cols = df.select_dtypes(exclude=['object'])
all_cols
df.isnull().sum()
```

#### The above output shows that there are no null values left in the dataset.
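Before moving on, the boolean-to-binary and hashing steps above are easy to sanity-check on a tiny throwaway frame. Nothing below depends on `Replaced.csv`; the column names and values are made up for illustration.

```
import pandas as pd

toy = pd.DataFrame({
    'doRecommend': [True, False, True],
    'username': ['alice', 'bob', 'alice'],
})

# Multiplying a boolean Series by 1 yields 0/1 integers, as done above.
toy['doRecommend'] = toy['doRecommend'] * 1

# The same hashing trick maps each string to a non-negative integer bucket.
# Note: Python salts str hashes per process, so these integers are not stable
# across runs unless PYTHONHASHSEED is fixed.
def get_hash(x):
    return abs(hash(x)) % 10**9

toy['username'] = toy['username'].apply(get_hash)
print(toy)
```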
```
df.describe()
```

#### Scale the column values

```
def scaled_df(df):
    scaled = pd.DataFrame()
    for item in df:
        if item in df.select_dtypes(include=[np.float]):
            scaled[item] = ((df[item] - df[item].min()) / (df[item].max() - df[item].min()))
        else:
            scaled[item] = df[item]
    return scaled

df_scaled = scaled_df(df)

f, ax = plt.subplots(figsize=(11, 15))
ax.set_axis_bgcolor('#FFFFFF')  # on newer matplotlib this method was removed; ax.set_facecolor is the replacement
plt.title("Box Plot Product Data Unscaled")
ax.set(xlim=(-.05, 1.05))
ax = sns.boxplot(data=df[:22], orient='h', palette='Set3')

f, ax = plt.subplots(figsize=(11, 15))
ax.set_axis_bgcolor('#FFFFFF')  # on newer matplotlib this method was removed; ax.set_facecolor is the replacement
plt.title("Box Plot Product Data Scaled")
ax.set(xlim=(-.05, 1.05))
ax = sns.boxplot(data=df_scaled[:22], orient='h', palette='Set3')

df.head()
```

#### Set the predictor columns used to make predictions

```
predictor_names = ['id', 'didPurchase', 'username', 'rating']
predictor_names
```

#### Find a rank for each of the predictor columns

```
def rank_predictors(dat, l, f='doRecommend'):
    rank = {}
    max_vals = dat.max()
    median_vals = dat.groupby(f).median()  # We are using the median as the mean is sensitive to outliers
    for p in l:
        score = np.abs((median_vals[p][1] - median_vals[p][0]) / max_vals[p])
        rank[p] = score
    return rank

cat_rank = rank_predictors(df, predictor_names)
cat_rank
```

#### Sort the predictors by rank

```
cat_rank = sorted(cat_rank.items(), key=lambda x: x[1])
cat_rank
```

#### Take the top predictors based on median difference

```
ranked_predictors = []
for f in cat_rank[1:]:
    ranked_predictors.append(f[0])
ranked_predictors
```

#### Predict whether the product will be recommended, using the predictor columns

```
X = df_scaled[predictor_names]

# setting target
y = df_scaled['doRecommend']
```

#### Split the data into training and test sets

```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4, random_state=0)
```

#### Fit a Gaussian Naive Bayes classifier and predict on the test set

```
clf = GaussianNB()
clf.fit(X_train, y_train)
target_pred = clf.predict(X_test)
```

#### Find the accuracy score of the Gaussian Naive Bayes classifier

```
accuracy_score(y_test, target_pred, normalize = True)
```

### Using Gaussian Naive Bayes, the model predicts whether a user is likely to recommend a product with an accuracy score of 94.69%.
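The imports in this notebook target an older scikit-learn: `sklearn.cross_validation` and `sklearn.preprocessing.Imputer` were removed in later releases (`sklearn.model_selection` and `sklearn.impute.SimpleImputer` replaced them). A hedged sketch of the same split-fit-score pipeline against the current API, using synthetic data instead of `Replaced.csv`, could look like this:

```
import numpy as np
from sklearn.model_selection import train_test_split   # replaces sklearn.cross_validation
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the scaled predictor matrix and the doRecommend target.
rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = (X[:, 0] + 0.2 * rng.standard_normal(500) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)

clf = GaussianNB()
clf.fit(X_train, y_train)
target_pred = clf.predict(X_test)

print(accuracy_score(y_test, target_pred, normalize=True))
```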
<a href="https://colab.research.google.com/github/kvinne-anc/Data-Science-Notebooks/blob/main/Explanatory_Visualizations_Continuous_Categorical.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Lambda School Data Science *Unit 1, Sprint 1, Module 4* --- # Make Explanatory Visualizations **Objectives** - What are continuous and categorical variables? Using pandas built-in plotting functionality. - Learn about various types of `pandas` plots using `matplotlib` - use `matplotlib` to visualize distributions and relationships with continuous and categorical variables - imitate a real-world example **What are categorical, discrete, and continuous variables?** * Categorical variables contain a finite number of categories or distinct groups. Categorical data might not have a logical order. For example, categorical predictors include gender, material type, and payment method. * Discrete variables are numeric variables that have a countable number of values between any two values. A discrete variable is always numeric. For example, the number of customer complaints or the number of flaws or defects. * Continuous variables are numeric variables that have an infinite number of values between any two values. A continuous variable can be numeric or date/time. For example, the length of a part or the date and time a payment is received. [Source](https://support.minitab.com/en-us/minitab-express/1/help-and-how-to/modeling-statistics/regression/supporting-topics/basics/what-are-categorical-discrete-and-continuous-variables/) ``` ## Imports. import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt # use the 'magic' symbol (%) to specify some non-python code (i.e., affects the underlying jupyter kernel). %matplotlib inline ''' That line is only for jupyter notebooks, and allows plt figures to show up in your noteboook. if you are using another editor, you'll use: plt.show() at the end of all your plotting commands to have the figure pop up in another window. ''' # Specify the 'plot style' we want to use with pandas and matplotlib plt.style.available # "fast" is actually the default style, so you don't necessarily have to set it. # List of other available plot styles you can use instead of "fast". plt.style.use('fast') ``` ## Part 1. The Pandas built-in visualization tool This is useful only for simple, quick-and-dirty plots. [Read the full documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/visualization.html). For anything more complex you'll want to use a more robust visualization package such as `matplotlib`, `seaborn`, or `plotly`. "Under the hood, pandas plots graphs with the matplotlib library. This is usually pretty convenient since it allows you to just .plot your graphs, but since matplotlib is kind of a train wreck pandas inherits that confusion." [J. Sonoma](http://jonathansoma.com/lede/algorithms-2017/classes/fuzziness-matplotlib/understand-df-plot-in-pandas/) **Continuous vs. categorical variables** ``` # Read in the titanic dataset url = 'https://raw.githubusercontent.com/austinlasseter/pandas_visualization/master/data/titanic.csv' titanic = pd.read_csv(url) titanic.head() # cabin class is a categorical variable # it can be displayed using either number or string values. 
titanic['Pclass'].value_counts().sort_index() # convert to string categories titanic['cabin_class'] = titanic['Pclass'].map({1:'first', 2:'second', 3:'third'}) titanic.sample(5) # Fare is a continuous variable, so we can display it with a histogram. titanic['Fare'].plot(kind='hist'); # Remove two extreme outliers print(titanic.shape) titanic=titanic[titanic['Fare']<500] print(titanic.shape) # histogram will display better titanic['Fare'].plot(kind='hist', color='dodgerblue', bins=25); # describe your cont var titanic['Fare'].describe() # A continuous variable can be converted to a categorical var using binning. farelabels = ['low', 'medium', 'high', 'vhigh'] titanic['fare_groups'] = pd.cut(titanic['Fare'], bins=[0,8,15,32,264], labels=farelabels) titanic[['Fare', 'fare_groups']].head() # check out the results fare_results = titanic['fare_groups'].value_counts().sort_index() # sometimes it's important to specify the range of your y-axis to get more accurate results. 'lim'='limits' fare_results.plot(kind='bar', color='darkred'); ``` **Display a single continuous variable** ``` # Histogram titanic['Age'].plot(kind='hist', bins=25, figsize=(16,4), color='#17B952'); # Stacked Histogram of continuous variable, broken out by categorical variable titanic[titanic['Pclass']==1]['Age'].plot(kind='hist', bins=25, figsize=(16,4), color='magenta', alpha=0.5) titanic[titanic['Pclass']==2]['Age'].plot(kind='hist', bins=25, figsize=(16,4), color='darkblue', alpha=0.5) titanic[titanic['Pclass']==3]['Age'].plot(kind='hist', bins=25, figsize=(16,4), color='lime', alpha=0.5); # But this is actually easier to read using a Kernel Density Estimate (KDE). Note the use of 'xlim'. titanic[titanic['Pclass']==1]['Age'].plot(kind='density', figsize=(16,4), color='navy', xlim=(-5, 85)); titanic[titanic['Pclass']==2]['Age'].plot(kind='density', figsize=(16,4), color='dodgerblue', xlim=(-5, 85)); titanic[titanic['Pclass']==3]['Age'].plot(kind='density', figsize=(16,4), color='crimson', xlim=(-5, 85)); ``` **Display a single categorical variable** ``` # Bar chart. Use colors to make a particular value 'pop'! titanic['fare_groups'].value_counts().sort_index().plot(kind='bar', color=['gray', 'gray', 'red', 'gray']); # Pie chart titanic['fare_groups'].value_counts().sort_index().plot(kind='pie', figsize=(4,4)); ``` **Compare two categorical variables** ``` # Compare two categorical variables using crosstab cab_death = pd.crosstab(titanic['cabin_class'], titanic['Survived']) cab_death # Make that into a bar chart. cab_death.plot(kind='bar'); # Crosstab with column percents. cab_death1 = pd.crosstab(titanic['cabin_class'], titanic['Survived'], normalize='columns') cab_death1 # make a stacked bar chart, for easy comparison of two categorical variables. cab_death1.plot(kind='bar', stacked=True, color=['orange', 'gray']); # Crosstab with column percents. cab_death2 = pd.crosstab(titanic['cabin_class'], titanic['Survived'], normalize='index') cab_death2 # A paired barchart for comparison of two categorical variables. Note the use of 'barh'. cab_death2.plot(kind='barh', color=['purple', 'lightgreen'], figsize=(10,4)); ``` **Compare a categorical and continuous variable** ``` # Let's look at a single continuous variable titanic['Age'].describe() titanic['Age'].mean() titanic['Age'].median() titanic['Age'].std() titanic['Age'].describe() # Now explore that continuous variable broken out by a categorical (i.e., categorical) variable. 
cab_age = titanic.groupby('cabin_class')['Age'].mean() cab_age # To visualize a single continuous variable, you can just use `.plot` with `kind=box` titanic['Age'].plot(kind='box'); # But to break out a continuous variable by a categorical variable, use `.boxplot` plt.style.use('fast') titanic.boxplot(column='Age', by='cabin_class'); ``` **Compare two or three continuous variables** ``` # Two continuous variables: age by fare titanic.plot(kind='scatter', x='Fare', y='Age', figsize=(12,3)); # Three variables: age, fare, and cabin class (color can be either continuous or categorical here). plt.style.use('fast') titanic.plot(kind='scatter', x='Fare', y='Age', c='Pclass', colormap='plasma', figsize=(12,3)); ``` **Use a line plot to examine two continuous variables (often time)** ``` # line chart: age by fare titanic.groupby('Age')['Fare'].mean().plot(kind='line', figsize=(12,4)); # Line chart: age by fare and cabin class mycolors=['crimson', 'purple', 'dodgerblue'] # Rather than specifying 3 plots (as we did above, with the histogram) you can just loop through the values. for x in [1,2,3]: titanic[titanic['Pclass']==x].groupby('Age')['Fare'].mean().plot(figsize=(12,4), color=mycolors[x-1]); ``` **Compare multiple continuous variables** ``` # Correlation Matrix titanic.corr() # Display the correlation matrix as a heatmap plt.imshow(titanic.corr()); ``` **But this actually is easier and better looking if you use `seaborn` code.** ``` # Seaborn. sns.heatmap(titanic.corr()); ``` ## Part 2. Intro to `matplotlib` **Basic example** Let's walk through a very simple example using two numpy arrays. You can also use lists, but most likely you'll be passing numpy arrays or pandas columns (which essentially also behave like arrays). ** The data we want to plot:** ``` x = np.linspace(0, 5 ,11) y = x**2 x, y ``` **Basic Matplotlib Commands** We can create a very simple line plot using the following ( I encourage you to pause and use Shift+Tab along the way to check out the document strings for the functions we are using). ``` # a very simple plot, to get started. Notice that NO PANDAS is required! import matplotlib.pyplot as plt plt.plot(x,y); # You can have two 'plt' plots together (and let's add some color). plt.plot(x,y,'r') plt.plot(y,x, 'b'); # Now add some labels, plus a little texture. plt.plot(x,y,'r--') plt.plot(y,x, 'k*-') plt.xlabel('stuff here') plt.ylabel('more stuff') plt.title('lots of stuff'); ``` **Creating Multiplots on Same Canvas** ``` # The basic syntax goes like this: plt.subplot(nrows, ncols, plot_number) plt.subplot(3,1,1) plt.plot(x,y,'b') plt.subplot(3,1,2) plt.plot(x,y,'r--') plt.subplot(3,1,3) plt.plot(x,y,'g-x') ; ``` ___ ### Matplotlib Object Oriented Method Now that we've seen the basics, let's break it all down with a more formal introduction of Matplotlib's Object Oriented API. This means we will instantiate figure objects and then call methods or attributes from that object. In Matplotlib, the figure (an instance of the class `plt.Figure`) can be thought of as a single container that contains all the objects representing axes, graphics, text, and labels. The axes (an instance of the class `plt.Axes`) is what we see above: a bounding box with ticks and labels, which will eventually contain the plot elements that make up our visualization. We'll commonly use the variable name `fig` to refer to a figure instance, and `ax` to refer to an axes instance or group of axes instances. Once we have created an axes, we can use the `ax.plot` function to plot some data. 
https://jakevdp.github.io/PythonDataScienceHandbook/04.01-simple-line-plots.html

**The `.figure()` method**

To begin we create a figure instance. Then we can add axes to that figure:

```
# Create Figure (empty canvas)
fig = plt.figure()

# Add set of axes to figure
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # left, bottom, width, height (range 0 to 1)

# Plot on that set of axes
ax.plot(x,y,'b')
ax.set_xlabel('this label')
ax.set_ylabel('this label')
ax.set_title('this label');  # Notice the use of set_ to begin methods
```

The code is a little more complicated, but the advantage is that we now have full control of where the plot axes are placed, and we can easily add more than one axis to the figure:

```
# Creates blank canvas
# Larger Figure Axes 1
# Insert Figure Axes 2
```

**The `subplots()` method**

The `plt.subplots()` object will act as a more automatic axis manager. It is somewhat more common than using `.figure()`.

`plt.subplots()` is a function that returns a tuple containing a figure and axes object(s). Thus when using `fig, ax = plt.subplots()` you unpack this tuple into the variables `fig` and `ax`. Having `fig` is useful if you want to change figure-level attributes or save the figure as an image file later (e.g. with `fig.savefig('yourfilename.png')`). You certainly don't have to use the returned figure object, but many people do use it later, so it's common to see. Also, all axes objects (the objects that have plotting methods) have a parent figure object anyway, thus:

```
fig, ax = plt.subplots()
```

is more concise than this:

```
fig = plt.figure()
ax = fig.add_subplot(111)
```

https://stackoverflow.com/questions/34162443/why-do-many-examples-use-fig-ax-plt-subplots-in-matplotlib-pyplot-python

```
# Use similar to plt.figure() except use tuple unpacking to grab fig and axes
# The default is `(1, 1)` so you don't actually have to specify this.

# Now use the axes object to add stuff to plot
```

Then you can specify the number of rows and columns when creating the subplots() object:

```
# Empty canvas of 1 by 2 subplots
# the parameter labels `nrows` and `ncols` are optional.
# more about `.tight_layout()` below.

# Axes is an array of axes to plot on
```

We can iterate through this array:

```
# Display the figure object
```

A common issue with matplotlib is overlapping subplots or figures. We can use the **fig.tight_layout()** or **plt.tight_layout()** method, which automatically adjusts the positions of the axes on the figure canvas so that there is no overlapping content:

```
```

### Figure size, aspect ratio and DPI

Matplotlib allows the aspect ratio, DPI and figure size to be specified when the Figure object is created. You can use the `figsize` and `dpi` keyword arguments.
* `figsize` is a tuple of the width and height of the figure in inches
* `dpi` is the dots-per-inch (pixel per inch).

For example:

```
```

The same arguments can also be passed to layout managers, such as the `subplots` function:

```
```

## Saving figures

Matplotlib can generate high-quality output in a number of formats, including PNG, JPG, EPS, SVG, PGF and PDF. To save a figure to a file we can use the `savefig` method in the `Figure` class:

```
```

Here we can also optionally specify the DPI and choose between different output formats:

```
```

(The empty cells above are lesson placeholders; one possible way to fill them in is sketched just below.)

____
## Legends, labels and titles

Now that we have covered the basics of how to create a figure canvas and add axes instances to the canvas, let's look at how to decorate a figure with titles, axis labels, and legends.
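The empty cells in the `.figure()`/`subplots()`/figure-size/saving subsections above were left blank as exercises. One possible way to fill them in — a sketch, not the original author's solution — is a canvas with a main and an inset axes, a 1×2 subplot grid with `tight_layout`, an explicit `figsize`/`dpi`, and a `savefig` call; the filename is an arbitrary choice.

```
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 5, 11)
y = x**2

# Blank canvas with a larger main axes and a smaller inset axes.
fig = plt.figure()
ax_main = fig.add_axes([0.1, 0.1, 0.8, 0.8])   # left, bottom, width, height
ax_inset = fig.add_axes([0.2, 0.5, 0.4, 0.3])
ax_main.plot(x, y, 'b')
ax_inset.plot(y, x, 'r')

# A 1-by-2 grid of subplots; axes comes back as an array we can iterate over.
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(8, 3), dpi=100)
for ax in axes:
    ax.plot(x, y, 'g')
fig.tight_layout()   # avoid overlapping labels between the two subplots

# Save the last figure to disk; the format is inferred from the extension.
fig.savefig('example_figure.png', dpi=200)
```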
**Figure titles**

A title can be added to each axis instance in a figure. To set the title, use the `set_title` method in the axes instance:

```
```

**Axis labels**

Similarly, with the methods `set_xlabel` and `set_ylabel`, we can set the labels of the X and Y axes:

```
```

### Legends

You can use the **label="label text"** keyword argument when plots or other objects are added to the figure, and then use the **legend** method without arguments to add the legend to the figure:

```
```

The **legend** function takes an optional keyword argument **loc** that can be used to specify where in the figure the legend is to be drawn. The allowed values of **loc** are numerical codes for the various places the legend can be drawn. See the [documentation page](http://matplotlib.org/users/legend_guide.html#legend-location) for details. Some of the most common **loc** values are:

```
# Lots of options....
# Try replacing the `loc` value with integers 1 through 10.

# let matplotlib decide the optimal location
```

## Setting colors, linewidths, linetypes

Matplotlib gives you *a lot* of options for customizing colors, linewidths, and linetypes. There is the basic MATLAB-like syntax (which I would suggest you avoid, for clarity's sake):

### Colors with MATLAB-like syntax

With matplotlib, we can define the colors of lines and other graphical elements in a number of ways. First of all, we can use the MATLAB-like syntax where `'b'` means blue, `'g'` means green, etc. The MATLAB API for selecting line styles is also supported, where, for example, 'b.-' means a blue line with dots:

```
# MATLAB style line color and style
# blue line with dots
# green dashed line
```

### Colors with the color= parameter

We can also define colors by their names or RGB hex codes and optionally provide an alpha value using the `color` and `alpha` keyword arguments. Alpha indicates opacity.

```
# half-transparent
# RGB hex code
# RGB hex code
```

### Line and marker styles

To change the line width, we can use the `linewidth` or `lw` keyword argument. The line style can be selected using the `linestyle` or `ls` keyword arguments:

```
fig, ax = plt.subplots(figsize=(12,6))

ax.plot(x, x+1, color="red", linewidth=0.25)
ax.plot(x, x+2, color="red", linewidth=0.50)
ax.plot(x, x+3, color="red", linewidth=1.00)
ax.plot(x, x+4, color="red", linewidth=2.00)

# possible linestyle options: '-', '--', '-.', ':', 'steps'
ax.plot(x, x+5, color="green", lw=3, linestyle='-')
ax.plot(x, x+6, color="green", lw=3, ls='-.')
ax.plot(x, x+7, color="green", lw=3, ls=':')

# custom dash
line, = ax.plot(x, x+8, color="black", lw=1.50)
line.set_dashes([5, 10, 15, 10])  # format: line length, space length, ...

# possible marker symbols: marker = '+', 'o', '*', 's', ',', '.', '1', '2', '3', '4', ...
ax.plot(x, x+9, color="blue", lw=3, ls='-', marker='+')
ax.plot(x, x+10, color="blue", lw=3, ls='--', marker='o')
ax.plot(x, x+11, color="blue", lw=3, ls='-', marker='s')
ax.plot(x, x+12, color="blue", lw=3, ls='--', marker='1')

# marker size and color
ax.plot(x, x+13, color="purple", lw=1, ls='-', marker='o', markersize=2)
ax.plot(x, x+14, color="purple", lw=1, ls='-', marker='o', markersize=4)
ax.plot(x, x+15, color="purple", lw=1, ls='-', marker='o', markersize=8, markerfacecolor="red")
ax.plot(x, x+16, color="purple", lw=1, ls='-', marker='s', markersize=8,
        markerfacecolor="yellow", markeredgewidth=3, markeredgecolor="green");
```

### Control over axis appearance

In this section we will look at controlling axis sizing properties in a matplotlib figure.
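Likewise, the title, axis-label, and legend cells just above were left blank. A minimal sketch that combines `set_title`, `set_xlabel`/`set_ylabel`, per-line `label=` arguments, and `legend(loc=...)` might look like the following; the labels, colors, and `loc` value are arbitrary choices, not the lesson's intended answers.

```
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 5, 11)

fig, ax = plt.subplots()
ax.plot(x, x**2, color='#8B008B', lw=2, ls='--', label='x squared')
ax.plot(x, x**3, color='green', alpha=0.6, marker='o', label='x cubed')

ax.set_title('Title of the figure')
ax.set_xlabel('x')
ax.set_ylabel('f(x)')

# loc=0 lets matplotlib pick the "best" legend location; try the integers 1-10 as well.
ax.legend(loc=0)
plt.show()
```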
## Plot range We can configure the ranges of the axes using the `set_ylim` and `set_xlim` methods in the axis object, or `axis('tight')` for automatically getting "tightly fitted" axes ranges: ``` fig, axes = plt.subplots(1, 3, figsize=(12, 4)) axes[0].plot(x, x**2, x, x**3) axes[0].set_title("default axes ranges") axes[1].plot(x, x**2, x, x**3) axes[1].axis('tight') axes[1].set_title("tight axes") axes[2].plot(x, x**2, x, x**3) axes[2].set_ylim([0, 60]) axes[2].set_xlim([2, 5]) axes[2].set_title("custom axes range"); ``` ## Part 3. Imitate a real-world example Today we will reproduce this [example by FiveThirtyEight:](https://fivethirtyeight.com/features/al-gores-new-movie-exposes-the-big-flaw-in-online-movie-ratings/) ``` from IPython.display import display, Image url = 'https://fivethirtyeight.com/wp-content/uploads/2017/09/mehtahickey-inconvenient-0830-1.png' example = Image(url=url, width=400) display(example) ``` Using this data: https://github.com/fivethirtyeight/data/tree/master/inconvenient-sequel Links - [Strong Titles Are The Biggest Bang for Your Buck](http://stephanieevergreen.com/strong-titles/) - [Remove to improve (the data-ink ratio)](https://www.darkhorseanalytics.com/blog/data-looks-better-naked) - [How to Generate FiveThirtyEight Graphs in Python](https://www.dataquest.io/blog/making-538-plots/) ### Make fake prototypes This helps us understand the problem ``` # what styles are available in matplotlib? There's one for 538. plt.style.available plt.style.use('fivethirtyeight') # Create fake data to replicate the blog post figure. fakelist=[38, 3, 2, 1, 2, 4, 6, 5, 5, 33] fake = pd.Series(fakelist, index=range(1,11)) fake # display as bar fake.plot.bar(color='C1', width=0.9); display(example) fakelist2=[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10] fake2 = pd.Series(fakelist2) fake2.value_counts().sort_index().plot.bar(color='C1', width=0.9); ``` ### Annotate with text ``` fig, ax = plt.subplots() fig.patch.set(facecolor="white") # Set the bars fake.plot.bar(width=0.9, color="#ED713A") ax.set(facecolor="white") # Set the title and subtitle ax.set_title("'An Inconvenient Sequel: Truth To Power' is divisive", fontsize=12, x=.35, y=1.1, fontweight='bold') ax.text(s="IMDb ratings for the film as of Aug. 29", fontsize=12, x=-1.85, y=42); # Set the x and y axes labels ax.set_xlabel("Rating", fontsize=9) ax.set_ylabel("Percent of total votes", fontsize=9) # Fix the x an y axis tick marks and grid ax.set_xticklabels(range(1,11), rotation=45) ax.set_yticklabels(range(0,50,10), rotation=45) ax.set_yticks(range(0,50,5)); display(example) ``` ### Reproduce with real data Using this dataset relies on us making two discoveries: 1) The dataset shows logs of the data at different timestamps and these timestamps are cumulative, meaning that it has all of the ratings as earlier timestamps, but with the new ones added on top. 2) The dataset logs ratings breakdowns for a bunch of different demographic groups per timestamp. Once we realize these two things, we realize that we only really need 1 line of this dataset to make our graphic, it's the last line that holds the ratings for all IMDb users for the very last time stamp. 
``` # read the data from 538's github repo ratings = pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/inconvenient-sequel/ratings.csv') ratings.tail() # Convert timestamps strings to actual datetime objects ratings['timestamp1'] = pd.to_datetime(ratings['timestamp']) # check your work1 ratings[['timestamp', 'timestamp1']].sample(5) # check your work print(type(ratings['timestamp'][0])) print(type(ratings['timestamp1'][0])) # Use the timestamp as the unique index identifier # so that we can select rows by timestamp ratings.set_index('timestamp1', inplace=True, drop=True) ratings.head() # grab only the rows corresponding to the last day lastday = ratings['2017-08-29'] print(ratings.shape) print(lastday.shape) # val counts lastday['category'].value_counts().sort_index() # get the demographic breakdowns for all IMDb users on the last day lastday_filtered = lastday[lastday['category']=='IMDb users'] lastday_filtered.tail() # just grab the very last line (latest timestamp) of IMDb user ratings # this should be the most up to date data from the dataset final = lastday_filtered.tail(1) final # transpose that final.T # Grab only the percentage columns since we don't care about the raw # counts in making our graph pct_columns = ['1_pct', '2_pct', '3_pct', '4_pct', '5_pct', '6_pct', '7_pct', '8_pct', '9_pct', '10_pct'] final[pct_columns].T # Reset the index so that it's numeric again # and rename the percent column for easy access in our plotting plot_data = final[pct_columns].T plot_data.index=range(1,11) plot_data.columns =['percent'] plot_data ``` **generate the figure** ``` fig, ax = plt.subplots() # Figure background color fig.patch.set(facecolor="white") # Set the bars ax.bar(x=range(1,11), height=plot_data['percent'], width=0.9, color="#ED713A") # Axes background color ax.set(facecolor="white") # Set the title and subtitle ax.set_title("'An Inconvenient Sequel: Truth To Power' is divisive", fontsize=12, x=.35, y=1.1, fontweight='bold') ax.text(s="IMDb ratings for the film as of Aug. 29", fontsize=12, x=-1.25, y=42) # Set the x and y axes labels ax.set_xlabel("Rating", fontsize=9) ax.set_ylabel("Percent of total votes", fontsize=9) # Fix the x an y axis tick marks and grid ax.set_xticklabels(range(1,11), rotation=0) ax.set_xticks(range(1,11)) ax.set_yticklabels(range(0,50,10)) ax.set_yticks(range(0,50,10)); display(example) ```
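To keep a copy of the reproduced chart, the finished `fig` from the cell above can be written to disk with `savefig`, as covered in the earlier matplotlib section; the filename and DPI here are arbitrary, and this snippet assumes it runs right after the cell that defines `fig`.

```
# Export the reproduced FiveThirtyEight-style figure; bbox_inches='tight' trims extra margin.
fig.savefig('inconvenient_sequel_ratings.png', dpi=150, bbox_inches='tight')
```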