## Exercise: Wrangling Data: Acquisition, Integration, and Exploration

For this lab's exercise we are going to answer a few questions about AirBnB listings in San Francisco in order to make better-informed civic decisions. Spurred by Prop F in San Francisco, imagine you are the mayor of SF (or of your own city) and you need to decide what impact AirBnB has had on your housing situation. We will collect the relevant data, parse and store it in a structured form, and use statistics and visualization to both better understand our own city and potentially communicate these findings to the public at large.

> I will explore SF's data, but the techniques should be generally applicable to any city. Inside AirBnB has many interesting cities to further explore: http://insideairbnb.com/

## Outline

* Start with Effective Questions
    * Intro + Data Science Overview
    * Proposition F
    * How can we answer this?
* Acquiring Data
    * What's an API? (Zillow API, SF Open Data, datausa.io)
    * How the Web Works (Socrata API)
    * What if there is no API?
        * Scrape an AirBnB listing
    * What to do now that we have data?
    * Basics of HTML (CSS selectors and grabbing what you want)
    * Use `lxml` to parse web pages
* Storing Data
    * Schemas and Structure
    * Relations (users, listings, and reviews)
    * Store listings in SQLite
* Manipulating Data
    * Basics of Pandas
    * Summary stats
    * Split-apply-combine
    * Aggregations
    * Prop F. revenue lost
* Exploratory Data Analysis
    * Inside AirBnB
    * Why visual?
    * Chart types (visualizing continuous, categorical, and distributions and facets)
    * Distributions of Prop F. revenue vs. point statistics

# Visualize

### Time to visualize!

Using pandas (and matplotlib), create a visualization of each of the following (sketches for the last three are included at the end of this section):

* Distribution of room_type (for the entire city)
* Histogram of # of listings per neighborhood
* Histogram of # of listings for each user
* City-wide distribution of listing price
* Distribution of median listing price per neighborhood
* Histogram of number of reviews per listing

```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%pylab inline

# We will use the Inside AirBnB dataset from here on
df = pd.read_csv('data/sf_listings.csv')
df.head()

df.room_type.value_counts().plot.bar()

# Since SF doesn't have many neighborhoods (comparatively) we can also see the raw count per neighborhood
df.groupby('neighbourhood').count()['id'].plot.bar(figsize=(14,6))

df.groupby('host_id').count()['id'].plot.hist(bins=50)

# let's zoom in to the tail
subselect = df.groupby('host_id').count()['id']
subselect[subselect > 1].plot.hist(bins=50)

def scale_free_plot(df, num):
    subselect = df.groupby('host_id').count()['id']
    return subselect[subselect > num].plot.hist(bins=75)

scale_free_plot(df, 2)

# the shape of the distribution stays relatively the same as we subselect
for i in range(5):
    scale_free_plot(df, i)
    plt.show()
```

### Scatterplot Matrix

In an effort to find potential correlations (or outliers) you want a slightly more fine-grained look at the data. Create a scatterplot matrix of the data for your city.

http://pandas.pydata.org/pandas-docs/stable/visualization.html#visualization-scatter-matrix

```
# note: in pandas >= 0.25 this import lives at pandas.plotting instead of pandas.tools.plotting
from pandas.tools.plotting import scatter_matrix

# it only makes sense to plot the continuous columns
continuous_columns = ['price', 'minimum_nights', 'number_of_reviews', 'reviews_per_month', \
                      'calculated_host_listings_count', 'availability_365']

# the semicolon prevents the axis objects from printing
scatter_matrix(df[continuous_columns], alpha=0.6, figsize=(16, 16), diagonal='kde');
```

#### Interesting insights from the scatter matrix:

* `price` is heavily skewed towards cheap prices (with a few extreme outliers). `calculated_host_listings_count` and `number_of_reviews` have similar distributions.
* `minimum_nights` has a sharp bimodal distribution.
* Listings are bimodal in availability too, and are either:
    * available for a relatively short period of the year, or
    * available for most of it (these are probably the ___"hotels"___)
* Hosts with a large number of listings price each of them relatively low.
* Expensive listings have very few reviews (i.e. not many people stay at them)

```
sns.distplot(df[(df.calculated_host_listings_count > 2) & (df.room_type == 'Entire home/apt')].availability_365, bins=50)

sns.distplot(df[(df.calculated_host_listings_count <= 2) & (df.room_type == 'Entire home/apt')].availability_365, bins=50)

# The availability distribution for hosts with multiple entire-home listings is skewed towards the entire year,
# implying that these hosts are renting the AirBnB as short term sublets (or hotels)
entire_home = df[df.room_type == 'Entire home/apt']
plt.figure(figsize=(14,6))
sns.kdeplot(entire_home[entire_home.calculated_host_listings_count > 1].availability_365, label='Multiple Listings')
sns.kdeplot(entire_home[entire_home.calculated_host_listings_count == 1].availability_365, label='Single Listing')
plt.legend();

# Listings with a 30+ night minimum stay show the same skew towards year-round availability
plt.figure(figsize=(14,6))
sns.kdeplot(df[df.minimum_nights > 29].availability_365, label='Short term Sublet')
sns.kdeplot(df[df.minimum_nights <= 20].availability_365, label='Listing')
plt.legend();

# Among listings with a 30+ night minimum stay, compare hosts with multiple listings to single-listing hosts
entire_home = df[df.minimum_nights > 29]
plt.figure(figsize=(14,6))
sns.kdeplot(entire_home[entire_home.calculated_host_listings_count > 1].availability_365, label='Multiple Listings')
sns.kdeplot(entire_home[entire_home.calculated_host_listings_count == 1].availability_365, label='Single Listing')
plt.legend();
```

# Extra!

## Advanced Plots with Seaborn

### Make a violin plot of the price distribution of each neighborhood.

> If your city has a large number of neighborhoods, plot the 10 with the most listings.

```
# just a touch hard to interpret...
plt.figure(figsize=(16, 6))
sns.violinplot(data=df, x='neighbourhood', y='price')

# boxplots can sometimes handle outliers better; we can see here that some listings are high-priced extrema
plt.figure(figsize=(16, 6))
sns.boxplot(data=df, x='neighbourhood', y='price')
```

Let's show only the 10 neighborhoods with the most listings and, to zoom in on the distribution of the lower prices (now that we can identify the outliers), remove listings priced above $2000.

```
top_neighborhoods = df.groupby('neighbourhood').count().sort_values('id', ascending=False).index[:10]
top_neighborhoods

neighborhood_subset = df[df.neighbourhood.isin(top_neighborhoods)]

plt.figure(figsize=(16, 6))
sns.boxplot(data=neighborhood_subset[neighborhood_subset.price < 2000], x='neighbourhood', y='price')

plt.figure(figsize=(16, 6))
sns.violinplot(data=neighborhood_subset[neighborhood_subset.price < 2000], x='neighbourhood', y='price')
```

### Boxplots vs. Violinplots

* Boxplot
    * Can be easier to interpret (a visual representation of point statistics, the quartiles)
    * Shows individual outlier data points (a violinplot only shows the range of outliers, the max values)
    * More common and familiar
    * Can be easier to compare a large number of boxplots
    * Can be easier to compare the spread (upper quartile minus lower quartile) of the values
* Violinplot
    * Shows more information (a distribution rather than point statistics)
    * Will show bimodality (or a more complex distribution of values, which a boxplot collapses to one number)
    * Shows the density of values
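The code above covers the first three plots from the exercise list; the remaining three are not shown. A minimal sketch of them, assuming the same Inside AirBnB `df` and the column names used above (`price`, `neighbourhood`, `number_of_reviews`):

```
# City-wide distribution of listing price (trim the extreme outliers so the bulk of the distribution is visible)
df[df.price < 2000].price.plot.hist(bins=50)
plt.show()

# Distribution of median listing price per neighborhood
df.groupby('neighbourhood')['price'].median().sort_values(ascending=False).plot.bar(figsize=(14, 6))
plt.show()

# Histogram of the number of reviews per listing
df.number_of_reviews.plot.hist(bins=50)
plt.show()
```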
# Lesson 03

## Python's primitive types, data input, and output formatting

A **type** is a set of values equipped with a set of operations. For example, the integer type has integer values, and we can perform addition, subtraction, multiplication, and division on them.

**Primitive types** are the types the language already provides for use and that do not need to be defined. There are also user-defined types, which we will see later.

### Dynamic vs. static typing

One of the main points made by Python's advocates is its ease of use. Among the language properties that contribute to this is so-called `dynamic typing`: Python programmers do not need to declare a variable's type in advance. The type is inferred when a value is assigned to the variable. Moreover, depending on the values assigned to it, the same variable can take on different types within the same program.

On the other hand, `statically typed` languages, such as Java and C, require the type to be declared beforehand, and the variable keeps that type until the end of the program. Let's compare the two approaches:

```python
/* C code */
int cont;
cont = 0;
for(int i=0; i<100; i++){
    cont += i;
}

# Python code
cont = 0
for i in range(100):
    cont += i
```

Note that while C declares the variable ```cont``` (`int cont;`), Python assigns 0 to ```cont``` without declaring its type.

In the example below, two assignments are made to a variable ```val```. In the first, ```val``` receives a numeric value and takes on a numeric type. In the second, ```val``` receives a text value and takes on a text type.

```python
# Python code
val = 3
val = 'três'
```

If we tried to do the same in C, the compiler would report an error, because the type of ```val``` must remain the same throughout the program.

```python
/* C code */
int val;
val = 3;
val = "três"; // Error: the type of val was declared as int and cannot receive a string
```

#### There is no free lunch

But if dynamic typing is so good, since it simplifies writing code, why don't all languages follow the same path? The answer: because they want stronger guarantees that the code is correct. Indeed, in statically typed languages there is no risk that the programmer accidentally assigns a value of a type different from the one the variable expects, so there is greater consistency throughout the program in how that variable is used. Furthermore, by having to define a type for each variable or function, the programmer is forced to think more carefully about the properties and intended uses of that variable or function. This exercise has the benefit of producing more correct, higher-quality code.

In short, static and dynamic typing each have advantages and disadvantages, and these trade-offs should be kept in perspective when choosing a programming language.

### Numeric types

There are two numeric types:

* integers: ... -2, -1, 0, 1, 2 ...
* floats:
    - 2.24
    - 32.2E-5 (the E notation indicates a power of 10: 32.2 * 10^-5)

In Python there are no different sizes for numeric types, as there are in other languages such as C and Java.

### Strings

Some examples of string usage are:

- first and last names
- usernames
- passwords
- postal addresses
- email addresses
- messages for the user

Let's now apply some functions for manipulating strings. When used on strings, the **+** operator concatenates them (a quick sketch follows below).
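A minimal sketch of `+` concatenation, to make the point concrete (the names here are placeholders, not cells from the original notebook):

```python
nome = "Maria"
sobrenome = "Silva"

# + joins the two strings; the space has to be added explicitly
nome_completo = nome + " " + sobrenome
print(nome_completo)   # Maria Silva
```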
The operator was used in the program below, which should print the message:

*Bom dia, Augusto!*
*Augusto, o que você deseja hoje?*

Test the program. If something goes wrong, fix it so that it prints the desired message.

There is a wide variety of functions available for manipulating strings. To browse them, use the autocomplete option, which can be enabled in Jupyter with the following command:

```
%config IPCompleter.greedy=True
```

Now just type the name of the variable that holds the string, followed by a dot, and press the Tab key.

### Output formatting

Let's now study more sophisticated ways to format a program's output using the `format()` method. Until now, we have used the following structure:

This is not a very convenient way to format output, because the programmer needs to:

1. Include the concatenation operators
2. Be careful to insert spaces correctly
3. Explicitly convert numeric types to strings

The `format` method helps the programmer avoid these difficulties. The following code produces the same effect:

Note that:

1. the index `{0}` starts at 0;
2. the call `.format(nome, idade)` already converts the numeric value stored in `idade` to a string.

Some alternative ways to use `format`:

Printing with regional (locale) patterns:

The `print` command always adds a line break at the end. To avoid this, you can use the `end` parameter.

### Data input

Data input lets the user supply values to the program. The values provided by the user can be stored in variables, for example:

The `input` command always reads the typed value as a string. If you check the type of `operando1` and `operando2`:

To read int and float values, we have to force the conversion:

### Exercise

Write a program that asks for a person's name, age (integer), and weight (float) and prints a message with that information. The weight must be printed with two decimal places (use a period as the decimal separator).

- *Ex: José, 25 anos, pesa 72,18 kg!*
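A minimal sketch of the `format()` and `input()` usage described above, plus one possible take on the exercise (variable names and message wording are assumptions, not the notebook's own cells):

```python
# Concatenation style: explicit spaces and str() conversion are needed
nome = "Augusto"
idade = 25
print("Olá, " + nome + "! Você tem " + str(idade) + " anos.")

# The same output with format(): indices start at 0 and numbers are converted automatically
print("Olá, {0}! Você tem {1} anos.".format(nome, idade))

# Suppressing print's trailing line break with the end parameter
print("Olá, ", end="")
print(nome)

# input() always returns a string, so int/float values must be converted explicitly
nome = input("Nome: ")
idade = int(input("Idade: "))
peso = float(input("Peso: "))

# Exercise sketch: weight printed with two decimal places
print("{0}, {1} anos, pesa {2:.2f} kg!".format(nome, idade, peso))
```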
# `str` - Character strings

* Objects of type `str`
* An **IMMUTABLE** sequence of characters
* Internal representation: [UNICODE](https://en.wikipedia.org/wiki/Unicode)
* Literal values can be written:
    * `'...'` between single quotes
    * `"..."` between double quotes
    * `'''...'''` `"""..."""` between triple single/double quotes

```
a = 'Hemen "komatxo bikoitzak" sar daitezke'
b = "Hemen 'komatxo sinpleak' sar daitezke"
c = """Hemen ilara bat baina gehiago sar daiteke, eta edozein "motetako" 'komatxoak' ere"""

print(a)
print(b)
print(c)
```

If more than one string literal appears separated only by whitespace, together they form a single literal:

```
a = "karaktere" "kate" "bat" "baina" "hutsunerik" "gabe"
print(a)

a = "karakterekatebatbainahutsunerikgabe"
print(a)
```

Ways to create a literal that contains both single and double quotes:

```
a = 'Orain "komatxo bikoitzak"' " eta 'komatxo sinpleak' sar daitezke!"
print(a)

a = """Baina beste aukera "hau" 'askozaz' argiagoa da..."""
print(a)
```

## Properties of character strings

* Immutable
* Indexable &rarr; the `[]` operator
    * ```python
      "karaktere katea"[5]
      ```
* Iterable &rarr; the `for` control structure
    * ```python
      for i in "karaktere katea" :
      ```

## Indexing

* *Usually*, the length of an indexable object can be queried:

```
a = "karaktere katea"
luz = len(a)
luz
```

* *Usually*, indices are integers.
* Indexing starts at `0`:
    * `a[0]`, `a[1]`, ..., `a[luz-1]`
* Negative indices index backwards from the end:
    * `a[-i]` &rarr; `a[luz-i]`
    * `a[-1], a[-2], ..., a[-luz]`
    * `a[luz-1], a[luz-2], ..., a[0]`

```
a = "kaixo, zer moduz?"
luz = len(a)

print("katea:", a)
print("luzera:", luz)
print("a[0]:", a[0])
print("a[-luz]:", a[-luz])
print("a[luz-1]:", a[luz-1])
print("a[-1]:", a[-1])

print(type(a[0]), len(a[0]))

a = "kaixo, zer moduz?"
for i in range(len(a)) :
    print(type(a[i]), a[i])
```

Indices outside the range `[-luz, luz-1]` raise errors:

```
a = "kaixo, zer moduz?"
print(a[100])
print(a[-100])
```

Remember: character strings are **immutable**, and therefore

* they **CANNOT BE MODIFIED**:

```
a = "Kaixo"
b = a[0]
print(b)

a[0] = "X"
```

## *Slice* notation

A way to refer to more than one element of an indexable object:

* `a[i:j]` : the subsequence of `a` from index `i` (inclusive) to `j` (exclusive).
* negative `i` or `j` &rarr; `len(a)-i`, `len(a)-j`
* `i` or `j` > `len(a)` &rarr; `len(a)`
* `i` omitted or `None` &rarr; `0`
* `j` omitted or `None` &rarr; `len(a)`
* `a[i:j:k]` : the subsequence of `a` from `i` (inclusive) to `j` (exclusive), with step `k`.
* `a[i:j]` and `a[i:j:k]` create new objects... **almost always**.

```
a = "kaixo denoi!!!"
a[::4]

a = "abcdefghijklmnñopqrstuvwxyz"
print(a[0:5])
print(a[5:len(a)])
print(a[:5])
print(a[5:])
print(a[:])
print(a==a[:], a is a[:])

a = "abcdefghijklmnñopqrstuvwxyz"
print(a[0:len(a):2])
print(a[::2])
print('froga1:', a[0:len(a):-1])
print('froga2:', a[len(a)::-1])
print(a[::-1])
```

## Iterability

* Iterable objects can be traversed with a `for` control structure.
* When we traverse a character string, we get its characters:

```
a = "Karaktere kate bat zeharkatzen..."
for kar in a :
    print(kar, end="|")
```

* The characters are themselves obtained as character strings:

```
for x in "aeiou" :
    print("Balioa:", x, "Datu mota:", type(x))

a = "aeiou"
print("Balioa:", a[2], "Datu mota:", type(a[2]))
```

## String operators

* `a==b`, `a!=b`, `a>b`, `a>=b`, `a<b`, `a<=b` &rarr; `bool` : *"alphabetical"* comparison (based on UNICODE, **be careful...**)
* `a + b` &rarr; `str` : concatenation
* `a * 4` , `4 * a` &rarr; `str` : repetition
* `a in b`, `a not in b` &rarr; `bool` : whether `a` occurs inside `b`
* `a is b` &rarr; `bool` : whether `a` is the same object as `b`

```
a = "Azaroa"
b = "Abendua"
c = "azaroa"

print(b < a, a != c, a < c, "A" < "a", "ñ" > "p", "é" > "z")
print("mendi" in "Errotamendi", "Mendi" not in "Errotamendi")
print("Kontuz konparaketa alfabetikoekin:", "n" < "o", "ñ" < "o")
```

## String methods (functions)

* Each object type can have its own set of methods.
* These methods are invoked as `objektua.metodo_izena` (i.e. `object.method_name`).
* Character strings have [about 45 methods](https://docs.python.org/3.5/library/stdtypes.html#string-methods)...

```
a = "egunon, kaixo kaixo!! "
print(a.upper(), "|", a.capitalize(), "|", a.title())
print("xo", a.find("xo"), "posizioan dago eta", a.count("xo"), "aldiz agertzen da")
print("|" + a.strip() + "|")
print(a.split())
print(a.replace(" ", "_"))
print("_".join(a.split()))

a = "kaixo aixo aixo"
a.count("aisdh")
a.endswith("txt")
a.find('aixo')
a.find('sdfyhsfghs')
a.index('aixo')
#a.index('sdfyhsfghs')
" ".join(['bat','bi','hiru'])

a = " Kaixo kaixo Zelan ÑÜÁÉÍÓÚ? "
a.lower()
a.rstrip()
a.lstrip()
a.strip()
a.replace("ai","____")
a.replace("ai","____",1)
a.split()
#a.split("ai")
a.upper()
a.startswith(" Kaixo")
```

<table border="0" width="100%" style="margin: 0px;">
<tr>
<td style="text-align:left"><a href="Argumentuak eta defektuzko balioak.ipynb">&lt; &lt; Argumentuak eta defektuzko balioak &lt; &lt;</a></td>
<td style="text-align:right"><a href="Fitxategiak.ipynb">&gt; &gt; Fitxategiak &gt; &gt;</a></td>
</tr>
</table>
``` import HARK.ConsumptionSaving.ConsumerParameters as Params from time import process_time import numpy as np import matplotlib.pyplot as plt from HARK.utilities import plotFuncs from HARK.ConsumptionSaving.ConsAggShockModel import ( AggShockConsumerType, CobbDouglasEconomy, AggShockMarkovConsumerType, CobbDouglasMarkovEconomy, ) from copy import deepcopy def mystr(number): return "{:.4f}".format(number) # Solve an AggShockConsumerType's microeconomic problem solve_agg_shocks_micro = False # Solve for the equilibrium aggregate saving rule in a CobbDouglasEconomy solve_agg_shocks_market = True # Solve an AggShockMarkovConsumerType's microeconomic problem solve_markov_micro = False # Solve for the equilibrium aggregate saving rule in a CobbDouglasMarkovEconomy solve_markov_market = True # Solve a simple Krusell-Smith-style two state, two shock model solve_krusell_smith = True # Solve a CobbDouglasEconomy with many states, potentially utilizing the "state jumper" solve_poly_state = False ``` ### Example impelementation of AggShockConsumerType ``` if solve_agg_shocks_micro or solve_agg_shocks_market: # Make an aggregate shocks consumer type AggShockExample = AggShockConsumerType() AggShockExample.cycles = 0 # Make a Cobb-Douglas economy for the agents EconomyExample = CobbDouglasEconomy(agents=[AggShockExample]) EconomyExample.makeAggShkHist() # Simulate a history of aggregate shocks # Have the consumers inherit relevant objects from the economy AggShockExample.getEconomyData(EconomyExample) if solve_agg_shocks_micro: # Solve the microeconomic model for the aggregate shocks example type (and display results) t_start = process_time() AggShockExample.solve() t_end = process_time() print( "Solving an aggregate shocks consumer took " + mystr(t_end - t_start) + " seconds." ) print( "Consumption function at each aggregate market resources-to-labor ratio gridpoint:" ) m_grid = np.linspace(0, 10, 200) AggShockExample.unpackcFunc() for M in AggShockExample.Mgrid.tolist(): mMin = AggShockExample.solution[0].mNrmMin(M) c_at_this_M = AggShockExample.cFunc[0](m_grid + mMin, M * np.ones_like(m_grid)) plt.plot(m_grid + mMin, c_at_this_M) plt.ylim(0.0, None) plt.show() if solve_agg_shocks_market: # Solve the "macroeconomic" model by searching for a "fixed point dynamic rule" t_start = process_time() print( "Now solving for the equilibrium of a Cobb-Douglas economy. This might take a few minutes..." ) EconomyExample.solve() t_end = process_time() print( 'Solving the "macroeconomic" aggregate shocks model took ' + str(t_end - t_start) + " seconds." 
) print("Aggregate savings as a function of aggregate market resources:") plotFuncs(EconomyExample.AFunc, 0, 2 * EconomyExample.kSS) print( "Consumption function at each aggregate market resources gridpoint (in general equilibrium):" ) AggShockExample.unpackcFunc() m_grid = np.linspace(0, 10, 200) AggShockExample.unpackcFunc() for M in AggShockExample.Mgrid.tolist(): mMin = AggShockExample.solution[0].mNrmMin(M) c_at_this_M = AggShockExample.cFunc[0](m_grid + mMin, M * np.ones_like(m_grid)) plt.plot(m_grid + mMin, c_at_this_M) plt.ylim(0.0, None) plt.show() ``` ### Example Implementations of AggShockMarkovConsumerType ``` if solve_markov_micro or solve_markov_market or solve_krusell_smith: # Make a Markov aggregate shocks consumer type AggShockMrkvExample = AggShockMarkovConsumerType() AggShockMrkvExample.IncomeDstn[0] = 2 * [AggShockMrkvExample.IncomeDstn[0]] AggShockMrkvExample.cycles = 0 # Make a Cobb-Douglas economy for the agents MrkvEconomyExample = CobbDouglasMarkovEconomy(agents=[AggShockMrkvExample]) MrkvEconomyExample.DampingFac = 0.2 # Turn down damping MrkvEconomyExample.makeAggShkHist() # Simulate a history of aggregate shocks AggShockMrkvExample.getEconomyData( MrkvEconomyExample ) # Have the consumers inherit relevant objects from the economy if solve_markov_micro: # Solve the microeconomic model for the Markov aggregate shocks example type (and display results) t_start = process_time() AggShockMrkvExample.solve() t_end = process_time() print( "Solving an aggregate shocks Markov consumer took " + mystr(t_end - t_start) + " seconds." ) print( "Consumption function at each aggregate market \ resources-to-labor ratio gridpoint (for each macro state):" ) m_grid = np.linspace(0, 10, 200) AggShockMrkvExample.unpackcFunc() for i in range(2): for M in AggShockMrkvExample.Mgrid.tolist(): mMin = AggShockMrkvExample.solution[0].mNrmMin[i](M) c_at_this_M = AggShockMrkvExample.cFunc[0][i]( m_grid + mMin, M * np.ones_like(m_grid) ) plt.plot(m_grid + mMin, c_at_this_M) plt.ylim(0.0, None) plt.show() if solve_markov_market: # Solve the "macroeconomic" model by searching for a "fixed point dynamic rule" t_start = process_time() print("Now solving a two-state Markov economy. This should take a few minutes...") MrkvEconomyExample.solve() t_end = process_time() print( 'Solving the "macroeconomic" aggregate shocks model took ' + str(t_end - t_start) + " seconds." 
) print( "Consumption function at each aggregate market \ resources-to-labor ratio gridpoint (for each macro state):" ) m_grid = np.linspace(0, 10, 200) AggShockMrkvExample.unpackcFunc() for i in range(2): for M in AggShockMrkvExample.Mgrid.tolist(): mMin = AggShockMrkvExample.solution[0].mNrmMin[i](M) c_at_this_M = AggShockMrkvExample.cFunc[0][i]( m_grid + mMin, M * np.ones_like(m_grid) ) plt.plot(m_grid + mMin, c_at_this_M) plt.ylim(0.0, None) plt.show() if solve_krusell_smith: # Make a Krusell-Smith agent type # NOTE: These agents aren't exactly like KS, as they don't have serially correlated unemployment KSexampleType = deepcopy(AggShockMrkvExample) KSexampleType.IncomeDstn[0] = [ [np.array([0.96, 0.04]), np.array([1.0, 1.0]), np.array([1.0 / 0.96, 0.0])], [np.array([0.90, 0.10]), np.array([1.0, 1.0]), np.array([1.0 / 0.90, 0.0])], ] # Make a KS economy KSeconomy = deepcopy(MrkvEconomyExample) KSeconomy.agents = [KSexampleType] KSeconomy.AggShkDstn = [ [np.array([1.0]), np.array([1.0]), np.array([1.05])], [np.array([1.0]), np.array([1.0]), np.array([0.95])], ] KSeconomy.PermGroFacAgg = [1.0, 1.0] KSexampleType.getEconomyData(KSeconomy) KSeconomy.makeAggShkHist() # Solve the K-S model t_start = process_time() print( "Now solving a Krusell-Smith-style economy. This should take about a minute..." ) KSeconomy.solve() t_end = process_time() print("Solving the Krusell-Smith model took " + str(t_end - t_start) + " seconds.") if solve_poly_state: StateCount = 15 # Number of Markov states GrowthAvg = 1.01 # Average permanent income growth factor GrowthWidth = 0.02 # PermGroFacAgg deviates from PermGroFacAgg in this range Persistence = 0.90 # Probability of staying in the same Markov state PermGroFacAgg = np.linspace( GrowthAvg - GrowthWidth, GrowthAvg + GrowthWidth, num=StateCount ) # Make the Markov array with chosen states and persistence PolyMrkvArray = np.zeros((StateCount, StateCount)) for i in range(StateCount): for j in range(StateCount): if i == j: PolyMrkvArray[i, j] = Persistence elif (i == (j - 1)) or (i == (j + 1)): PolyMrkvArray[i, j] = 0.5 * (1.0 - Persistence) PolyMrkvArray[0, 0] += 0.5 * (1.0 - Persistence) PolyMrkvArray[StateCount - 1, StateCount - 1] += 0.5 * (1.0 - Persistence) # Make a consumer type to inhabit the economy PolyStateExample = AggShockMarkovConsumerType() PolyStateExample.MrkvArray = PolyMrkvArray PolyStateExample.PermGroFacAgg = PermGroFacAgg PolyStateExample.IncomeDstn[0] = StateCount * [PolyStateExample.IncomeDstn[0]] PolyStateExample.cycles = 0 # Make a Cobb-Douglas economy for the agents # Use verbose=False to remove printing of intercept PolyStateEconomy = CobbDouglasMarkovEconomy(agents=[PolyStateExample], verbose=False) PolyStateEconomy.MrkvArray = PolyMrkvArray PolyStateEconomy.PermGroFacAgg = PermGroFacAgg PolyStateEconomy.PermShkAggStd = StateCount * [0.006] PolyStateEconomy.TranShkAggStd = StateCount * [0.003] PolyStateEconomy.slope_prev = StateCount * [1.0] PolyStateEconomy.intercept_prev = StateCount * [0.0] PolyStateEconomy.update() PolyStateEconomy.makeAggShkDstn() PolyStateEconomy.makeAggShkHist() # Simulate a history of aggregate shocks PolyStateExample.getEconomyData( PolyStateEconomy ) # Have the consumers inherit relevant objects from the economy # Solve the many state model t_start = process_time() print( "Now solving an economy with " + str(StateCount) + " Markov states. This might take a while..." 
) PolyStateEconomy.solve() t_end = process_time() print( "Solving a model with " + str(StateCount) + " states took " + str(t_end - t_start) + " seconds." ) ```
# Geometric objects - Spatial data model **Sources and Credits:** These materials are directly taken from [Intro to Python Course by CSC Finland](https://automating-gis-processes.github.io/CSC/notebooks/L1/geometric-objects.html) authored by [HTenkanen](https://github.com/HTenkanen). Those materials were partly based on [Shapely-documentation](https://shapely.readthedocs.io/en/latest/manual.html) and [Westra E. (2013), Chapter 3](https://www.packtpub.com/application-development/python-geospatial-development-second-edition). We made some extensions to the original text. Those are is clearly indicated (via "By:"). ## Overview of geometric objects and Shapely package ![Spatial data model](images/spatialdatamodel.png) *Fundamental geometric objects that can be used in Python with the* [Shapely](https://shapely.readthedocs.io/en/latest/manual.html) *package* The most fundamental geometric objects are `Points`, `Lines` and `Polygons` which are the basic ingredients when working with spatial data in vector format. Python has a specific package called [Shapely](https://toblerity.org/shapely/manual.html) that can be used to create and work with `Geometric Objects`. There are many useful functionalities that you can do with Shapely such as: - Create a `Line` or `Polygon` from a `Collection` of `Point` geometries - Calculate areas/length/bounds etc. of input geometries - Conduct geometric operations based on the input geometries such as `Union`, `Difference`, `Distance` etc. - Conduct spatial queries between geometries such `Intersects`, `Touches`, `Crosses`, `Within` etc. **Geometric Objects consist of coordinate tuples where:** - `Point` object representing a single point in space. Points can be either two-dimensional (x, y) or three dimensional (x, y, z). - `LineString` object (i.e. a line) representing a sequence of points joined together to form a line. Hence, a line consist of a list of at least two coordinate tuples - `Polygon` object representing a filled area that consists of a list of at least three coordinate tuples that forms the outerior ring and a (possible) list of hole polygons. **It is also possible to have a collection of geometric objects (e.g. Polygons with multiple parts):** - `MultiPoint`: object representing a collection of points and consists of a list of coordinate-tuples - `MultiLineString`: object representing a collection of lines and consists of a list of line-like sequences - `MultiPolygon`: object representing a collection of polygons that consists of a list of polygon-like sequences that construct from exterior ring and (possible) hole list tuples ## Point - Creating point is easy, you pass x and y coordinates into a `Point()` object (+ possibly also z -coordinate): ``` # Import necessary geometric objects from Shapely package from shapely.geometry import Point, LineString, Polygon # Create Point geometric object(s) with coordinates point1 = Point(2.2, 4.2) point2 = Point(7.2, -25.1) point3 = Point(9.26, -2.456) point3D = Point(9.26, -2.456, 0.57) # What is the type of the point? point_type = type(point1) ``` - Let's see what the variables look like ``` print(point1) print(point3D) print(type(point1)) ``` We can see that the type of the point is a Shapely [Point](https://shapely.readthedocs.io/en/stable/manual.html#points) which is represented in a specific format that is based on [GEOS](https://trac.osgeo.org/geos) C++ library that is one of the standard libraries in GIS. It runs under the hood e.g. in [QGIS](https://www.qgis.org). 
3D-point can be recognized from the capital Z -letter in front of the coordinates. ### Point attributes and functions Point -object has some built-in attributes that can be accessed and also some useful functionalities. One of the most useful ones are the ability to extract the coordinates of a Point and calculate the Euclidian distance between points. - Extracting the coordinates of a Point can be done in a couple of different ways: ``` # Get the coordinates point_coords = point1.coords # What is the type of this? type(point_coords) ``` As we can see, the data type of our `point_coords` variable is a Shapely [CoordinateSequence](https://shapely.readthedocs.io/en/stable/manual.html#coordinate-sequences). - Let's see how we can get out the actual coordinates from this object: ``` # Get x and y coordinates xy = point_coords.xy # Get only x coordinates of Point1 x = point1.x # Whatabout y coordinate? y = point1.y # Print out print(f'xy variable: {xy}') print(f'x variable: {x}') print(f'y variable: {y}') ``` As we can see from above the `xy` -variable contains a tuple where x and y coordinates are stored inside numpy arrays. Using the attributes `point1.x` and `point1.y` it is possible to get the coordinates directly as plain decimal numbers. - It is also possible to calculate the distance between points which can be useful in many applications. The returned distance is based on the projection of the points (e.g. degrees in WGS84, meters in UTM): ``` # Calculate the distance between point1 and point2 point_dist = point1.distance(point2) print(f'Distance between the points is {point_dist:.2f} decimal degrees') ``` ## About Projections and Shapely By: [justb4](https://github.com/Justb4) In Shapely, the distance is the Euclidean Distance or Linear distance (Pythagoras Law!) between two points on a plane and not the [Great-circle distance](https://en.wikipedia.org/wiki/Great-circle_distance) between two points on a sphere! If you are working with data in WGS84 (EPSG:4326), 'lat/lon' (think of GPS coordinates) in degrees, Shapely's calculations like `length` and `area` will not be what you would expect. We have several options (see also [this SE discussion](https://gis.stackexchange.com/questions/80881/what-is-unit-of-shapely-length-attribute)): * add-hoc: calculate the [Great Circle Distance](https://en.wikipedia.org/wiki/Great-circle_distance), using functions for the [Haversine Formula](https://en.wikipedia.org/wiki/Haversine_formula) or [Law of Cosines](https://en.wikipedia.org/wiki/Spherical_law_of_cosines). * reproject your source data to a 'metric' projection like Web Mercator (EPSG:3857, worldwide, used for tiles by Google, OSM and others) using e.g. GDAL or GeoPandas (uses `pyproj`). * use `pyproj` to apply the proper formulas Below an example to illustrate: ``` from shapely.geometry import Point import pyproj point1 = Point(50.67, 4.62) point2 = Point(51.67, 4.64) # Shapely Distance in degrees point1.distance(point2) geod = pyproj.Geod(ellps='WGS84') angle1,angle2,distance = geod.inv(point1.x, point1.y, point2.x, point2.y) # "Real" Distance in km distance / 1000.0 ``` ## LineString Creating a LineString object is fairly similar to how Point is created. 
- Now instead using a single coordinate-tuple we can construct the line using either a list of shapely Point objects or pass coordinate-tuples: ``` # Create a LineString from our Point objects line = LineString([point1, point2, point3]) # It is also possible to use coordinate tuples having the same outcome line2 = LineString([(2.2, 4.2), (7.2, -25.1), (9.26, -2.456)]) # Print the results print(f'line variable: {line}') print(f'line2 variable: {line2}') print(f'type of the line: {type(line)}') ``` As we can see from above, the `line` variable constitutes of multiple coordinate-pairs and the type of the data is a Shapely [LineString](https://shapely.readthedocs.io/en/stable/manual.html#linestrings). ### LineString attributes and functions `LineString` -object has many useful built-in attributes and functionalities. It is for instance possible to extract the coordinates or the length of a LineString (line), calculate the centroid of the line, create points along the line at specific distance, calculate the closest distance from a line to specified Point and simplify the geometry. See full list of functionalities from [Shapely documentation](https://shapely.readthedocs.io/en/stable/manual.html). Here, we go through a few of them. - We can extract the coordinates of a LineString similarly as with `Point` ``` # Get x and y coordinates of the line lxy = line.xy print(lxy) ``` As we can see, the coordinates are again stored as a numpy arrays where first array includes all x-coordinates and the second one all the y-coordinates respectively. - We can extract only x or y coordinates by referring to those arrays as follows: ``` # Extract x coordinates line_x = lxy[0] # Extract y coordinates straight from the LineObject by referring to a array at index 1 line_y = line.xy[1] print(f'line_x: {line_x}') print(f'line_y: {line_y}') ``` - It is possible to retrieve specific attributes such as lenght of the line and center of the line (centroid) straight from the LineString object itself: ``` # Get the lenght of the line l_length = line.length # Get the centroid of the line l_centroid = line.centroid # What type is the centroid? centroid_type = type(l_centroid) # Print the outputs print(f'Length of our line: {l_length:.2f}') print(f'Centroid of our line: {l_centroid}') print(f'Type of the centroid: {centroid_type}') ``` Nice! These are already fairly useful information for many different GIS tasks, and we didn't even calculate anything yet! These attributes are built-in in every LineString object that is created. Notice that the centroid that is returned is a `Point` object that has its own functions as was described earlier. ## Polygon Creating a `Polygon` -object continues the same logic of how `Point` and `LineString` were created but Polygon object only accepts coordinate-tuples as input. 
- Polygon needs at least three coordinate-tuples (that basically forms a triangle): ``` # Create a Polygon from the coordinates poly = Polygon([(2.2, 4.2), (7.2, -25.1), (9.26, -2.456)]) # We can also use our previously created Point objects (same outcome) # --> notice that Polygon object requires x,y coordinates as input poly2 = Polygon([[p.x, p.y] for p in [point1, point2, point3]]) # Geometry type can be accessed as a String poly_type = poly.geom_type # Using the Python's type function gives the type in a different format poly_type2 = type(poly) # Let's see how our Polygon looks like print(f'poly: {poly}') print(f'poly2: {poly2}') print(f'Geometry type as text: {poly_type}') print(f'Geometry how Python shows it: {poly_type2}') ``` Notice that `Polygon` representation has double parentheses around the coordinates (i.e. `POLYGON ((<values in here>))` ). This is because a Polygon can also have holes inside of it. As the help of Polygon object tells, a Polygon can be constructed using exterior coordinates and interior coordinates (optional) where the interior coordinates creates a hole inside the Polygon: ``` Help on Polygon in module shapely.geometry.polygon object: class Polygon(shapely.geometry.base.BaseGeometry) | A two-dimensional figure bounded by a linear ring | | A polygon has a non-zero area. It may have one or more negative-space | "holes" which are also bounded by linear rings. If any rings cross each | other, the feature is invalid and operations on it may fail. | | Attributes | ---------- | exterior : LinearRing | The ring which bounds the positive space of the polygon. | interiors : sequence | A sequence of rings which bound all existing holes. ``` - Let's see how we can create a `Polygon` with a hole inside ``` # Let's create a bounding box of the world and make a whole in it # First we define our exterior world_exterior = [(-180, 90), (-180, -90), (180, -90), (180, 90)] # Let's create a single big hole where we leave ten decimal degrees at the boundaries of the world # Notice: there could be multiple holes, thus we need to provide a list of holes hole = [[(-170, 80), (-170, -80), (170, -80), (170, 80)]] # World without a hole world = Polygon(shell=world_exterior) # Now we can construct our Polygon with the hole inside world_has_a_hole = Polygon(shell=world_exterior, holes=hole) ``` - Let's see what we have now: ``` print(f'world: {world}') print(f'world_has_a_hole: {world_has_a_hole}') print(f'type: {type(world_has_a_hole)}') ``` As we can see the `Polygon` has now two different tuples of coordinates. The first one represents the **exterior** and the second one represents the **hole** inside the Polygon. ### Polygon attributes and functions We can again access different attributes directly from the `Polygon` object itself that can be really useful for many analyses, such as `area`, `centroid`, `bounding box`, `exterior`, and `exterior-length`. - Here, we can see a few of the available attributes and how to access them: ``` # Get the centroid of the Polygon world_centroid = world.centroid # Get the area of the Polygon world_area = world.area # Get the bounds of the Polygon (i.e. 
bounding box) world_bbox = world.bounds # Get the exterior of the Polygon world_ext = world.exterior # Get the length of the exterior world_ext_length = world_ext.length # Print the outputs print(f'Poly centroid: {world_centroid}') print(f'Poly Area: {world_area}') print(f'Poly Bounding Box: {world_bbox}') print(f'Poly Exterior: {world_ext}') print(f'Poly Exterior Length: {world_ext_length}') ``` As we can see above, it is again fairly straightforward to access different attributes from the `Polygon` -object. Notice, that the extrerior lenght is given here with decimal degrees because we passed latitude and longitude coordinates into our Polygon. ## Geometry collections (optional) In some occasions it is useful to store e.g. multiple lines or polygons under a single feature (i.e. a single row in a Shapefile represents more than one line or polygon object). Collections of points are implemented by using a MultiPoint -object, collections of curves by using a MultiLineString -object, and collections of surfaces by a MultiPolygon -object. These collections are not computationally significant, but are useful for modeling certain kinds of features. A Y-shaped line feature (such as road), or multiple polygons (e.g. islands on a like), can be presented nicely as a whole by a using MultiLineString or MultiPolygon accordingly. Creating and visualizing a minimum [bounding box](https://en.wikipedia.org/wiki/Minimum_bounding_box) e.g. around your data points is a really useful function for many purposes (e.g. trying to understand the extent of your data), here we demonstrate how to create one using Shapely. - Geometry collections can be constructed in a following manner: ``` # Import collections of geometric objects + bounding box from shapely.geometry import MultiPoint, MultiLineString, MultiPolygon, box # Create a MultiPoint object of our points 1,2 and 3 multi_point = MultiPoint([point1, point2, point3]) # It is also possible to pass coordinate tuples inside multi_point2 = MultiPoint([(2.2, 4.2), (7.2, -25.1), (9.26, -2.456)]) # We can also create a MultiLineString with two lines line1 = LineString([point1, point2]) line2 = LineString([point2, point3]) multi_line = MultiLineString([line1, line2]) # MultiPolygon can be done in a similar manner # Let's divide our world into western and eastern hemispheres with a hole on the western hemisphere # -------------------------------------------------------------------------------------------------- # Let's create the exterior of the western part of the world west_exterior = [(-180, 90), (-180, -90), (0, -90), (0, 90)] # Let's create a hole --> remember there can be multiple holes, thus we need to have a list of hole(s). # Here we have just one. west_hole = [[(-170, 80), (-170, -80), (-10, -80), (-10, 80)]] # Create the Polygon west_poly = Polygon(shell=west_exterior, holes=west_hole) # Let's create the Polygon of our Eastern hemisphere polygon using bounding box # For bounding box we need to specify the lower-left corner coordinates and upper-right coordinates min_x, min_y = 0, -90 max_x, max_y = 180, 90 # Create the polygon using box() function east_poly_box = box(minx=min_x, miny=min_y, maxx=max_x, maxy=max_y) # Let's create our MultiPolygon. 
multi_poly = MultiPolygon([west_poly, east_poly_box])

# Print outputs
print(f'MultiPoint: {multi_point}')
print(f'MultiLine: {multi_line}')
print(f'Bounding box: {east_poly_box}')
print(f'MultiPoly: {multi_poly}')
```

We can see that the outputs are similar to the basic geometric objects that we created previously, but now these objects contain multiple features of those points, lines or polygons.

### Geometry collection -objects' attributes and functions

- We can also get many useful attributes from those objects, such as the `Convex Hull`:

```
# Convex Hull of our MultiPoint --> https://en.wikipedia.org/wiki/Convex_hull
convex = multi_point.convex_hull

# How many lines do we have inside our MultiLineString?
lines_count = len(multi_line)

# Let's calculate the area of our MultiPolygon
multi_poly_area = multi_poly.area

# We can also access different items inside our geometry collections. We can e.g. access a single polygon from
# our MultiPolygon -object by referring to the index

# Let's calculate the area of our Western hemisphere (with a hole) which is at index 0
west_area = multi_poly[0].area

# We can check if we have a "valid" MultiPolygon. A MultiPolygon is considered valid if the individual polygons
# do not intersect with each other. Here, because the polygons have a common 0-meridian, we should NOT have
# a valid polygon. This can be really useful information when trying to find topological errors in your data
valid = multi_poly.is_valid

# Print outputs
print(f'Convex hull of the points: {convex}')
print(f'Number of lines in MultiLineString: {lines_count}')
print(f'Area of our MultiPolygon: {multi_poly_area}')
print(f'Area of our Western Hemisphere polygon: {west_area}')
print(f'Is polygon valid?: {valid}')
```

From the above we can see that MultiPolygons have exactly the same attributes available as single geometric objects, but now information such as the area is calculated over **ALL** of the individual objects combined. There are also some extra features available, such as the **is_valid** attribute, which tells whether the polygons or lines intersect with each other.

### Converting JSON to geometry objects

Fiona will be covered in the next lesson. Here we mainly use Fiona to read Vector data (Features) into memory for subsequent Shapely manipulation. Feature geometry can be accessed using the `geometry` property of each feature; for example, we can open the dataset that contains a (Multi)Polygon for each country and print out the geometry of the 5th Feature (index 4):

First we import `Shapely` and its functions and then convert the JSON-encoded geometries to Geometry objects using the `shape` function.

```
import fiona
from shapely.geometry import shape

with fiona.open("../data/countries.3857.gpkg") as countries:
    country = countries[4]

print(f'This is {country["properties"]["NAME"]}')

geom = shape(country["geometry"])
geom  # Jupyter can display geometry data directly

print(geom.type)
print(geom.area)

# In km
print(geom.length / 1000)
```

Let's have a look at some geometry methods. Tip: Shapely code is well documented; you can always use the Python built-in `help()` function.
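Before diving into `help()`, here is a quick illustration of the kinds of geometry methods available. This cell is an illustrative addition: the point coordinates below are made up (expressed in the same metric EPSG:3857 coordinates as the dataset), so the printed answers depend on where the point happens to fall relative to the country polygon.

```
# Point was already imported at the top of this notebook; repeated here for clarity
from shapely.geometry import Point

sample_point = Point(-8000000.0, 6000000.0)   # an arbitrary, made-up location

print(geom.contains(sample_point))            # is the point inside the country polygon?
print(geom.intersects(sample_point))          # does the point touch or overlap the polygon?
print(geom.distance(sample_point) / 1000)     # distance from the polygon in km (0 if inside)
```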
```
help(geom)
```

For example, we can make a buffer of 500 meters around our polygon (making Canada somewhat bigger):

```
buffered_geom = geom.buffer(500)
buffered_geom

# In km
buffered_geom.length / 1000
```

#### Converting the geometry back to JSON format

Once we are finished, we can convert the geometry back to JSON format using the `shapely.geometry.mapping` function:

```
from shapely.geometry import mapping

# let's create a new GeoJSON-encoded vector feature
new_feature = {
    'type': 'Feature',
    'properties': {
        'name': 'My buffered feature'
    },
    'geometry': mapping(buffered_geom)
}

new_feature
# Now we could e.g. write the Feature back to a file
```

---
[<- Introduction](01-introduction.ipynb) | [Spatial Reference Systems ->](03-spatial-reference-systems.ipynb)
# AI NI Academy 2
***
This is a complementary notebook to go alongside the Azure ML Studio project that we will be taking you through. Feel free to follow along with this notebook, or save it for later so you can compare the code and learn how to complete this model with Python!
***
## Imports
***

```
import pandas as pd
import numpy as np
import re
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn import metrics
from sklearn.metrics import precision_recall_fscore_support, classification_report
```

## Data
***
Load in your data using the Pandas library, so that we have something to work with. It's good practice to understand your data before you start working with it. We do this by using the .head() function, which will print the first 5 records.

__Tip:__ If you put a number inside the parentheses it will print that amount instead of the default, 5.
***

```
reviews_df = pd.read_csv("Electronics.csv")
reviews_df.head()

# We are reducing the amount of data we are working with here because 300,000 records will take a while to process
reviews_df_small = reviews_df.head(1000)

reviews_df_small.count()["overall"]
reviews_df_small["overall"][0]

# We are manipulating our data so that we are working with binary classification instead of multi-class classification,
# because we simply want to know if a review is positive or negative.
threshold = 3
reviews_df_small["overall"] = np.where(reviews_df_small["overall"] >= threshold, 1, 0)

reviews_df_small.head()
reviews_df_small.count()

reviews_df_small = reviews_df_small.dropna()
reviews_df_small = reviews_df_small.reset_index()
reviews_df_small.count()
```

## Feature Engineering
***
We need to do a bit of work with the data before we can train our model with it and get predictions.

To Do:
- Separate the sentiment and the associated text
- Replace the punctuation and numbers found in the review text with spaces
- Turn all the text to lowercase
***

```
sentiment_label = reviews_df_small["overall"]
review_text = reviews_df_small["reviewText"]

# Here we are replacing the punctuation and numbers with spaces, and making all the text lowercase
for i in range(review_text.count()):
    review_text[i] = re.sub(r"\W", " ", review_text[i]).lower()
    review_text[i] = re.sub(r"\d", " ", review_text[i])

review_text.head(10)
```

## Let's Train
***
Ok, so now we have formatted and organised our data.
We need to set it up to be fed into our model; to do that we will need to do the following:

To Do:
- Assign the review text and sentiment data to X and Y
- Split the data into training data and testing data
- Use a count vectorizer to count the words, creating a bag of words
- Use a tfidf transformer to reduce the significance of more common words, like "the", "it" and "a"
- Train a Logistic Regression model and a Support Vector Machine
- Find the accuracy of both
- Generate a classification report
***

```
X = review_text
Y = sentiment_label

x_train, x_test, y_train, y_test = train_test_split(X, Y, train_size=0.8, random_state=42)

# Build the bag of words from the training reviews
count_vect = CountVectorizer()
x_train_counts = count_vect.fit_transform(x_train)

# Down-weight very common words with tf-idf
tfidf_transformer = TfidfTransformer()
x_train_tfidf = tfidf_transformer.fit_transform(x_train_counts)

# The test set must go through the same two transformations as the training set
x_test_tfidf = tfidf_transformer.transform(count_vect.transform(x_test))

clf = LogisticRegression(random_state=0).fit(x_train_tfidf, y_train)
predictions = clf.predict(x_test_tfidf)
print(metrics.accuracy_score(y_test, predictions))

clf_svm = svm.SVC().fit(x_train_tfidf, y_train)
svm_predictions = clf_svm.predict(x_test_tfidf)
print(metrics.accuracy_score(y_test, svm_predictions))

precision_recall_fscore_support(y_test, predictions, average="binary")
precision_recall_fscore_support(y_test, svm_predictions, average="binary")

print(classification_report(y_test, predictions))
print(classification_report(y_test, svm_predictions))
```
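As a quick sanity check (this cell is an illustrative addition, and the review texts below are made up rather than taken from the dataset), we can push a couple of new reviews through the same vectorizer, tf-idf transformer and classifier:

```
# Made-up example reviews (not from the dataset)
new_reviews = [
    "this charger broke after two days and the seller never replied",
    "great sound quality and the battery lasts all week"
]

# Apply exactly the same transformations that the training data went through
new_tfidf = tfidf_transformer.transform(count_vect.transform(new_reviews))

# 1 = positive, 0 = negative
print(clf.predict(new_tfidf))
```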
<center>
    <h1> ILI285 - Computación Científica I / INF285 - Computación Científica </h1>
    <h2> Conjugate Gradient Method </h2>
    <h2> [[S]cientific [C]omputing [T]eam](#acknowledgements)</h2>
    <h2> Version: 1.1</h2>
</center>

## Table of Contents
* [Introduction](#intro)
* [Gradient Descent](#GDragon)
* [Conjugate Gradient Method](#CGM)
* [Conjugate Gradient Method with Preconditioning](#CGMp)
* [Let's Play: Practical Exercises and Profiling](#LP)
* [Acknowledgements](#acknowledgements)

```
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import solve_triangular
%matplotlib inline
# pip install memory_profiler
%load_ext memory_profiler
```

<div id='intro' />

## Introduction

Welcome to another edition of our IPython Notebooks. Here, we'll teach you how to solve $A\,x = b$ with $A$ being a _symmetric positive-definite matrix_, but the following methods have a key difference from the previous ones: they do not depend on a matrix factorization. The two methods that we'll see are called Gradient Descent and the Conjugate Gradient Method. For the latter, we'll also see the benefits of preconditioning.

<div id='GDragon' />

## Gradient Descent

This is an iterative method. If you remember the iterative methods in the previous Notebook, to find the next approximate solution $\vec{x}_{k+1}$ you'd add a vector to the current approximate solution, $\vec{x}_k$, that is: $\vec{x}_{k+1} = \vec{x}_k + \text{vector}$. In this method, $\text{vector}$ is $\alpha_{k}\,\vec{r}_k$, where $\vec{r}_k$ is the residue ($\vec{b} - A\,\vec{x}_k$) and $\alpha_k = \cfrac{(\vec{r}_k)^T\,\vec{r}_k}{(\vec{r}_k)^T\,A\,\vec{r}_k}$, starting with some initial guess $\vec{x}_0$. Let's look at the implementation below:

```
def gradient_descent(A, b, x0, n_iter=10):
    n = A.shape[0]
    # array with solutions
    X = np.empty((n_iter, n))
    X[0] = x0
    for k in range(1, n_iter):
        r = b - np.dot(A, X[k-1])
        if all(v == 0 for v in r): # The algorithm converged
            X[k:] = X[k-1]
            return X
        alpha = np.dot(np.transpose(r), r)/np.dot(np.transpose(r), np.dot(A, r))
        X[k] = X[k-1] + alpha*r
    return X
```

Now let's try our algorithm! But first, let's borrow a function to generate a random symmetric positive-definite matrix, kindly provided by the previous notebook.

```
"""
Randomly generates an nxn symmetric positive-
definite matrix A.
"""
def generate_spd_matrix(n):
    A = np.random.random((n,n))
    # constructing symmetry
    A += A.T
    # symmetric+diagonally dominant -> symmetric positive-definite
    deltas = 0.1*np.random.random(n)
    row_sum = A.sum(axis=1)-np.diag(A)
    np.fill_diagonal(A, row_sum+deltas)
    return A
```

We'll try our algorithm with some matrices of different sizes, and we'll compare it with the solution given by Numpy's solver.
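Before running the comparisons below, here is a short aside (added for clarity; it follows the standard exact line-search argument and is not part of the original text) on where that choice of $\alpha_k$ comes from. For a symmetric positive-definite $A$, solving $A\,x = b$ is equivalent to minimizing the quadratic

$$f(\vec{x}) = \frac{1}{2}\,\vec{x}^T A\,\vec{x} - \vec{b}^T\vec{x}, \qquad \nabla f(\vec{x}) = A\,\vec{x} - \vec{b} = -\vec{r}.$$

Choosing $\alpha$ to minimize $f(\vec{x}_k + \alpha\,\vec{r}_k)$ along the residue direction gives

$$\frac{d}{d\alpha} f(\vec{x}_k + \alpha\,\vec{r}_k) = -\vec{r}_k^T\vec{r}_k + \alpha\,\vec{r}_k^T A\,\vec{r}_k = 0 \quad\Longrightarrow\quad \alpha_k = \frac{\vec{r}_k^T\,\vec{r}_k}{\vec{r}_k^T\,A\,\vec{r}_k},$$

which is exactly the step size used in the implementation above.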
```
A3 = generate_spd_matrix(3)
b3 = np.ones(3)
x30 = np.zeros(3)

X = gradient_descent(A3, b3, x30, 15)
sol = np.linalg.solve(A3, b3)
print (X[-1])
print (sol)
print (np.linalg.norm(X[-1] - sol)) # difference between gradient_descent's solution and Numpy's solver's solution

A10 = generate_spd_matrix(10)
b10 = np.ones(10)
x100 = np.zeros(10)

X = gradient_descent(A10, b10, x100, 15)
sol = np.linalg.solve(A10, b10)
print (X[-1])
print (sol)
print (np.linalg.norm(X[-1] - sol)) # difference between gradient_descent's solution and Numpy's solver's solution

A50 = generate_spd_matrix(50)
b50 = np.ones(50)
x500 = np.zeros(50)

X = gradient_descent(A50, b50, x500, 15)
sol = np.linalg.solve(A50, b50)
print (X[-1])
print (sol)
print (np.linalg.norm(X[-1] - sol)) # difference between gradient_descent's solution and Numpy's solver's solution
```

As we can see, we're getting good solutions with 15 iterations, even for matrices on the bigger side. However, this method is not used too often; rather, its younger sibling, the Conjugate Gradient Method, is the preferred choice.

<div id='CGM' />

## Conjugate Gradient Method

This method works by successively eliminating the $n$ orthogonal components of the error, one by one. The method arrives at the solution with the following finite loop:

```
def conjugate_gradient(A, b, x0):
    n = A.shape[0]
    X = np.empty((n, n))
    d = b - np.dot(A, x0)
    R = np.empty((n, n))
    X[0] = x0
    R[0] = b - np.dot(A, x0)
    for k in range(1, n):
        if all(v == 0 for v in R[k-1]): # The algorithm converged
            X[k:] = X[k-1]
            return X
        alpha = np.dot(np.transpose(R[k-1]), R[k-1]) / np.dot(np.transpose(d), np.dot(A, d))
        X[k] = X[k-1] + alpha*d
        R[k] = R[k-1] - alpha*np.dot(A, d)
        beta = np.dot(np.transpose(R[k]), R[k])/np.dot(np.transpose(R[k-1]), R[k-1])
        d = R[k] + beta*d
    return X
```

The science behind this algorithm is a bit long to explain, but for the curious ones, the explanation is in the official textbook (Numerical Analysis, 2nd Edition, Timothy Sauer). Now let's try it!

```
A3 = generate_spd_matrix(3)
b3 = np.ones(3)
x30 = np.zeros(3)

X = conjugate_gradient(A3, b3, x30)
sol = np.linalg.solve(A3, b3)
print (X[-1])
print (sol)
print (np.linalg.norm(X[-1]- sol)) # difference between conjugate_gradient's solution and Numpy's solver's solution

A50 = generate_spd_matrix(50)
b50 = np.ones(50)
x500 = np.zeros(50)

X = conjugate_gradient(A50, b50, x500)
sol = np.linalg.solve(A50, b50)
print (X[-1])
print (sol)
print (np.linalg.norm(X[-1]-sol)) # difference between conjugate_gradient's solution and Numpy's solver's solution

A100 = generate_spd_matrix(100)
b100 = np.ones(100)
x1000 = np.zeros(100)

X = conjugate_gradient(A100, b100, x1000)
sol = np.linalg.solve(A100, b100)
print (X[-1])
print (sol)
print (np.linalg.norm(X[-1] - sol)) # difference between conjugate_gradient's solution and Numpy's solver's solution
```

We can see that for small matrices the error for `gradient_descent` is somewhat smaller than the error for `conjugate_gradient`, but for big matrices this method has an extremely small error, practically zero. Isn't that amazing?!

Here are some questions for the student to think about:
* In which cases can the Conjugate Gradient Method converge in less than $n$ iterations?
* What will happen if you use the Gradient Descent or Conjugate Gradient Method with non-symmetric, non-positive-definite matrices?

<div id='CGMp' />

## Conjugate Gradient Method with Preconditioning

We've seen that the Conjugate Gradient Method works very well, but can we make it better?
Very often, the convergence rate of iterative methods depends on the condition number of the matrix $A$. By preconditioning, we'll reduce the condition number of the problem. The preconditioned version of the problem $A\,x = b$ is:

$$M^{-1}\,A\,x = M^{-1}\,b$$

The matrix $M$ must be as close to $A$ as possible and easy to invert. One simple choice is the Jacobi Preconditioner $M = D$, since it shares its diagonal with $A$ and, as a diagonal matrix, is easy to invert. By applying this modification, we'll find that the method converges even faster.

```
def diag_dot(D, v):
    n = len(D)
    sol = np.zeros(n)
    for i in range(n):
        sol[i] = D[i] * v[i]
    return sol

def conjugate_gradient_J(A, b, x0):
    M = np.diag(A)
    M_p = M**-1
    n = A.shape[0]
    X = np.empty((n, n))
    Z = np.empty((n, n))
    R = np.empty((n, n))
    X[0] = x0
    R[0] = b - np.dot(A, x0)
    Z[0] = diag_dot(M_p, R[0])
    d = diag_dot(M_p, R[0])
    for k in range(1, n):
        if all(v == 0 for v in R[k-1]): # The algorithm converged
            X[k:] = X[k-1]
            return X
        alpha = np.dot(np.transpose(R[k-1]), Z[k-1]) / np.dot(np.transpose(d), np.dot(A, d))
        X[k] = X[k-1] + alpha*d
        R[k] = R[k-1] - alpha*np.dot(A, d)
        Z[k] = diag_dot(M_p, R[k])
        beta = np.dot(np.transpose(R[k]), Z[k])/np.dot(np.transpose(R[k-1]), Z[k-1])
        d = Z[k] + beta*d
    return X
```

Now let's try it out:

```
Aj100 = generate_spd_matrix(100)
bj100 = np.ones(100)
xj1000 = np.zeros(100)

X = conjugate_gradient_J(Aj100, bj100, xj1000)
X2 = conjugate_gradient(Aj100, bj100, xj1000)
sol = np.linalg.solve(Aj100, bj100)
print (np.linalg.norm(X[-1]- sol)) # difference between conjugate_gradient_J's solution and Numpy's solver's solution
print (np.linalg.norm(X2[-1]- sol))
```

The absolute errors for both are very much alike and practically zero, but the difference is the _speed_ with which the error decreases, as we'll see in the Exercises section of this notebook. Can you think of other preconditioners and try them out?

<div id='LP' />

## Let's Play: Practical Exercises and Profiling

First of all, define a function to calculate the progress of the relative error for a given method, that is, input the array of approximate solutions `X` and the real solution provided by Numpy's solver `r_sol`, and return an array with the relative error for each step.

```
def relative_error(X, r_sol):
    n_steps = X.shape[0]
    n_r_sol = np.linalg.norm(r_sol)
    E = np.zeros(n_steps)
    for i in range(n_steps):
        E[i] = np.linalg.norm(X[i] - r_sol) / n_r_sol
    return E
```

Try the three methods with a small non-symmetric, non-positive-definite matrix. Plot the relative error for all three methods.

```
n = 10
B = 10 * np.random.random((n,n))
b = 10 * np.random.random(n)
x0 = np.zeros(n)

X1 = gradient_descent(B, b, x0, n)
X2 = conjugate_gradient(B, b, x0)
X3 = conjugate_gradient_J(B, b, x0)
r_sol = np.linalg.solve(B, b)

E1 = relative_error(X1, r_sol)
E2 = relative_error(X2, r_sol)
E3 = relative_error(X3, r_sol)

iterations = np.linspace(1, n, n)
plt.xlabel('Iteration')
plt.ylabel('Relative Error')
plt.title('Evolution of the Relative Error for each method')
plt.semilogy(iterations, E1, 'go', markersize=8) # Green spots are for Gradient Descent
plt.semilogy(iterations, E2, 'ro', markersize=8) # Red spots are for Conjugate Gradient
plt.semilogy(iterations, E3, 'co', markersize=8) # Cyan spots are for Conjugate Gradient with Jacobi Preconditioner
plt.show()
```

As you can see, if the matrix doesn't meet the requirements for these methods, the results can be quite terrible. Let's try again, this time using an appropriate matrix.
```
n = 100
A = 10 * generate_spd_matrix(n)
b = 10 * np.random.random(n)
x0 = np.random.random(n)

X1 = gradient_descent(A, b, x0, n)
X2 = conjugate_gradient(A, b, x0)
X3 = conjugate_gradient_J(A, b, x0)
r_sol = np.linalg.solve(A, b)

E1 = relative_error(X1, r_sol)
E2 = relative_error(X2, r_sol)
E3 = relative_error(X3, r_sol)

iterations = np.linspace(1, n, n)
plt.xlabel('Iteration')
plt.ylabel('Relative Error')
plt.title('Evolution of the Relative Error for each method')
plt.semilogy(iterations, E1, 'go', markersize=4) # Green spots are for Gradient Descent
plt.semilogy(iterations, E2, 'ro', markersize=4) # Red spots are for Conjugate Gradient
plt.semilogy(iterations, E3, 'co', markersize=4) # Cyan spots are for Conjugate Gradient with Jacobi Preconditioner
plt.grid(True)
plt.xlim(0,10)
plt.show()
```

Amazing! We started with a huge relative error and reduced it to practically zero in just under 6 iterations (the algorithms all run for 100 iterations, but we're showing you the first 10). We can clearly see that the Conjugate Gradient Method with Preconditioning needs the fewest iterations of the three, with Gradient Descent needing the most. Let's try with an even bigger matrix!

```
n = 1000
A = 10 * generate_spd_matrix(n)
b = 10 * np.random.random(n)
x0 = np.random.random(n)

X1 = gradient_descent(A, b, x0, n)
X2 = conjugate_gradient(A, b, x0)
X3 = conjugate_gradient_J(A, b, x0)
r_sol = np.linalg.solve(A, b)

E1 = relative_error(X1, r_sol)
E2 = relative_error(X2, r_sol)
E3 = relative_error(X3, r_sol)

iterations = np.linspace(1, n, n)
plt.xlabel('Iteration')
plt.ylabel('Relative Error')
plt.title('Evolution of the Relative Error for each method')
plt.semilogy(iterations, E1, 'go', markersize=4) # Green spots are for Gradient Descent
plt.semilogy(iterations, E2, 'ro', markersize=4) # Red spots are for Conjugate Gradient
plt.semilogy(iterations, E3, 'co', markersize=4) # Cyan spots are for Conjugate Gradient with Jacobi Preconditioner
plt.grid(True)
plt.xlim(0,10)
plt.show()
```

We can see that, once the matrix reaches a certain size, the number of iterations needed to reach a small error remains more or less the same. We encourage you to try other kinds of matrices to see how the algorithms behave, and to experiment with the code to your liking.

Now let's move on to profiling. Of course, you win some, you lose some. Accelerating the convergence of the algorithm means you have to spend more of other resources. We'll use the magic commands `%timeit` and `%memit` to see how the algorithms behave.

```
A = generate_spd_matrix(100)
b = np.ones(100)
x0 = np.random.random(100)

%timeit gradient_descent(A, b, x0, 100)
%timeit conjugate_gradient(A, b, x0)
%timeit conjugate_gradient_J(A, b, x0)

%memit gradient_descent(A, b, x0, 100)
%memit conjugate_gradient(A, b, x0)
%memit conjugate_gradient_J(A, b, x0)
```

We see something interesting here: all three algorithms need the same amount of memory. What happened with the measure of time? Why is it so big for the algorithm that has the best convergence rate?

Besides the end of the loop, we have one other criterion for stopping the algorithm: when the residue $r$ reaches the _exact_ value of zero, we say that the algorithm converged, and stop. However, it's very hard to get an error of zero for randomized initial guesses, so this almost never happens, and we can't take advantage of the convergence rate of the algorithms. There's a way we can fix this: instead of using this criterion, make the algorithm stop when a certain _tolerance_ is reached.
That way, when the error gets small enough (which happens faster for the third method), we can stop and say that we got a good enough solution. We'll leave you the task of modifying the algorithms to make this happen (a minimal sketch of this idea is included at the end of this notebook). You can try with different matrices, different initial conditions, different sizes, etcetera. Try some more plotting, profiling, and experimenting. Have fun!

<div id='acknowledgements' />

# Acknowledgements

* _Material created by professor Claudio Torres_ (`ctorres@inf.utfsm.cl`) _and assistants: Laura Bermeo, Alvaro Salinas, Axel Simonsen and Martín Villanueva. DI UTFSM. April 2016._
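For reference, here is one possible way to add such a tolerance-based stopping criterion. This is an illustrative sketch (an addition to the original notebook), shown only for the Conjugate Gradient Method, and the tolerance value is arbitrary:

```
def conjugate_gradient_tol(A, b, x0, tol=1e-10):
    # Same algorithm as conjugate_gradient, but it stops early once the
    # norm of the residue drops below `tol` instead of requiring exact zeros.
    n = A.shape[0]
    x = x0.copy()
    r = b - np.dot(A, x)
    d = r.copy()
    X = [x.copy()]
    for k in range(n):
        if np.linalg.norm(r) < tol:
            break
        Ad = np.dot(A, d)
        alpha = np.dot(r, r) / np.dot(d, Ad)
        x = x + alpha*d
        r_new = r - alpha*Ad
        beta = np.dot(r_new, r_new) / np.dot(r, r)
        d = r_new + beta*d
        r = r_new
        X.append(x.copy())
    return np.array(X)
```

With a variant like this, `%timeit conjugate_gradient_tol(A, b, x0)` should reflect the fast convergence much better, since the loop exits as soon as the solution is good enough.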
# Simple Go-To-Goal for Cerus

The following code implements a simple go-to-goal behavior for Cerus. It uses a closed feedback loop to continuously assess Cerus' state (position and heading) in the world using data from two wheel encoders. It subsequently calculates the error between a given goal location and its current pose and will attempt to minimize the error until it reaches the goal location. A P-regulator (see PID regulator) function uses the error as an input and outputs the angular velocity for the Arduino and motor controllers that drive the robot.

All models used in this program are adapted from Georgia Tech's "Control of Mobile Robots" by Dr. Magnus Egerstedt.

```
# Import useful libraries
import serial
import time
import math
```

We first define our goal location. Units are metric, real-world coordinates in an X/Y coordinate system.

```
goal_x = 0.5    # Goal X coordinate in meters
goal_y = -0.5   # Goal Y coordinate in meters
atGoal = False
constVel = 0.5  # To simplify this program, we're using a constant linear velocity to reach our goal
```

We use pySerial to read encoder data from the Arduino and send move commands to it:

```
# Opening a serial port on the Arduino resets it, so our encoder count is also reset to 0,0
ser = serial.Serial('COM3', 115200)  # replace 'COM3' with the appropriate serial port on your device
time.sleep(1)
```

The Cerus class keeps track of all the important robot parameters.

```
class Cerus():
    def __init__(self, pose_x, pose_y, pose_phi, R_wheel, N_ticks, L_track):
        self.pose_x = pose_x      # X position
        self.pose_y = pose_y      # Y position
        self.pose_phi = pose_phi  # Heading
        self.R_wheel = R_wheel    # wheel radius in meters
        self.N_ticks = N_ticks    # encoder ticks per wheel revolution
        self.L_track = L_track    # wheel track in meters

# Create a Cerus instance and initialize it to a 0,0,0 world position and with some physical dimensions
cerus = Cerus(0, 0, 0, 0.03, 500, 0.23)
```

Pose calculation allows us to track where our robot is in space as it moves. The pose is often also referred to as the 'state'.

```
def calculatePose(deltaTicks):

    # Calculate the centerline distance moved
    distanceLeft = 2 * math.pi * cerus.R_wheel * (deltaTicks[0] / cerus.N_ticks)
    distanceRight = 2 * math.pi * cerus.R_wheel * (deltaTicks[1] / cerus.N_ticks)
    distanceCenter = (distanceLeft + distanceRight) / 2

    # Update the position and heading
    cerus.pose_x = round((cerus.pose_x + distanceCenter * math.cos(cerus.pose_phi)), 4)
    cerus.pose_y = round((cerus.pose_y + distanceCenter * math.sin(cerus.pose_phi)), 4)
    cerus.pose_phi = round((cerus.pose_phi + ((distanceRight - distanceLeft) / cerus.L_track)), 4)
```

Additionally, we want to keep track of how far we are from the goal point defined initially.

```
# Calculate the error between Cerus' heading and the goal point
def calculateError():

    phi_desired = math.atan2((goal_y - cerus.pose_y), (goal_x - cerus.pose_x))
    temp = phi_desired - cerus.pose_phi
    error_heading = round((math.atan2(math.sin(temp), math.cos(temp))), 4)  # ensure that error is within [-pi, pi]

    error_x = round((goal_x - cerus.pose_x), 4)
    error_y = round((goal_y - cerus.pose_y), 4)

    print("The heading error is: ", error_heading)
    print("The X error is: ", error_x)
    print("The Y error is: ", error_y)

    return error_x, error_y, error_heading
```

Finally, we want to read our encoders, calculate our pose, calculate the goal error and issue a move command if necessary.
```
def moveRobot():

    global atGoal
    atGoal = False

    # Every time we call this function, we read two sets of encoder values and evaluate the delta
    data = ["0,0", "0,0"]
    i = 0

    while i < 2 and atGoal == False:
        if ser.inWaiting():
            temp = ser.readline()
            data[i] = temp.decode()
            leftValOld, rightValOld = formatData(data[0])
            leftValNew, rightValNew = formatData(data[1])
            i += 1

    # From these values we can calculate the momentary encoder values for both sides of the robot
    leftDelta = leftValNew - leftValOld
    rightDelta = rightValNew - rightValOld
    deltaTicks = [leftDelta, rightDelta]

    # Calculate current pose
    calculatePose(deltaTicks)

    # Calculate the current pose-to-goal error
    error_x, error_y, error_heading = calculateError()

    # If we're within 5 cm of the goal (use absolute values so an overshoot also counts as reaching the goal)
    if abs(error_x) <= 0.05 and abs(error_y) <= 0.05:
        twist(0.0, 0.0)
        atGoal = True

    # Otherwise keep driving, using the P-controller to adjust the angular velocity
    else:
        omega = - (2 * error_heading)
        twist(constVel, omega)
```

The functions below are helpers and will be called through our main loop.

```
# Functions to read and format encoder data received from the Serial port
def formatData(data):

    delimiter = "x"
    leftVal = ""
    rightVal = ""

    for i in range(len(data)):
        if data[i] == ",":
            delimiter = ","
        elif delimiter != ",":
            leftVal += data[i]
        elif delimiter == ",":
            rightVal += data[i]

    leftVal, rightVal = int(leftVal), int(rightVal)
    return leftVal, rightVal

# Create a function that sends a movement command to the Arduino
def twist(linearVelocity, angularVelocity):
    command = f"<{linearVelocity},{angularVelocity}>"
    ser.write(str.encode(command))
```

This is the main part of our program that will loop over and over until Cerus has reached its goal. For our simple go-to-goal behavior, we will drive the robot at a constant speed and only adjust our heading so that we reach the goal location.

__WARNING: This will move the robot!__

```
while not atGoal:
    moveRobot()

print("Robot at goal position.")

# Close the serial connection when done
ser.close()
```
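If you want to check the odometry math without the robot attached, you can feed `formatData` and `calculatePose` a fake encoder reading by hand. This cell is an illustrative addition and the tick values are made up; note that it modifies the shared `cerus` pose, so the instance is re-created afterwards.

```
# Offline sanity check with made-up encoder strings (no serial port needed)
left_old, right_old = formatData("0,0")
left_new, right_new = formatData("250,250")   # hypothetical ticks after a short straight move

calculatePose([left_new - left_old, right_new - right_old])

# 250 ticks on both 3 cm-radius, 500-tick wheels is half a wheel revolution,
# i.e. roughly 0.094 m straight ahead with no change in heading.
print(cerus.pose_x, cerus.pose_y, cerus.pose_phi)

# Reset the pose before driving the real robot, since the check above modified it
cerus = Cerus(0, 0, 0, 0.03, 500, 0.23)
```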
# Solutions for String Manipulation Exercises

**Question 1**

Try this code, and see whether both of them are the same or not.

`a = 'kcgi'`
`b = "kcgi"`
`a == b`

Is it True or False?

```
a = 'kcgi'
b = "kcgi"
a == b
```

**Question 2**

Try to write a string with triple-quote syntax and see whether it can be displayed when we call out the variable.

```
multi_line = """ a
b
c"""
print(multi_line)
```

**Question 3**

Let's try to adjust the case for this string.

`the Kyoto College of Graduate studies for informatics`

1) Turn to all capital letters by using `.upper()`
2) Turn to all small letters by using `.lower()`
3) Capitalize each word of the string by using `.title()`
4) Capitalize the first letter of the string by using `.capitalize()`
5) Swap the case by using `.swapcase()`

```
kcgi = "the Kyoto College of Graduate studies for informatics"
kcgi.upper()
kcgi.lower()
kcgi.title()
kcgi.capitalize()
kcgi.swapcase()
```

**Question 4**

Use `.strip()` to remove unnecessary whitespace in the string below.

` kyoto computer gakuin `

```
kcg = " kyoto computer gakuin "
kcg.strip()
kcg.rstrip()
kcg.lstrip()
```

**Question 5**

What if we want to take the '0' characters out of the string below? How are we going to do that?

`"00000456"`

Then, we want to fill the zeros back in again. How do we do that?

```
num = "00000456"
num.strip('0')
num.zfill(5)
```

**Question 6**

We can also find and replace substrings inside a string. To find a substring, you can use `.find()`, `.rfind()` or `.index()`; to replace a substring you can use `.replace()`.

Given the string below.

`"the kyoto college of graduate studies for informatics"`

1. Find the `kyoto` substring by using `.find()` and `.index()`
2. Find the `university` substring by using `.find()` and `.index()`
3. Check whether `kyoto` is the first substring of the string by using `.startswith()`.
4. Check whether `informatics` is the last substring of the string by using `.endswith()`
5. Replace character `e` with `--` in the string by using `.replace()`

```
kcgi = "the kyoto college of graduate studies for informatics"
kcgi.find("kyoto")
kcgi.index("kyoto")
kcgi.find("university")
kcgi.index("university")
kcgi.startswith("kyoto")
kcgi.endswith("informatics")
kcgi.replace("e", "--")
```

**Question 7**

By using the same string as in *Question 6*, partition the string into three parts. Use the substring `graduate` as the cut-off. Use `.partition()` to do this. Next, split the string into individual words by using `.split()`

```
kcgi.partition("graduate")
kcgi.split()
```

**Question 8**

Try to join the strings below by using `.join()`

`string1 = "--"`
`string2 = ["a", "b", "c"]`

```
string1 = "--"
string2 = ["a", "b", "c"]
string1.join(string2)
```

**Question 9**

Change an integer to a string.

b = 3

Then, use `.format()` to write the sentence below.

`He has {b} oranges.`

```
b = 3
str(b)
print("He has {} oranges.".format(str(b)) )
```

**Question 10**

You can also use indexing to include a variable in a string, or determine how many decimal places you want it to display, by using `{0:.3f}` for 3 decimal places.

1. Write the following string. `"""First letter is {0}, second letter is {1}.""".format('A','B')` then, change index `{0}` to `{first}` and `{1}` to `{second}`
2. Write the `pi` value to the nearest 2 decimal places.
Use `from math import pi` to get the `pi` value ``` print("""First letter is {0}, second letter is {1}.""".format('A','B')) print("""First letter is {first}, second letter is {second}.""".format(first= 'A',second='B')) from math import pi print("pi = {0: .2f}".format(pi)) ```
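As a side note (this tip is an addition and not part of the original exercise set), on Python 3.6+ the same formatting can be written more compactly with f-strings:

```
first, second = 'A', 'B'
print(f"First letter is {first}, second letter is {second}.")
print(f"pi = {pi:.2f}")
```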
## **Viscoelastic wave equation implementation on a staggered grid**

This is a first attempt at implementing the viscoelastic wave equation as described in [1]. See also the FDELMODC implementation by Jan Thorbecke [2].

In the following example, a three-dimensional toy problem will be introduced consisting of a single Ricker source located at (100, 50, 35) in a 200 m $\times$ 100 m $\times$ 100 m domain.

```
# Required imports:
import numpy as np
import sympy as sp

from devito import *
from examples.seismic.source import RickerSource, TimeAxis
from examples.seismic import ModelViscoelastic, plot_image
```

The model domain is now constructed. It consists of an upper layer of water, 50 m in depth, and a lower rock layer separated by a 4 m thick sediment layer.

```
# Domain size:
extent = (200., 100., 100.)  # 200 x 100 x 100 m domain
h = 1.0  # Desired grid spacing
shape = (int(extent[0]/h+1), int(extent[1]/h+1), int(extent[2]/h+1))

# Model physical parameters:
vp = np.zeros(shape)
qp = np.zeros(shape)
vs = np.zeros(shape)
qs = np.zeros(shape)
rho = np.zeros(shape)

# Set up three horizontally separated layers:
vp[:,:,:int(0.5*shape[2])+1] = 1.52
qp[:,:,:int(0.5*shape[2])+1] = 10000.
vs[:,:,:int(0.5*shape[2])+1] = 0.
qs[:,:,:int(0.5*shape[2])+1] = 0.
rho[:,:,:int(0.5*shape[2])+1] = 1.05

vp[:,:,int(0.5*shape[2])+1:int(0.5*shape[2])+1+int(4/h)] = 1.6
qp[:,:,int(0.5*shape[2])+1:int(0.5*shape[2])+1+int(4/h)] = 40.
vs[:,:,int(0.5*shape[2])+1:int(0.5*shape[2])+1+int(4/h)] = 0.4
qs[:,:,int(0.5*shape[2])+1:int(0.5*shape[2])+1+int(4/h)] = 30.
rho[:,:,int(0.5*shape[2])+1:int(0.5*shape[2])+1+int(4/h)] = 1.3

vp[:,:,int(0.5*shape[2])+1+int(4/h):] = 2.2
qp[:,:,int(0.5*shape[2])+1+int(4/h):] = 100.
vs[:,:,int(0.5*shape[2])+1+int(4/h):] = 1.2
qs[:,:,int(0.5*shape[2])+1+int(4/h):] = 70.
rho[:,:,int(0.5*shape[2])+1+int(4/h):] = 2.
```

Now create a Devito viscoelastic model, generating an appropriate computational grid along with absorbing boundary layers:

```
# Create model
origin = (0, 0, 0)
spacing = (h, h, h)
so = 4   # FD space order (Note that the time order is by default 1).
nbl = 20 # Number of absorbing boundary layer cells

model = ModelViscoelastic(space_order=so, vp=vp, qp=qp, vs=vs, qs=qs,
                          rho=rho, origin=origin, shape=shape,
                          spacing=spacing, nbl=nbl)
```

The source frequency is now set along with the required model parameters:

```
# Source freq. in MHz (note that the source is defined below):
f0 = 0.12

# Thorbecke's parameter notation
l = model.lam
mu = model.mu
ro = model.irho

k = 1.0/(l + 2*mu)
pi = l + 2*mu

t_s = (sp.sqrt(1.+1./model.qp**2)-1./model.qp)/f0
t_ep = 1./(f0**2*t_s)
t_es = (1.+f0*model.qs*t_s)/(f0*model.qs-f0**2*t_s)

# Time step in ms and time range:
t0, tn = 0., 30.
dt = model.critical_dt
time_range = TimeAxis(start=t0, stop=tn, step=dt)
```

Generate Devito time functions for the velocity, stress and memory variables appearing in the viscoelastic model equations. By default, the initial data of each field will be set to zero.
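For reference, the first-order velocity-stress system that the update equations below discretise can be summarised as follows. This summary is written in the notation of the code ($\tau_\sigma$, $\tau_{\epsilon p}$ and $\tau_{\epsilon s}$ correspond to `t_s`, `t_ep` and `t_es`) and is inferred from the implementation rather than quoted verbatim from [1]:

$$\frac{\partial \vec{v}}{\partial t} = \frac{1}{\rho}\,\nabla \cdot \boldsymbol{\tau},$$

$$\frac{\partial \boldsymbol{\tau}}{\partial t} = \lambda\,\frac{\tau_{\epsilon p}}{\tau_\sigma}\,(\nabla \cdot \vec{v})\,\mathbf{I} + \mu\,\frac{\tau_{\epsilon s}}{\tau_\sigma}\left(\nabla \vec{v} + \nabla \vec{v}^{\,T}\right) + \mathbf{r},$$

$$\frac{\partial \mathbf{r}}{\partial t} = -\frac{1}{\tau_\sigma}\left[\mathbf{r} + \lambda\left(\frac{\tau_{\epsilon p}}{\tau_\sigma}-1\right)(\nabla \cdot \vec{v})\,\mathbf{I} + \mu\left(\frac{\tau_{\epsilon s}}{\tau_\sigma}-1\right)\left(\nabla \vec{v} + \nabla \vec{v}^{\,T}\right)\right],$$

where $\vec{v}$ is the particle velocity, $\boldsymbol{\tau}$ the stress tensor and $\mathbf{r}$ the memory variable.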
```
# PDE fn's:
x, y, z = model.grid.dimensions
damp = model.damp

# Staggered grid setup:

# Velocity:
v = VectorTimeFunction(name="v", grid=model.grid, time_order=1, space_order=so)

# Stress:
tau = TensorTimeFunction(name='t', grid=model.grid, space_order=so, time_order=1)

# Memory variable:
r = TensorTimeFunction(name='r', grid=model.grid, space_order=so, time_order=1)

s = model.grid.stepping_dim.spacing  # Symbolic representation of the model grid spacing
```

And now the source and PDEs are constructed:

```
# Source
src = RickerSource(name='src', grid=model.grid, f0=f0, time_range=time_range)
src.coordinates.data[:] = np.array([100., 50., 35.])

# The source injection term
src_xx = src.inject(field=tau[0, 0].forward, expr=src*s)
src_yy = src.inject(field=tau[1, 1].forward, expr=src*s)
src_zz = src.inject(field=tau[2, 2].forward, expr=src*s)

# Particle velocity
u_v = Eq(v.forward, model.damp * (v + s*ro*div(tau)))

# Stress equations:
u_t = Eq(tau.forward, model.damp * (s*r.forward + tau +
         s * (l * t_ep / t_s * diag(div(v.forward)) +
              mu * t_es / t_s * (grad(v.forward) + grad(v.forward).T))))

# Memory variable equations:
u_r = Eq(r.forward, damp * (r - s / t_s * (r + l * (t_ep/t_s-1) * diag(div(v.forward)) +
         mu * (t_es/t_s-1) * (grad(v.forward) + grad(v.forward).T))))
```

We now create and then run the operator:

```
# Create the operator:
op = Operator([u_v, u_r, u_t] + src_xx + src_yy + src_zz, subs=model.spacing_map)

#NBVAL_IGNORE_OUTPUT

# Execute the operator:
op(dt=dt)
```

Before plotting some results, let us first look at the shape of the data stored in one of our time functions:

```
v[0].data.shape
```

Since our functions are first order in time, the time dimension is of length 2. The spatial extent of the data includes the absorbing boundary layers in each dimension (i.e. each spatial dimension is padded by 20 grid points to the left and to the right). The total number of instances in time considered is obtained from:

```
time_range.num
```

Hence 223 time steps were executed. Thus the final time step will be stored in the index given by:

```
np.mod(time_range.num,2)
```

Now, let us plot some 2D slices of the fields `vx` and `szz` at the final time step:

```
#NBVAL_SKIP

# Mid-points:
mid_x = int(0.5*(v[0].data.shape[1]-1))+1
mid_y = int(0.5*(v[0].data.shape[2]-1))+1

# Plot some selected results:
plot_image(v[0].data[1, :, mid_y, :], cmap="seismic")
plot_image(v[0].data[1, mid_x, :, :], cmap="seismic")
plot_image(tau[2, 2].data[1, :, mid_y, :], cmap="seismic")
plot_image(tau[2, 2].data[1, mid_x, :, :], cmap="seismic")

#NBVAL_IGNORE_OUTPUT
assert np.isclose(norm(v[0]), 0.102959, atol=1e-4, rtol=0)
```

# References

[1] Robertsson, J. O. A., *et al.* (1994). "Viscoelastic finite-difference modeling". GEOPHYSICS, 59(9), 1444-1456.

[2] https://janth.home.xs4all.nl/Software/fdelmodcManual.pdf
```
# Load the A-Z handwritten letters dataset and keep a 15% sample
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import cv2 as cv

alpha=pd.read_csv('A_Z Handwritten Data.csv')
alphadata=alpha.sample(frac=.15,random_state=0)
alpha.shape
alphadata.shape

alphalabels=alphadata["0"]
len(np.unique(alphalabels))
alphadata.drop(['0'],axis=1,inplace=True)

# Load MNIST digits and shift the letter labels by 10,
# so digits use classes 0-9 and letters use classes 10-35
from keras.datasets import mnist
((dtrain,dtrainlabels),(dtest,dtestlabels))=mnist.load_data()
dtrain.shape
alphalabels=alphalabels+10

azdata=np.array(alphadata)
azdata.shape
azdata=azdata.reshape(-1,28,28)
azdata.shape
alphalabels=np.array(alphalabels)

# Stack digits and letters into one dataset
data=np.vstack([dtrain,azdata])
label=np.hstack([dtrainlabels,alphalabels])
data=data.astype('float32')
# data = [cv.resize(image, (32, 32)) for image in data]
# data = np.array(data, dtype="float32")
data.shape
data=data.reshape(-1, 28, 28, 1)
len(np.unique(label))

# One-hot encode the 36 classes
labels=pd.get_dummies(label)
labels.shape

from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(data,labels)
X_test.shape

from keras.models import Sequential
from keras.layers import Conv2D,MaxPooling2D,Flatten,Dense,Dropout
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.vgg16 import VGG16

# Light augmentation for the grayscale 28x28 images
datagen=ImageDataGenerator(rescale=1/255,rotation_range=10,zoom_range=0.05,shear_range=0.15,width_shift_range=0.1,height_shift_range=.1,fill_mode='nearest')

## the 1-dim (grayscale) input
model=Sequential()
model.add(Conv2D(32,kernel_size=3,input_shape=(28,28,1),activation='relu'))
model.add(MaxPooling2D(2,2))
model.add(Conv2D(32,kernel_size=3,activation='relu'))
model.add(MaxPooling2D(2,2))
model.add(Flatten())
model.add(Dense(100,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(36,activation='softmax'))
# categorical_crossentropy is the appropriate loss for a 36-class softmax with one-hot labels
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])

bs=128
# (class_weight, if needed, must be a dict mapping class index to weight, so it is omitted here)
model.fit_generator(datagen.flow(X_train,y_train,batch_size=bs),steps_per_epoch=len(X_train)//bs,validation_data=(X_test,y_test),epochs=10,verbose=1)

# Map class indices back to their character labels
labeldata=[0,1,2,3,4,5,6,7,8,9,'A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z']
labels.columns
dic={}
j=0
for i in labeldata:
    dic.update({j:i})
    j+=1
X_test.shape

# Show predictions for the first ten test images
for i in X_test[:10]:
    k=np.argmax(model.predict(i.reshape(1,28,28,1)))
    im=i.reshape(28,28)
    plt.figure()
    print(dic[k])
    plt.imshow(im)

model.save('OCRmodel.h5')

from keras.models import load_model
import cv2 as cv
import imutils
from imutils.contours import sort_contours

# Reload the model that was saved above
model=load_model('OCRmodel.h5')
model.summary()
```
# Convolutional Autoencoder

Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.

```
%matplotlib inline

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)

img = mnist.train.images[2]
plt.axis("off")
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
```

## Network Architecture

The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.

<img src='assets/convolutional_autoencoder.png' width=500px>

Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.

### What's going on with the decoder

Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose).

However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.

> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height.
A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`](https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).

```
learning_rate = 0.001
n_classes = mnist.train.images.shape[1]

# Input and target placeholders
inputs_ = tf.placeholder(tf.float32,[None,28,28,1])
targets_ = tf.placeholder(tf.float32,[None,28,28,1])

### Encoder
conv1 = tf.layers.conv2d(inputs_,filters = 16,kernel_size=2,activation=tf.nn.relu,padding="same")
print(conv1)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1,strides =2,pool_size=2)
print(maxpool1)
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1,filters = 8,kernel_size=2,activation = tf.nn.relu,padding="same")
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2,strides = 2,pool_size=2)
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, filters = 8,kernel_size=2,activation=tf.nn.relu,padding="same")
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3,strides=2,padding="same",pool_size=2)
# Now 4x4x8

### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded,[7,7])
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1,kernel_size=2,filters=8,activation=tf.nn.relu,padding="same")
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4,[14,14])
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2,kernel_size=2,filters=8,activation=tf.nn.relu,padding="same")
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5,[28,28])
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3,kernel_size=2,activation=tf.nn.relu,filters=16,padding="same")
# Now 28x28x16

logits = tf.layers.conv2d(conv6,kernel_size=3,activation=None,filters=1,padding="same")
#Now 28x28x1

# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)

# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits,labels=targets_)

# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
```

## Training

As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
```
sess = tf.Session()

epochs = 5
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
    for ii in range(mnist.train.num_examples//batch_size):
        batch = mnist.train.next_batch(batch_size)
        imgs = batch[0].reshape((-1, 28, 28, 1))
        batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
                                                         targets_: imgs})

        print("Epoch: {}/{}...".format(e+1, epochs),
              "Training loss: {:.4f}".format(batch_cost))

fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})

for images, row in zip([in_imgs, reconstructed], axes):
    for img, ax in zip(images, row):
        ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)

fig.tight_layout(pad=0.1)

sess.close()
```

## Denoising

As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.

![Denoising autoencoder](assets/denoising.png)

Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, with more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.

> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
```
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')

### Encoder
conv1 = tf.layers.conv2d(inputs_,kernel_size=2,activation=tf.nn.relu,filters=32,padding="same")
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1,pool_size=2,strides=2,padding="same")
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1,kernel_size=2,activation=tf.nn.relu,filters=32,padding="same")
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2,pool_size=2,strides=2,padding="same")
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2,filters=16,kernel_size=2,activation=tf.nn.relu,padding="same")
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3,pool_size=2,strides=2,padding="same")
print(encoded)
# Now 4x4x16

### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded,[7,7])
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1,filters=16,activation=tf.nn.relu,kernel_size=2,padding="same")
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4,[14,14])
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2,filters=32,kernel_size=2,padding="same",activation = tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5,[28,28])
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3,filters=32,activation=tf.nn.relu,kernel_size=2,padding="same")
# Now 28x28x32

logits = tf.layers.conv2d(conv6,filters=1,activation=None,padding="same",kernel_size=2)
#Now 28x28x1

# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)

# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits,labels=targets_)

# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)

sess = tf.Session()

epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
    for ii in range(mnist.train.num_examples//batch_size):
        batch = mnist.train.next_batch(batch_size)
        # Get images from the batch
        imgs = batch[0].reshape((-1, 28, 28, 1))

        # Add random noise to the input images
        noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
        # Clip the images to be between 0 and 1
        noisy_imgs = np.clip(noisy_imgs, 0., 1.)

        # Noisy images as inputs, original images as targets
        batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
                                                         targets_: imgs})

        print("Epoch: {}/{}...".format(e+1, epochs),
              "Training loss: {:.4f}".format(batch_cost))
```

## Checking out the performance

Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.

```
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)

reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})

for images, row in zip([noisy_imgs, reconstructed], axes):
    for img, ax in zip(images, row):
        ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)

fig.tight_layout(pad=0.1)
```
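To complement the visual check, a quick numeric comparison can also be made. The snippet below is a small sketch reusing the `in_imgs`, `noisy_imgs` and `reconstructed` arrays from the cell above; nothing new is assumed beyond those variables.

```
# Mean squared error against the clean test images: the denoised output
# should sit well below the noisy input if the autoencoder is helping.
mse_noisy = np.mean((noisy_imgs - in_imgs) ** 2)
mse_denoised = np.mean((reconstructed.reshape((10, -1)) - in_imgs) ** 2)
print("MSE noisy vs clean:    {:.4f}".format(mse_noisy))
print("MSE denoised vs clean: {:.4f}".format(mse_denoised))
```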
# Investigating dropped data packets

My team is building a centralized log collection service in AWS and we've found that some data packets are being dropped between our on-site logs server and the other end of a VPN tunnel. I'm hoping to use logs from both the on-site log server and a proxy server on the other end of the tunnel in order to learn more about why packets are being lost.

```
# Libraries
import pandas as pd
import numpy as np
import hashlib
import matplotlib.pyplot as plt
import seaborn as sns
```

## Inspecting the data

```
# Read .csv files
delft_df = pd.read_csv('/home/welced12/Downloads/delft-syslog.csv')
proxy_df = pd.read_csv('/home/welced12/Downloads/proxy-syslog.csv')

delft_df.info()
proxy_df.info()
```

These dataframes contain a few pieces of information about the data packets being sent from delft to the proxy server, including a timestamp, a source ip address, destination ip address, protocol label, data packet length (size), and a field of additional 'Info'. Of particular interest in this case are the Time and Info fields.

```
delft_df.head(5)
proxy_df.head(1500).tail(5)
```

The Time field tells us how many seconds after some starting time t the packet was sent.

```
proxy_df.loc[0,'Info']
delft_df.loc[0,'Info']
```

The Info field contains some additional information that's unique to this data packet. If a data packet is sent from delft, it should be received with the same 'Info' entry, so each log entry from the proxy server should have a matching pair in the logs from delft. In this case, it will be useful to determine which of the packets sent from delft was received by the proxy.

## Working with the data

I want to add a column to the entries from delft saying whether or not they were received by the proxy.

```
# Convert the Info field into something more easily comparable at scale
delft_df['uid'] = [ hashlib.sha1(x.encode('utf-8')).hexdigest() for x in delft_df.Info.values ]
proxy_df['uid'] = [ hashlib.sha1(x.encode('utf-8')).hexdigest() for x in proxy_df.Info.values ]
```

For each of the entries from delft, determine whether that entry appears on the proxy server.

```
# Start by working with a subset
recvd_packets = [ 1 if txt in proxy_df.uid.values[:50000] else 0
                  for txt in delft_df.uid.values[:50000] ]
# sanity check
sum(recvd_packets)
```

Looks like we see evidence of dropped packets in this subset. The next thing I want to do is try to bin the packets by timestamp. I can then use each millisecond as a data point and see how many packets were sent during that millisecond and how many of those were dropped.

```
# Start by looking at the first 50000 packets
# (use .copy() so adding a column doesn't raise a SettingWithCopyWarning)
sent_df = delft_df.head(50000).copy()
sent_df['received'] = recvd_packets
sent_df.head(10)

# Group by timestamp, calculate traffic and drop rate each millisecond.
agg_df = sent_df[['Time','received']].groupby('Time').agg(['mean','count'])
agg_df.head(15)
```

Looks like the first few milliseconds saw no dropped packets, but packet loss did start occurring pretty quickly.

```
# Get ready to start plotting things.
agg_df.columns = agg_df.columns.droplevel()
agg_df = agg_df.reset_index()
agg_df.head(5)

# Histogram of when packets are being sent.
agg_df['Time'].hist(bins=1000)
plt.show()
```

It looks like packets are localized to particular timestamps. I suspect this means that delft is building a list of packets in a buffer and then sending everything in the buffer each second. Let's take a closer look at the spikes.
``` # Rename columns so that they make more sense on a plot agg_df.rename(index=str, columns={'mean':'received rate', 'count':'packets sent'}, inplace=True) # Create a plot for each of the bursts in this data subset for i in range(8): fig, ax = plt.subplots() ax = sns.regplot('Time', 'received rate', agg_df, fit_reg=False) ax2 = ax.twinx() sns.regplot('Time', 'packets sent', agg_df, fit_reg=False, ax=ax2, color='red') ax.set(xlim=(i-0.01,i+0.1)) plt.show() ``` These plots all show very similar behavior with respect to the number of packets being sent and the packet drop rate. It looks like the packets are consistently being sent at a rate of about 120 packets per millisecond. At the beginning of each burst, all of those packets are being received, but after 10 milliseconds or so, packets start being lost. ## Analysis and Conclusions This exploration has provided us with valuable information. We know that data packets are being buffered and sent in batches every second, at a consistent rate. We know that the first packets sent from each buffer are received, at least for a short time, but the length of that time varies. After that, about half of the packets are lost. We also know that this behavior is very consistent, at least over the few seconds of this dataset. What this indicates to me is that whatever process is receiving the data packets on the proxy server is unable to handle the volume of data packets being sent. The first few packets are received without issue, but this process is quickly overloaded, causing the observed packet loss. Our next step is to investigate the process that should be receiving the packets.
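As a quick numeric cross-check of the claims above, the overall and per-burst received rates can be computed directly from the frames built earlier. This is a sketch; `sent_df` and `agg_df` are exactly the objects defined in the cells above.

```
# Overall fraction of the 50,000 sampled packets that reached the proxy
print("Overall received rate: {:.2%}".format(sent_df['received'].mean()))

# Received rate and packet count per one-second burst
# (Time is seconds since the start of the capture, so its integer part labels the burst)
per_burst = sent_df.assign(burst=sent_df['Time'].astype(int)) \
                   .groupby('burst')['received'].agg(['mean', 'count'])
print(per_burst)
```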
<img src='https://radiant-assets.s3-us-west-2.amazonaws.com/PrimaryRadiantMLHubLogo.png' alt='Radiant MLHub Logo' width='300'/>

# CV4A ICLR Crop Type Classification Challenge

# A Guide to Accessing the Data on Radiant MLHub

This notebook walks you through the steps to get access to Radiant MLHub and access the data for the crop type classification competition being organized as part of the [CV4A](https://www.cv4gc.org/cv4a2020/) workshop at 2020 ICLR.

### Radiant MLHub API

The Radiant MLHub API gives access to open Earth imagery training data for machine learning applications. You can learn more about the repository at the [Radiant MLHub site](https://mlhub.earth) and about the organization behind it at the [Radiant Earth Foundation site](https://radiant.earth).

Full documentation for the API is available at [docs.mlhub.earth](https://docs.mlhub.earth).

Each item in our collection is described in JSON format compliant with the [STAC](https://stacspec.org/) [label extension](https://github.com/radiantearth/stac-spec/tree/master/extensions/label) definition.

```
# Required libraries
import requests
from urllib.parse import urlparse
from pathlib import Path
from datetime import datetime

# output path where you want to download the data
output_path = Path("data/")
```

## Authentication

Access to the Radiant MLHub API requires an access token. To get your access token, go to [dashboard.mlhub.earth](https://dashboard.mlhub.earth). If you have not used Radiant MLHub before, you will need to sign up and create a new account. Otherwise, sign in. Under **Usage**, you'll see your access token, which you will need. *Do not share* your access token with others: your usage may be limited and sharing your access token is a security risk.

Copy the access token, and paste it in the box below. This header block will work for all API calls.

```
# copy your access token from dashboard.mlhub.earth and paste it in the following
ACCESS_TOKEN = 'PASTE_YOUR_ACCESS_TOKEN_HERE'

# these headers will be used in each request
headers = {
    'Authorization': f'Bearer {ACCESS_TOKEN}',
    'Accept':'application/json'
}
```

## Retrieving the competition dataset

Datasets are stored as collections in the Radiant MLHub catalog. A collection represents the top-most data level. Typically this means the data comes from the same source for the same geography. It might include different years or sub-geographies.
The two collections for this competition are: - `ref_african_crops_kenya_02_source`: includes the multi-temporal bands of Sentinel-2 - `ref_african_crops_kenya_02_labels`: includes the labels and field IDs ``` def get_download_url(item, asset_key, headers): asset = item.get('assets', {}).get(asset_key, None) if asset is None: print(f'Asset "{asset_key}" does not exist in this item') return None r = requests.get(asset.get('href'), headers=headers, allow_redirects=False) return r.headers.get('Location') def download_label(url, output_path, tileid): filename = urlparse(url).path.split('/')[-1] outpath = output_path/tileid outpath.mkdir(parents=True, exist_ok=True) r = requests.get(url) f = open(outpath/filename, 'wb') for chunk in r.iter_content(chunk_size=512 * 1024): if chunk: f.write(chunk) f.close() print(f'Downloaded {filename}') return def download_imagery(url, output_path, tileid, date): filename = urlparse(url).path.split('/')[-1] outpath = output_path/tileid/date outpath.mkdir(parents=True, exist_ok=True) r = requests.get(url) f = open(outpath/filename, 'wb') for chunk in r.iter_content(chunk_size=512 * 1024): if chunk: f.write(chunk) f.close() print(f'Downloaded {filename}') return ``` ### Downloading Labels The `assets` property of the items in a collection contains all the assets associated with that item and links to download them. The labels for the item will always be the asset with the key `labels`. The following code will go through every item in the collection and download the labels and field_ids raster feature. ``` # paste the id of the labels collection: collectionId = 'ref_african_crops_kenya_02_labels' # these optional parameters can be used to control what items are returned. # Here, we want to download all the items so: limit = 100 bounding_box = [] date_time = [] # retrieves the items and their metadata in the collection r = requests.get(f'https://api.radiant.earth/mlhub/v1/collections/{collectionId}/items', params={'limit':limit, 'bbox':bounding_box,'datetime':date_time}, headers=headers) collection = r.json() # retrieve list of features (in this case tiles) in the collection for feature in collection.get('features', []): assets = feature.get('assets').keys() print("Feature", feature.get('id'), 'with the following assets', list(assets)) for feature in collection.get('features', []): tileid = feature.get('id').split('tile_')[-1][:2] # download labels download_url = get_download_url(feature, 'labels', headers) download_label(download_url, output_path, tileid) #download field_ids download_url = get_download_url(feature, 'field_ids', headers) download_label(download_url, output_path, tileid) ``` ### Downloading Imagery The imagery items associated with the tiles are linked within the links array of the tile metadata. Links which have a rel type of "source" are links to imagery items. By requesting the metadata for the imagery item you can retrieve download URLs for each band of the imagery. ``` # This cell downloads all the multi-spectral images throughout the growing season for this competition. # The size of data is about 1.5 GB, and download time depends on your internet connection. # Note that you only need to run this cell and download the data once. 
for feature in collection.get('features', []): for link in feature.get('links', []): if link.get('rel') != 'source': continue r = requests.get(link['href'], headers=headers) feature = r.json() assets = feature.get('assets').keys() tileid = feature.get('id').split('tile_')[-1][:2] date = datetime.strftime(datetime.strptime(feature.get('properties')['datetime'], "%Y-%m-%dT%H:%M:%SZ"), "%Y%m%d") for asset in assets: download_url = get_download_url(feature, asset, headers) download_imagery(download_url, output_path, tileid, date) ```
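Once the downloads complete, a quick sanity check of what landed on disk can be helpful. The snippet below is a sketch: it only assumes the `output_path/<tileid>/...` folder layout created by `download_label` and `download_imagery` above.

```
# Count the downloaded files per tile id to confirm every tile was fetched
from collections import Counter

file_counts = Counter(
    path.relative_to(output_path).parts[0]   # first folder level below output_path = tile id
    for path in output_path.rglob('*') if path.is_file()
)
for tileid, n_files in sorted(file_counts.items()):
    print(f'Tile {tileid}: {n_files} files')
```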
# Reservoir of Izhikevich neuron models

In this script a reservoir of neuron models governed by the differential equations proposed by Izhikevich is defined.

```
%matplotlib inline
import pyNN.nest as p
from pyNN.random import NumpyRNG, RandomDistribution
from pyNN.utility import Timer
import matplotlib.pyplot as plt
import numpy as np

timer = Timer()
p.setup(timestep=0.1) # 0.1ms
```

## Definition of Inputs

The input can be:
- the joint position of the robot arm (rate coded or temporally coded)

```
poisson_input = p.SpikeSourcePoisson(rate = 10, start = 20.)
#input_neuron = p.Population(2, p.SpikeSourcePoisson, {'rate': 0.7}, label='input')
input_neuron = p.Population(2, poisson_input, label='input')
```

## Definition of neural populations

Izhikevich spiking model with a quadratic non-linearity:

$$\frac{dv}{dt} = 0.04v^2 + 5v + 140 - u + I, \qquad \frac{du}{dt} = a(bv - u)$$

```
n = 500  # number of cells
exc_ratio = 0.8  # ratio of excitatory neurons

n_exc = int(round(n*exc_ratio))
n_inh = n-n_exc
print(n_exc, n_inh)

celltype = p.Izhikevich()
#celltype = p.IF_cond_exp()
print(celltype.get_parameter_names())

exc_cells = p.Population(n_exc, celltype, label="Excitatory_Cells")
inh_cells = p.Population(n_inh, celltype, label="Inhibitory_Cells")

# initialize with a uniform random distribution
# use seeding for reproducibility
rngseed = 98766987
parallel_safe = True
rng = NumpyRNG(seed=rngseed, parallel_safe=parallel_safe)

unifDistr = RandomDistribution('uniform', (-75,-65), rng=rng)
exc_cells.initialize(v=unifDistr)
inh_cells.initialize(v=unifDistr)
```

## Definition of readout neurons

Decide between:
- 2 readout neurons: representing the desired displacement of the two motors
- 1 readout neuron: representing the desired goal position of the joint

```
readout_neurons = p.Population(2, celltype, label="readout_neuron")
```

## Define the connections between the neurons

```
res_pconn = 0.01  # sparse connection probability
input_pconn = 0.4

input_conn = p.FixedProbabilityConnector(input_pconn, rng=rng)
rout_conn = p.AllToAllConnector()
exc_conn = p.FixedProbabilityConnector(res_pconn, rng=rng)
inh_conn = p.FixedProbabilityConnector(res_pconn, rng=rng)

w_exc = 18  # later add unit
w_inh = -52.  # later add unit
w_input = 60.
delay_exc = 1 # defines how long (ms) the synapse takes for transmission delay_inh = 1 stat_syn_exc = p.StaticSynapse(weight =w_exc, delay=delay_exc) stat_syn_inh = p.StaticSynapse(weight =w_inh, delay=delay_inh) weight_distr_input = RandomDistribution('normal', [w_input, 1e-3], rng=rng) weight_distr_exc = RandomDistribution('normal', [w_exc, 1e-3], rng=rng) weight_distr_inh = RandomDistribution('normal', [w_inh, 1e-3], rng=rng) connections = {} connections['e2e'] = p.Projection(exc_cells, exc_cells, exc_conn, synapse_type=stat_syn_exc, receptor_type='excitatory') connections['e2i'] = p.Projection(exc_cells, inh_cells, exc_conn, synapse_type=stat_syn_exc,receptor_type='excitatory') connections['i2e'] = p.Projection(inh_cells, exc_cells, inh_conn, synapse_type=stat_syn_inh,receptor_type='inhibitory') connections['i2i'] = p.Projection(inh_cells, inh_cells, inh_conn, synapse_type=stat_syn_inh,receptor_type='inhibitory') connections['inp2e'] = p.Projection(input_neuron, exc_cells, input_conn, synapse_type=stat_syn_exc,receptor_type='excitatory') connections['inp2i'] = p.Projection(input_neuron, inh_cells, input_conn, synapse_type=stat_syn_exc,receptor_type='excitatory') connections['e2rout'] = p.Projection(exc_cells, readout_neurons, rout_conn, synapse_type=stat_syn_exc,receptor_type='excitatory') connections['i2rout'] = p.Projection(inh_cells, readout_neurons, rout_conn, synapse_type=stat_syn_inh,receptor_type='inhibitory') ``` ## Setup recording and run the simulation ``` readout_neurons.record(['v','spikes']) exc_cells.record(['v','spikes']) p.run(500) ``` ## Plotting the Results ``` p.end() data_rout = readout_neurons.get_data() data_exc = exc_cells.get_data() fig_settings = { 'lines.linewidth': 0.5, 'axes.linewidth': 0.5, 'axes.labelsize': 'small', 'legend.fontsize': 'small', 'font.size': 8 } plt.rcParams.update(fig_settings) plt.figure(1, figsize=(6,8)) def plot_spiketrains(segment): for spiketrain in segment.spiketrains: y = np.ones_like(spiketrain) * spiketrain.annotations['source_id'] plt.plot(spiketrain, y, '.') plt.ylabel(segment.name) plt.setp(plt.gca().get_xticklabels(), visible=False) def plot_signal(signal, index, colour='b'): label = "Neuron %d" % signal.annotations['source_ids'][index] plt.plot(signal.times, signal[:, index], colour, label=label) plt.ylabel("%s (%s)" % (signal.name, signal.units._dimensionality.string)) plt.setp(plt.gca().get_xticklabels(), visible=False) plt.legend() ``` Plot readout neurons ``` n_panels = sum(a.shape[1] for a in data_rout.segments[0].analogsignalarrays) + 2 plt.subplot(n_panels, 1, 1) plot_spiketrains(data_rout.segments[0]) panel = 3 for array in data_rout.segments[0].analogsignalarrays: for i in range(array.shape[1]): plt.subplot(n_panels, 1, panel) plot_signal(array, i, colour='bg'[panel%2]) panel += 1 plt.xlabel("time (%s)" % array.times.units._dimensionality.string) plt.setp(plt.gca().get_xticklabels(), visible=True) plt.savefig("neo_example.png") ``` Plot excitatory cells ``` n_panels = sum(a.shape[1] for a in data_exc.segments[0].analogsignalarrays) + 2 plt.subplot(n_panels, 1, 1) plot_spiketrains(data_exc.segments[0]) panel = 3 for array in data_exc.segments[0].analogsignalarrays: for i in range(array.shape[1]): plt.subplot(n_panels, 1, panel) plot_signal(array, i, colour='bg'[panel%2]) panel += 1 plt.xlabel("time (%s)" % array.times.units._dimensionality.string) plt.setp(plt.gca().get_xticklabels(), visible=True) plt.savefig("neo_example.png") ```
The following is based on the books by B. Rumbos, *Pensando Antes de Actuar: Fundamentos de Elección Racional*, 2009, and G. J. Kerns, *Introduction to Probability and Statistics Using R*, 2014. G. J. Kerns's book is on GitHub: [jkerns/IPSUR](https://github.com/gjkerns/IPSUR)

**Notes:**

* The *prob* package for *R* will be used for the experiments described in this note. Although the experiments could also be built with native *R* functions, preference is given to showing that *R* has packages for a great many applications.
* In some lines `print` is not strictly necessary; it is only used to display the results of the functions in a format similar to R's, since this note was written with *jupyterlab* and *R*.
* Be careful when using the functions of the *prob* package to build large probability spaces, such as rolling a die 9 times... (that experiment has about 10 million possible outcomes)

```
options(repr.plot.width=4, repr.plot.height=4) # this line is only needed when running jupyterlab with R
library(prob)
```

# Types of experiments

There are two kinds: **deterministic and random**. A **deterministic experiment** is one whose outcome can be predicted with certainty before performing it, for example combining hydrogen and oxygen, or adding $2+3$.

A **random experiment** is one whose outcome is determined by **chance**, and for that reason **it is not possible to predict its outcome before performing it**. Examples of random experiments include tossing a coin, rolling a die, throwing a dart at a target, the number of red traffic lights you will hit on the way home, or how many ants walk across the pedestrian crossing in a given amount of time.

# Bias, independence, and fairness

We say that a game of chance is **fair or honest** if its outcomes show no asymmetry (they occur with the same frequency), and the outcomes are **independent** if they show no pattern. Tossing a coin, rolling a die, or spinning a roulette wheel are **fair games** as long as they have not been tampered with in some way.

## Examples:

1) Suppose a teddy bear is tossed into the air. The bear spins several times and lands on the floor in one of four possible positions: face down, face up, sitting, or on its head. Tossing it 100 times gives the number of times it lands in each position, as shown in the following table:

|outcome|face down|face up|sitting|on its head
|:---------:|:-----------:|:------------:|:--------:|:---------:
|# of times| 54|40|5|1

These are clearly **asymmetric** outcomes, since the bear lands face down more than half of the time and lands on its head only one toss in a hundred. The outcomes, however, **are independent**, because the bear landing in a given position is irrelevant to the next toss.

2) Consider an urn with 25 white marbles, 25 red, 25 yellow, and 25 blue. We draw a marble from the urn and **observe** that it is yellow. **Without returning** the yellow marble to the urn we draw another marble (equivalent to having drawn two marbles). Clearly the urn is no longer the same, since it now contains 25 white, red, and blue marbles but only 24 yellow ones. Our expectations for the color of the second marble **are not independent** of the result of having drawn a yellow marble first. If the second marble is, say, red, and **we do not return it either**, then the urn contains 25 white and blue marbles and 24 red and yellow ones. The expectations for the color of the third marble change.
In this example, the outcomes of **drawing coloured marbles in succession without replacement are not independent**.

3) Consider the same urn as in the previous example. Note that if, every time we **draw a marble and record its colour, we return it to the urn**, then drawing the second marble **is independent** of whatever we did before. The reason is that we have, essentially, **the same initial urn**. Furthermore, if the experiment of drawing a marble and recording its colour is repeated (with or without replacement), it is called **ordered sampling**; if the colour is not recorded it is called **unordered sampling**: we have no idea in which order the marbles were chosen, we simply observe one or more marbles, and the order in which they were drawn does not matter for what we observe. This is equivalent to having drawn the marbles and placed them in a bag before looking at what we got.

**Note:** This urn-with-marbles model is used frequently because it is extremely practical for certain abstractions of reality and is considered a class of **general experiments**, since it contains the most common random experiments. For example, tossing a coin twice is equivalent to drawing two marbles from an urn whose marbles are labelled heads and tails. Rolling a die is equivalent to drawing one marble from an urn with six marbles labelled 1 to 6.

4) At a casino we observe that the last five results of the roulette wheel have been: 10 black, 17 black, 4 black, 15 black, and 22 black. Upon seeing this we hear the advice of an experienced gambler: "put all your money on red, red is due." Wisely, we ignore him. The reason is simple: **the roulette wheel is not an urn without replacement; rather, it is an urn with replacement**. On every spin, **every number has the same probability of appearing**, and it is the same wheel. Each spin is **independent** of the others, so **there is no definite pattern**, and previous results do not improve our ability to predict the outcome of the next spin.

## Outcome space or sample space

Suppose an action or experiment can have several possible consequences or results (*outcomes*), and let $S = \{r_1, r_2, \dots, r_n\}$ be the set of possible outcomes. This set is known as the **outcome space or sample space**. For example, if we toss a coin the sample space is *{heads, tails}*, and when rolling a six-sided die the sample space is $\{1,2,3,4,5,6\}$. It is important to note that, in each case, the outcomes are **mutually exclusive**, that is, **they cannot occur simultaneously**. Moreover, **the sample space comprises all possible outcomes**.

### How can the outcome space (sample space) be represented in R?

We can rely on the *data frame* structure, which is a rectangular collection of variables. Each row of the *data frame* corresponds to one outcome of the experiment (although, as will be seen later, the *data frame* only helps us describe the outcome spaces of certain experiments).

#### Example

1) **Experiment:** toss a teddy bear into the air. Then the sample space is:

```
S = data.frame(cae=c('face down', 'face up', 'sitting', 'on its head'))
S
```

2) **Experiment:** draw marbles from an urn. Suppose we have an urn with three marbles labelled $1, 2, 3$ respectively, and $2$ marbles are to be drawn.

#### How can the experiment be performed in R?
The *prob* package provides the function *urnsamples*, whose arguments are $x, size, replace, ordered$. The argument $x$ represents the urn from which the sample is drawn, $size$ is the sample size, and $replace$ and $ordered$ are logical arguments that specify how the sampling is to be done.

#### With replacement, ordered

Since the experiment is with replacement, any of the marbles $1, 2, 3$ can be drawn on any draw; moreover, since it is ordered, **a record is kept of the order of the draws**.

```
print(urnsamples(1:3, size = 2, replace = TRUE, ordered = TRUE))
```

The first column, labelled $X1$, represents the first draw, and the first row represents one realization of the experiment.

**Note:**

* Observe that rows $2$ and $4$ are identical except for the order in which the numbers appear.
* This experiment **is equivalent to** rolling a three-sided die twice. In $R$ this is done with:

```
print(rolldie(2, nsides = 3))
```

#### Without replacement, ordered

Since it is without replacement we will not see, for example, $1, 1$ in any row (the same number twice in a row), and since it is ordered we will have rows of the form $2, 1$ as well as $1, 2$ (they are considered distinct).

```
print(urnsamples(1:3, size=2, replace = F, order = T))
```

**Note:** observe that there are fewer rows in this case, because of the more restrictive sampling procedure. If the numbers $1, 2, 3$ represented "Alicia", "Ana", and "Paulina" respectively, then this experiment would be **equivalent to** choosing two people out of three to be the president and vice-president of some company. The *data frame* above represents all the ways this could be done.

#### Without replacement, unordered

Again we will not see $1, 1$ in any row (the same number twice in a row), and since it is unordered there will be even fewer rows than in the previous case, because draws that duplicate an earlier one (regardless of the order of the numbers) do not appear.

```
print(urnsamples(1:3, size=2, replace = F, order = F))
```

This experiment is **equivalent to** walking up to the urn, looking inside, and picking out a pair of marbles. This is the default of the `urnsamples` function:

```
print(urnsamples(1:3,2))
```

#### With replacement, unordered

The marbles are returned to the urn after each draw, but the order in which they were drawn is not "remembered".

```
print(urnsamples(1:3, size = 2, replace = T, order = F))
```

This experiment is **equivalent to**:

* Shaking two three-sided dice in a cup and then peeking into the cup.
* The ways of distributing two identical golf balls into three boxes labelled 1, 2, and 3.

**Notes on the `urnsamples` function:**

* The urn does not need to contain numbers; that is, a vector $x$ such as `x = c('Red', 'Blue', 'Yellow')` could have been defined.
* The elements of the urn are always distinguishable to `urnsamples`. Therefore situations such as `x = c('Red', 'Red', 'Blue')` are not recommended, since the result may **not be correct** (for example, running an experiment with indistinguishable marbles yields rows in the *data frame* as if `ordered=T` had been used even when `ordered=F` was chosen; similar statements apply to the `replace` argument).
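For reference (this summary is added here and is not part of the original note), the number of rows in each of the four data frames above matches the standard counting formulas for drawing $k = 2$ items from $n = 3$ labels:

$$
n^k = 3^2 = 9 \;\;(\text{ordered, with replacement}), \qquad
\frac{n!}{(n-k)!} = 3\cdot 2 = 6 \;\;(\text{ordered, without replacement}),
$$

$$
\binom{n}{k} = \binom{3}{2} = 3 \;\;(\text{unordered, without replacement}), \qquad
\binom{n+k-1}{k} = \binom{4}{2} = 6 \;\;(\text{unordered, with replacement}).
$$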
## Events

An event $E$ is a collection of outcomes of the experiment, that is, a subset of the sample space.

**Note:** The empty set, $\emptyset$, is an event, since it is a subset of every set; in the context of events it represents an event with no outcomes.

With the notation $S = \{r_1, r_2, \dots, r_n\}$ for the sample space, all possible events are:

$\emptyset , \{r_1\}, \{r_2\}, \dots, \{r_1, r_2\}, \{r_1, r_3\}, \dots \{r_2, r_3\}, \dots \{r_{n-1}, r_n\}, \dots \{r_1, r_2, \dots , r_n\}$

### Occurrence of an event

We say that the event $E$ occurred if the outcome of an experiment belongs to $E$. Usually, events refer to outcomes with some characteristic of interest; for example, if we roll two dice we might be interested in all pairs of numbers whose sum is greater than five. For a population of individuals, we might want to know something about all those who have a certain income level, or who reached a certain educational level, or who had measles as children, and so on.

### Mutually exclusive events

We say that the events $E_1, E_2, \dots$ are mutually exclusive (disjoint) if $E_i \cap E_j = \emptyset$ $\forall E_i \neq E_j$ (at most one of them can occur).

**Note:** Since events are subsets, the usual set operations can be applied to them; in the definition above the intersection $\cap$ was used, and $E_i \cap E_j$ consists of all outcomes common to $E_i$ and $E_j$. For example, when tossing a coin, the events $E_1=\{\text{get heads}\}$ and $E_2 = \{\text{get tails}\}$ are mutually exclusive, whereas $E_1 = \{\text{today is sunny}\}$ and $E_2 = \{\text{today is cloudy}\}$ are not mutually exclusive, since there are days that are both cloudy and sunny.

#### Examples

##### 1) Tossing two coins

```
S <- tosscoin(2, makespace = TRUE) # toss two coins
print(S[1:3, ]) # three events
print(S[c(2,4), ])
```

##### 2) A deck of cards

```
S <- cards()
print(subset(S, suit == 'Heart')) # events extracted from the sample space that satisfy
                                  # a logical expression
print(subset(S, rank %in% 7:9))   # %in% checks whether each element of one vector is
                                  # contained somewhere in another; here it checks, for each
                                  # row of the rank column of S, that it lies in c(7,8,9)
```

##### 3) Rolling three dice

```
print(subset(rolldie(3), X1+X2+X3 > 16)) # mathematical expressions are also accepted
```

##### 4) Rolling four dice

```
S <- rolldie(4)
print(subset(S, isin(S, c(2,2,6), ordered = TRUE))) # isin checks that the whole vector
                                                    # c(2,2,6) appears in each row of the
                                                    # data.frame S
```

**Note:** other functions of the `prob` package that are useful for building sample spaces are `countrep` and `isrep`. See the help pages of these functions.

**Note:** observe that `%in%` and `isin` do not do the same thing:

```
x <- 1:10
y <- c(3,3,7)
all(y %in% x) # this line checks that 3 is in x, that 3 is in x, and that 7 is in x,
              # and returns the logical value of the three checks, in this case all(c(T,T,T))
isin(x,y)     # checks that c(3,3,7) is in x
```

### Events from set operations

#### Union, intersection, and difference

An event is a subset, and as such set operations can be applied to obtain new events. In R the functions `union`, `intersect`, and `setdiff` are used for these operations.
For example:

```
S <- cards()
A <- subset(S, suit == "Heart")
B <- subset(S, rank %in% 7:9)
head(union(A,B)) # head is used to show only a few rows of the result
intersect(A,B)
head(setdiff(A,B))
head(setdiff(B,A))
head(setdiff(S,A)) # this computes A^c (the complement of A, defined as S\A)
```

## Probability models

#### Measure-theoretic model

This model consists of defining a probability measure on the sample space. Such a measure is a mathematical function that satisfies certain axioms and has certain mathematical properties. There is a wide range of probability measures, of which a single one is chosen based on the experiments and on the person who will carry them out. Once the probability measure has been chosen, all probability assignments to events are made by it. This model is suggested for experiments that exhibit symmetry, for example tossing a coin. If the experiment does not exhibit symmetry, or if one wishes to incorporate subjective knowledge into the model, choosing the probability measure becomes more difficult. Andrey Nikolaevich Kolmogorov revolutionized probability theory with this model.

#### Frequentist model

This model states that the way to assign probabilities to events is by repeating the experiment many times under the same conditions. For example, if we want to compute the probability of the event $E=\{\text{get heads}\}$, then:

$$P(E) \approx \frac{n_E}{n}$$

where $n_E$ is the observed number of heads (occurrences of the event $E$) in $n$ experiments. This model relies on the **strong law of large numbers**, which states that, for independent experiments performed under the same conditions, if $n \rightarrow \infty$ then $\frac{n_E}{n} \rightarrow P(E)$. In this approach, probability provides a quantitative measure of how often we can expect an event to occur. This model is suggested even when the experiments are not symmetric (the case of the previous model), but the computation of the probability is based on an "in the long run" approximation, so it is not known exactly; nor does it work for experiments that cannot be repeated indefinitely, such as the probability of the event {it will rain on day $x$} or {an earthquake in zone $z$}. Richard von Mises was an important figure in promoting this model, and some of his ideas were incorporated into the measure-theoretic model.

#### Subjective model

Probability is interpreted as a "degree of belief" that the event will occur, according to the person performing the experiment. The estimate of the probability of an event is based on the individual's knowledge at a given point in time; however, as more knowledge is acquired, the estimate is modified/updated accordingly. The typical method for updating the probability is **Bayes' rule (formula, theorem)**. For example, suppose that at the start of a coin-tossing experiment, for the event {heads}, the observer assigns $P(\{\text{heads}\}) = \frac{1}{2}$. However, for some reason the observer has additional information about the coin or about the person who will toss it, and therefore **decides** to move her **initial** assignment of the probability of heads away from the value $\frac{1}{2}$. Probability is defined as the (personal) degree of belief or certainty that the event will happen.
This model is suggested in situations where it is not possible to repeat the experiment indefinitely, where reliable data are lacking, or where repetition is practically impossible. However, when analysing situations for which the data are scarce, questionable, or non-existent, subjective probabilities can differ enormously. One sports analyst may think the Cavaliers will win the championship with 60% certainty, while another may assert that the Los Angeles Lakers will be champions with 95% certainty. Pierre-Simon Laplace, Frank Ramsey, Bruno de Finetti, Leonard Savage, and John Keynes were among the people who popularized this model.

**Note:** When working with a large amount of data, the frequentist and subjective models tend to coincide; but when data are scarce or practically non-existent, the interpretations differ.

#### Equally likely (equiprobable) model

This model assigns the same probability to every outcome of an experiment, and it can be found within the previous models:

* In the measure-theoretic model, when the experiment exhibits some kind of symmetry, for example tossing a fair coin, rolling a fair die, or throwing a dart at a target whose regions have the same radius.
* In the subjective model, when the person performing the experiment is ignorant of, or indifferent about, their degree of belief in the outcome of the experiment.
* In the frequentist model, when observing the proportion of times that tossing a coin yields heads.

Note that this model can be used whenever all the outcomes of an experiment can be enumerated.

### How can a probability space be represented in R?

One option is to use an object, `S`, representing the *outcomes* of the experiment, together with a vector of probabilities, `probs`, whose entries correspond to each outcome in `S`. The *prob* package also provides a function `probspace` whose arguments are $x$, a sample space of outcomes, and $probs$, a vector with the same length as the number of outcomes in $x$.

### Examples

#### 1) Rolling a fair die

```
outcomes <- rolldie(1)
p <- rep(1/6, times = 6)
probspace(outcomes, probs = p)
# equivalently, one could have run:
# probspace(1:6, probs = p), or probspace(1:6), or rolldie(1, makespace = TRUE)
```

#### 2) Tossing a loaded coin

Suppose that $P(\{\text{H}\}) = 0.7$ and $P(\{\text{T}\}) = 0.3$; then:

```
probspace(tosscoin(1), probs = c(0.70, 0.30))
```

**Exercise:** how could the probability space above be built with the `urnsamples` function?
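As a small worked illustration, added here for reference (it is not part of the original text and does not answer the exercise): under the equally likely model used in Example 1, every event $A \subseteq S$ receives a probability equal to its relative size, so for the fair die

$$
P(A) = \frac{\#(A)}{\#(S)}, \qquad P(\{\text{even number}\}) = P(\{2,4,6\}) = \frac{3}{6} = \frac{1}{2},
$$

which is exactly what the $1/6$ assignment produced by `probspace` above implies.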
```
import numpy as np
import pandas as pd
import pickle

from keras.utils import to_categorical
from keras.utils import plot_model
from keras.models import Model
from keras.layers import Input, Dense, LSTM, Embedding, Dropout, RepeatVector, TimeDistributed, Merge, Masking
from keras.layers.merge import add, concatenate
from keras.callbacks import ModelCheckpoint
from keras.optimizers import SGD

def load_npy(path):
    with open(path, "rb") as handle:
        arr = np.load(handle)
        handle.close()
    return (arr)

X_train_photos = load_npy("../data/preprocessed/X_train_photos.npy")
X_train_captions = load_npy("../data/preprocessed/X_train_captions.npy")
embedding_matrix = load_npy("../data/embedding_matrix/embedding_matrix.npy")
y_train = load_npy("../data/preprocessed/y_train.npy")

print(X_train_photos.shape)
print(X_train_captions.shape)
print(y_train.shape)
print(embedding_matrix.shape)

VOCAB_SIZE = 30212

inputs_photo = Input(shape = (4096,), name="Inputs-photo")
drop1 = Dropout(0.5)(inputs_photo)
dense1 = Dense(256, activation='relu')(drop1)

inputs_caption = Input(shape=(15,), name = "Inputs-caption")
embedding = Embedding(VOCAB_SIZE, 300, mask_zero = True, trainable = False, weights=[embedding_matrix])(inputs_caption)
drop2 = Dropout(0.5)(embedding)
lstm1 = LSTM(256)(drop2)

merged = concatenate([dense1, lstm1])
dense2 = Dense(256, activation='relu')(merged)
outputs = Dense(VOCAB_SIZE, activation='softmax')(dense2)

model = Model(inputs=[inputs_photo, inputs_caption], outputs=outputs)
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd)
print(model.summary())

plot_model(model, to_file='images/model1.png', show_shapes=True, show_layer_names=False)
```

![](images/model1.png)

```
model.fit([X_train_photos,X_train_captions], to_categorical(y_train, VOCAB_SIZE), epochs = 1, verbose = 1)

inputs_photo = Input(shape = (4096,), name="Inputs-photo")
drop1 = Dropout(0.5)(inputs_photo)
dense1 = Dense(300, activation='relu')(drop1)
cnn_feats = Masking()(RepeatVector(1)(dense1))

inputs_caption = Input(shape=(15,), name = "Inputs-caption")
embedding = Embedding(VOCAB_SIZE, 300, mask_zero = True, trainable = False, weights=[embedding_matrix])(inputs_caption)

merged = concatenate([cnn_feats, embedding], axis=1)
lstm_layer = LSTM(units=300, input_shape=(15 + 1, 300), return_sequences=False, dropout=.5)(merged)
outputs = Dense(units=VOCAB_SIZE, activation='softmax')(lstm_layer)

model = Model(inputs=[inputs_photo, inputs_caption], outputs=outputs)
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='sparse_categorical_crossentropy', optimizer=sgd)
print(model.summary())

plot_model(model, to_file='images/model6.png', show_shapes=True, show_layer_names=False)
```

![](images/model6.png)

```
model.fit([X_train_photos,X_train_captions], y_train, epochs = 1, verbose = 1)
```
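The notebook stops after fitting, but a trained captioning model of this shape can in principle generate a caption token by token. Below is a minimal greedy-decoding sketch. It assumes a `word_to_index` / `index_to_word` mapping and `<start>` / `<end>` tokens that are *not* defined in this notebook (they would come from whatever preprocessing produced `X_train_captions`), so treat it as an illustration under those assumptions rather than working code for this exact pipeline.

```
import numpy as np
from keras.preprocessing.sequence import pad_sequences

MAX_LEN = 15  # caption window used by the models above

def greedy_caption(model, photo_feats, word_to_index, index_to_word, max_len=MAX_LEN):
    """Generate a caption for one 4096-d photo feature vector by repeatedly
    predicting the most likely next word (assumed '<start>'/'<end>' tokens)."""
    seq = [word_to_index['<start>']]
    for _ in range(max_len):
        padded = pad_sequences([seq], maxlen=max_len)                      # shape (1, 15)
        preds = model.predict([photo_feats.reshape(1, -1), padded], verbose=0)
        next_id = int(np.argmax(preds[0]))                                 # greedy choice
        if index_to_word.get(next_id) == '<end>':
            break
        seq.append(next_id)
    return ' '.join(index_to_word.get(i, '<unk>') for i in seq[1:])

# e.g. greedy_caption(model, X_train_photos[0], word_to_index, index_to_word)
```

Beam search would usually give better captions than this greedy loop, but the greedy version is the shortest way to show how the two model inputs are fed at inference time.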
## Importing the guns file

```
import csv
import datetime

# Open and load the guns file
data = list(csv.reader(open("guns.csv")))

# QC of the loaded file
print(data[0:4])
```

## Removing the header from the list

```
headers = data[0]
data = data[1:]

# QC of the sliced header
print(headers)
print(data[0:4])
```

## Extracting how many gun deaths occur per year

```
years = [row[1] for row in data]
#print(years)

year_counts = {}
for year in years:
    if year in year_counts:
        year_counts[year] = year_counts[year] + 1
    else:
        year_counts[year] = 1
year_counts
```

## Extracting how many gun deaths occur per month/year

```
dates = [
    datetime.datetime(year=int(row[1]), month=int(row[2]), day=1)
    for row in data
]
#print(dates[0:4])

date_counts = {}
for date in dates:
    if date in date_counts:
        date_counts[date] = date_counts[date] + 1
    else:
        date_counts[date] = 1
date_counts
```

## Extracting how deaths vary by gender and race

```
sex_counts = {}
sexs = [row[5] for row in data]
for sex in sexs:
    if sex in sex_counts:
        sex_counts[sex] = sex_counts[sex] + 1
    else:
        sex_counts[sex] = 1
sex_counts

race_counts = {}
races = [row[7] for row in data]
for race in races:
    if race in race_counts:
        race_counts[race] = race_counts[race] + 1
    else:
        race_counts[race] = 1
race_counts
```

## Observations

Observations:
- About 5.5 times more male (M) deaths than female (F)
- Race: mainly White and Black

To check:
- gender & race & age
- gender & place
- gender & race & education
- how many police were involved, by month

## Importing the second file, census.csv

```
# Open and load the census.csv file
census = list(csv.reader(open("census.csv")))
census
```

## Calculating deaths by race per 100,000 people

```
mapping = {
    "Asian/Pacific Islander": 15159516 + 674625,
    "Black": 40250635,
    "Native American/Native Alaskan": 3739506,
    "Hispanic": 44618105,
    "White": 197318956
}

race_per_hundredk = {}
for k, v in race_counts.items():
    race_per_hundredk[k] = v / mapping[k] * 100000
race_per_hundredk
```

## Filtering by intent

```
intents = [row[3] for row in data]
#intents
races = [row[7] for row in data]

# Count only the deaths whose intent is "Homicide"
homicide_race_counts = {}
for i, race in enumerate(races):
    if intents[i] == "Homicide":
        if race not in homicide_race_counts:
            homicide_race_counts[race] = 0
        homicide_race_counts[race] = homicide_race_counts[race] + 1
homicide_race_counts

race_per_hundredk = {}
for k, v in homicide_race_counts.items():
    race_per_hundredk[k] = (v / mapping[k]) * 100000
race_per_hundredk
```

In the US, gun homicides disproportionately affect the Black and Hispanic populations.

Some areas to investigate further:

* The link between month and homicide rate.
* Homicide rate by gender.
* The rates of other intents by gender and race.
* Gun death rates by location and education.

A starting sketch for the first two items is given below.
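The same counting pattern used above can be reused after filtering on the intent column (a minimal sketch; column positions and the `"Homicide"` label are the ones used earlier in this notebook):

```
# Homicides per month (summed across all years) and per gender
homicides = [row for row in data if row[3] == "Homicide"]

homicide_month_counts = {}
for row in homicides:
    month = int(row[2])
    homicide_month_counts[month] = homicide_month_counts.get(month, 0) + 1

homicide_sex_counts = {}
for row in homicides:
    sex = row[5]
    homicide_sex_counts[sex] = homicide_sex_counts.get(sex, 0) + 1

print(homicide_month_counts)
print(homicide_sex_counts)
```

Turning the gender counts into rates per 100,000 would additionally require gender population totals, which are not extracted from `census.csv` in this notebook.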
# Single Qubit Gates In the previous section we looked at all the possible states a qubit could be in. We saw that qubits could be represented by 2D vectors, and that their states are limited to the form: $$ |q\rangle = \cos{(\tfrac{\theta}{2})}|0\rangle + e^{i\phi}\sin{\tfrac{\theta}{2}}|1\rangle $$ Where $\theta$ and $\phi$ are real numbers. In this section we will cover _gates,_ the operations that change a qubit between these states. Due to the number of gates and the similarities between them, this chapter is at risk of becoming a list. To counter this, we have included a few digressions to introduce important ideas at appropriate places throughout the chapter. In _The Atoms of Computation_ we came across some gates and used them to perform a classical computation. An important feature of quantum circuits is that, between initialising the qubits and measuring them, the operations (gates) are *_always_* reversible! These reversible gates can be represented as matrices, and as rotations around the Bloch sphere. ``` from qiskit import QuantumCircuit, assemble, Aer from math import pi, sqrt from qiskit.visualization import plot_bloch_multivector, plot_histogram sim = Aer.get_backend('aer_simulator') ``` ## 1. The Pauli Gates <a id="pauli"></a> You should be familiar with the Pauli matrices from the linear algebra section. If any of the maths here is new to you, you should use the linear algebra section to bring yourself up to speed. We will see here that the Pauli matrices can represent some very commonly used quantum gates. ### 1.1 The X-Gate <a id="xgate"></a> The X-gate is represented by the Pauli-X matrix: $$ X = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = |0\rangle\langle1| + |1\rangle\langle0| $$ To see the effect a gate has on a qubit, we simply multiply the qubit’s statevector by the gate. We can see that the X-gate switches the amplitudes of the states $|0\rangle$ and $|1\rangle$: $$ X|0\rangle = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} = |1\rangle$$ <!-- ::: q-block.reminder --> ## Reminders <details> <summary>Multiplying Vectors by Matrices</summary> Matrix multiplication is a generalisation of the inner product we saw in the last chapter. 
In the specific case of multiplying a vector by a matrix (as seen above), we always get a vector back:

$$ M|v\rangle = \begin{bmatrix}a & b \\ c & d \end{bmatrix}\begin{bmatrix}v_0 \\ v_1 \end{bmatrix} = \begin{bmatrix}a\cdot v_0 + b \cdot v_1 \\ c \cdot v_0 + d \cdot v_1 \end{bmatrix} $$

In quantum computing, we can write our matrices in terms of basis vectors:

$$X = |0\rangle\langle1| + |1\rangle\langle0|$$

This can sometimes be clearer than using a box matrix as we can see what different multiplications will result in:

$$
\begin{aligned}
X|1\rangle & = (|0\rangle\langle1| + |1\rangle\langle0|)|1\rangle \\
& = |0\rangle\langle1|1\rangle + |1\rangle\langle0|1\rangle \\
& = |0\rangle \times 1 + |1\rangle \times 0 \\
& = |0\rangle
\end{aligned}
$$

In fact, when we see a ket and a bra multiplied like this:

$$ |a\rangle\langle b| $$

this is called the _outer product_, which follows the rule:

$$
|a\rangle\langle b| =
\begin{bmatrix}
a_0 b_0 & a_0 b_1 & \dots & a_0 b_n\\
a_1 b_0 & \ddots & & \vdots \\
\vdots & & \ddots & \vdots \\
a_n b_0 & \dots & \dots & a_n b_n \\
\end{bmatrix}
$$

We can see this does indeed result in the X-matrix as seen above:

$$ |0\rangle\langle1| + |1\rangle\langle0| = \begin{bmatrix}0 & 1 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix}0 & 0 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix}0 & 1 \\ 1 & 0 \end{bmatrix} = X $$

</details>

<!-- ::: -->

In Qiskit, we can create a short circuit to verify this:

```
# Let's do an X-gate on a |0> qubit
qc = QuantumCircuit(1)
qc.x(0)
qc.draw()
```

Let's see the result of the above circuit. **Note:** Here we use `plot_bloch_multivector()` which takes a qubit's statevector instead of the Bloch vector.

```
# Let's see the result
qc.save_statevector()
qobj = assemble(qc)
state = sim.run(qobj).result().get_statevector()
plot_bloch_multivector(state)
```

We can indeed see the state of the qubit is $|1\rangle$ as expected. We can think of this as a rotation by $\pi$ radians around the *x-axis* of the Bloch sphere. The X-gate is also often called a NOT-gate, referring to its classical analogue.

### 1.2 The Y & Z-gates <a id="ynzgatez"></a>

Similarly to the X-gate, the Y & Z Pauli matrices also act as the Y & Z-gates in our quantum circuits:

$$ Y = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} \quad\quad\quad\quad Z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} $$

$$ Y = -i|0\rangle\langle1| + i|1\rangle\langle0| \quad\quad Z = |0\rangle\langle0| - |1\rangle\langle1| $$

And, unsurprisingly, they also respectively perform rotations by $\pi$ around the y and z-axis of the Bloch sphere. Below is a widget that displays a qubit's state on the Bloch sphere; pressing one of the buttons will perform the gate on the qubit:

```
# Run the code in this cell to see the widget
from qiskit_textbook.widgets import gate_demo
gate_demo(gates='pauli')
```

In Qiskit, we can apply the Y and Z-gates to our circuit using:

```
qc.y(0) # Do Y-gate on qubit 0
qc.z(0) # Do Z-gate on qubit 0
qc.draw()
```

## 2. Digression: The X, Y & Z-Bases <a id="xyzbases"></a>

<!-- ::: q-block.reminder -->

## Reminders

<details>
<summary>Eigenvectors of Matrices</summary>

We have seen that multiplying a vector by a matrix results in a vector:

$$ M|v\rangle = |v'\rangle \leftarrow \text{new vector} $$

If we choose the right vectors and matrices, we can find a case in which this matrix multiplication is the same as doing a multiplication by a scalar:

$$ M|v\rangle = \lambda|v\rangle $$

(Above, $M$ is a matrix, and $\lambda$ is a scalar).
For a matrix $M$, any vector that has this property is called an <i>eigenvector</i> of $M$. For example, the eigenvectors of the Z-matrix are the states $|0\rangle$ and $|1\rangle$: $$ \begin{aligned} Z|0\rangle & = |0\rangle \\ Z|1\rangle & = -|1\rangle \end{aligned} $$ Since we use vectors to describe the state of our qubits, we often call these vectors <i>eigenstates</i> in this context. Eigenvectors are very important in quantum computing, and it is important you have a solid grasp of them. </details> <!-- ::: --> You may also notice that the Z-gate appears to have no effect on our qubit when it is in either of these two states. This is because the states $|0\rangle$ and $|1\rangle$ are the two _eigenstates_ of the Z-gate. In fact, the _computational basis_ (the basis formed by the states $|0\rangle$ and $|1\rangle$) is often called the Z-basis. This is not the only basis we can use, a popular basis is the X-basis, formed by the eigenstates of the X-gate. We call these two vectors $|+\rangle$ and $|-\rangle$: $$ |+\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle) = \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix}$$ $$ |-\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle) = \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \end{bmatrix} $$ Another less commonly used basis is that formed by the eigenstates of the Y-gate. These are called: $$ |\circlearrowleft\rangle, \quad |\circlearrowright\rangle$$ We leave it as an exercise to calculate these. There are in fact an infinite number of bases; to form one, we simply need two orthogonal vectors. The eigenvectors of both Hermitian and unitary matrices form a basis for the vector space. Due to this property, we can be sure that the eigenstates of the X-gate and the Y-gate indeed form a basis for 1-qubit states (read more about this in the [linear algebra page](/course/ch-appendix/an-introduction-to-linear-algebra-for-quantum-computing) in the appendix) ### Quick Exercises 1. Verify that $|+\rangle$ and $|-\rangle$ are in fact eigenstates of the X-gate. 2. What eigenvalues do they have? 3. Find the eigenstates of the Y-gate, and their co-ordinates on the Bloch sphere. Using only the Pauli-gates it is impossible to move our initialized qubit to any state other than $|0\rangle$ or $|1\rangle$, i.e. we cannot achieve superposition. This means we can see no behaviour different to that of a classical bit. To create more interesting states we will need more gates! ## 3. The Hadamard Gate <a id="hgate"></a> The Hadamard gate (H-gate) is a fundamental quantum gate. It allows us to move away from the poles of the Bloch sphere and create a superposition of $|0\rangle$ and $|1\rangle$. It has the matrix: $$ H = \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} $$ We can see that this performs the transformations below: $$ H|0\rangle = |+\rangle $$ $$ H|1\rangle = |-\rangle $$ This can be thought of as a rotation around the Bloch vector `[1,0,1]` (the line between the x & z-axis), or as transforming the state of the qubit between the X and Z bases. You can play around with these gates using the widget below: ``` # Run the code in this cell to see the widget from qiskit_textbook.widgets import gate_demo gate_demo(gates='pauli+h') ``` ### Quick Exercise 1. Write the H-gate as the outer products of vectors $|0\rangle$, $|1\rangle$, $|+\rangle$ and $|-\rangle$. 2. Show that applying the sequence of gates: HZH, to any qubit state is equivalent to applying an X-gate. 3. 
Find a combination of X, Z and H-gates that is equivalent to a Y-gate (ignoring global phase). ## 4. Digression: Measuring in Different Bases <a id="measuring"></a> We have seen that the Z-axis is not intrinsically special, and that there are infinitely many other bases. Similarly with measurement, we don’t always have to measure in the computational basis (the Z-basis), we can measure our qubits in any basis. As an example, let’s try measuring in the X-basis. We can calculate the probability of measuring either $|+\rangle$ or $|-\rangle$: $$ p(|+\rangle) = |\langle+|q\rangle|^2, \quad p(|-\rangle) = |\langle-|q\rangle|^2 $$ And after measurement, the superposition is destroyed. Since Qiskit only allows measuring in the Z-basis, we must create our own using Hadamard gates: ``` # Create the X-measurement function: def x_measurement(qc, qubit, cbit): """Measure 'qubit' in the X-basis, and store the result in 'cbit'""" qc.h(qubit) qc.measure(qubit, cbit) return qc initial_state = [1/sqrt(2), -1/sqrt(2)] # Initialize our qubit and measure it qc = QuantumCircuit(1,1) qc.initialize(initial_state, 0) x_measurement(qc, 0, 0) # measure qubit 0 to classical bit 0 qc.draw() ``` In the quick exercises above, we saw you could create an X-gate by sandwiching our Z-gate between two H-gates: $$ X = HZH $$ Starting in the Z-basis, the H-gate switches our qubit to the X-basis, the Z-gate performs a NOT in the X-basis, and the final H-gate returns our qubit to the Z-basis. We can verify this always behaves like an X-gate by multiplying the matrices: $$ HZH = \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} =X $$ Following the same logic, we have created an X-measurement by transforming from the X-basis to the Z-basis before our measurement. Since the process of measuring can have different effects depending on the system (e.g. some systems always return the qubit to $|0\rangle$ after measurement, whereas others may leave it as the measured state), the state of the qubit post-measurement is undefined and we must reset it if we want to use it again. There is another way to see why the Hadamard gate indeed takes us from the Z-basis to the X-basis. Suppose the qubit we want to measure in the X-basis is in the (normalized) state $a |0\rangle + b |1\rangle$. To measure it in X-basis, we first express the state as a linear combination of $|+\rangle$ and $|-\rangle$. Using the relations $|0\rangle = \frac{|+\rangle + |-\rangle}{\sqrt{2}}$ and $|1\rangle = \frac{|+\rangle - |-\rangle}{\sqrt{2}}$, the state becomes $\frac{a + b}{\sqrt{2}}|+\rangle + \frac{a - b}{\sqrt{2}}|-\rangle$. Observe that the probability amplitudes in X-basis can be obtained by applying a Hadamard matrix on the state vector expressed in Z-basis. Let’s now see the results: ``` qobj = assemble(qc) # Assemble circuit into a Qobj that can be run counts = sim.run(qobj).result().get_counts() # Do the simulation, returning the state vector plot_histogram(counts) # Display the output on measurement of state vector ``` We initialized our qubit in the state $|-\rangle$, but we can see that, after the measurement, we have collapsed our qubit to the state $|1\rangle$. If you run the cell again, you will see the same result, since along the X-basis, the state $|-\rangle$ is a basis state and measuring it along X will always yield the same result. ### Quick Exercises 1. 
If we initialize our qubit in the state $|+\rangle$, what is the probability of measuring it in state $|-\rangle$? 2. Use Qiskit to display the probability of measuring a $|0\rangle$ qubit in the states $|+\rangle$ and $|-\rangle$ (**Hint:** you might want to use `.get_counts()` and `plot_histogram()`). 3. Try to create a function that measures in the Y-basis. Measuring in different bases allows us to see Heisenberg’s famous uncertainty principle in action. Having certainty of measuring a state in the Z-basis removes all certainty of measuring a specific state in the X-basis, and vice versa. A common misconception is that the uncertainty is due to the limits in our equipment, but here we can see the uncertainty is actually part of the nature of the qubit. For example, if we put our qubit in the state $|0\rangle$, our measurement in the Z-basis is certain to be $|0\rangle$, but our measurement in the X-basis is completely random! Similarly, if we put our qubit in the state $|-\rangle$, our measurement in the X-basis is certain to be $|-\rangle$, but now any measurement in the Z-basis will be completely random. More generally: _Whatever state our quantum system is in, there is always a measurement that has a deterministic outcome._ The introduction of the H-gate has allowed us to explore some interesting phenomena, but we are still very limited in our quantum operations. Let us now introduce a new type of gate: ## 5. The P-gate <a id="rzgate"></a> The P-gate (phase gate) is _parametrised,_ that is, it needs a number ($\phi$) to tell it exactly what to do. The P-gate performs a rotation of $\phi$ around the Z-axis direction. It has the matrix form: $$ P(\phi) = \begin{bmatrix} 1 & 0 \\ 0 & e^{i\phi} \end{bmatrix} $$ Where $\phi$ is a real number. You can use the widget below to play around with the P-gate, specify $\phi$ using the slider: ``` # Run the code in this cell to see the widget from qiskit_textbook.widgets import gate_demo gate_demo(gates='pauli+h+p') ``` In Qiskit, we specify a P-gate using `p(phi, qubit)`: ``` qc = QuantumCircuit(1) qc.p(pi/4, 0) qc.draw() ``` You may notice that the Z-gate is a special case of the P-gate, with $\phi = \pi$. In fact there are three more commonly referenced gates we will mention in this chapter, all of which are special cases of the P-gate: ## 6. The I, S and T-gates <a id="istgates"></a> ### 6.1 The I-gate <a id="igate"></a> First comes the I-gate (aka ‘Id-gate’ or ‘Identity gate’). This is simply a gate that does nothing. Its matrix is the identity matrix: $$ I = \begin{bmatrix} 1 & 0 \\ 0 & 1\end{bmatrix} $$ Applying the identity gate anywhere in your circuit should have no effect on the qubit state, so it’s interesting this is even considered a gate. There are two main reasons behind this, one is that it is often used in calculations, for example: proving the X-gate is its own inverse: $$ I = XX $$ The second, is that it is often useful when considering real hardware to specify a ‘do-nothing’ or ‘none’ operation. #### Quick Exercise 1. What are the eigenstates of the I-gate? ### 6.2 The S-gates <a id="sgate"></a> The next gate to mention is the S-gate (sometimes known as the $\sqrt{Z}$-gate), this is a P-gate with $\phi = \pi/2$. It does a quarter-turn around the Bloch sphere. It is important to note that unlike every gate introduced in this chapter so far, the S-gate is **not** its own inverse! As a result, you will often see the S<sup>†</sup>-gate, (also “S-dagger”, “Sdg” or $\sqrt{Z}^\dagger$-gate). 
The S<sup>†</sup>-gate is simply a P-gate with $\phi = -\pi/2$:

$$ S = \begin{bmatrix} 1 & 0 \\ 0 & e^{\frac{i\pi}{2}} \end{bmatrix}, \quad S^\dagger = \begin{bmatrix} 1 & 0 \\ 0 & e^{-\frac{i\pi}{2}} \end{bmatrix}$$

The name "$\sqrt{Z}$-gate" is due to the fact that two successively applied S-gates have the same effect as one Z-gate:

$$ SS|q\rangle = Z|q\rangle $$

This notation is common throughout quantum computing. To add an S-gate in Qiskit:

```
qc = QuantumCircuit(1)
qc.s(0)   # Apply S-gate to qubit 0
qc.sdg(0) # Apply Sdg-gate to qubit 0
qc.draw()
```

### 6.3 The T-gate <a id="tgate"></a>

The T-gate is a very commonly used gate; it is a P-gate with $\phi = \pi/4$:

$$ T = \begin{bmatrix} 1 & 0 \\ 0 & e^{\frac{i\pi}{4}} \end{bmatrix}, \quad T^\dagger = \begin{bmatrix} 1 & 0 \\ 0 & e^{-\frac{i\pi}{4}} \end{bmatrix}$$

As with the S-gate, the T-gate is sometimes also known as the $\sqrt[4]{Z}$-gate. In Qiskit:

```
qc = QuantumCircuit(1)
qc.t(0)   # Apply T-gate to qubit 0
qc.tdg(0) # Apply Tdg-gate to qubit 0
qc.draw()
```

You can use the widget below to play around with all the gates introduced in this chapter so far:

```
# Run the code in this cell to see the widget
from qiskit_textbook.widgets import gate_demo
gate_demo()
```

## 7. The U-gate <a id="generalU"></a>

As we saw earlier, the I, Z, S & T-gates were all special cases of the more general P-gate. In the same way, the U-gate is the most general of all single-qubit quantum gates. It is a parametrised gate of the form:

$$
U(\theta, \phi, \lambda) = \begin{bmatrix} \cos(\frac{\theta}{2}) & -e^{i\lambda}\sin(\frac{\theta}{2}) \\
e^{i\phi}\sin(\frac{\theta}{2}) & e^{i(\phi+\lambda)}\cos(\frac{\theta}{2})
\end{bmatrix}
$$

Every gate in this chapter could be specified as $U(\theta,\phi,\lambda)$, but it is unusual to see this in a circuit diagram, possibly due to the difficulty in reading this. As an example, we see some specific cases of the U-gate in which it is equivalent to the H-gate and P-gate respectively.

$$
\begin{aligned}
U(\tfrac{\pi}{2}, 0, \pi) = \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} = H
& \quad &
U(0, 0, \lambda) = \begin{bmatrix} 1 & 0 \\ 0 & e^{i\lambda}\\ \end{bmatrix} = P
\end{aligned}
$$

```
# Let's have U-gate transform a |0> to |+> state
qc = QuantumCircuit(1)
qc.u(pi/2, 0, pi, 0)
qc.draw()

# Let's see the result
qc.save_statevector()
qobj = assemble(qc)
state = sim.run(qobj).result().get_statevector()
plot_bloch_multivector(state)
```

It should be obvious from this that there are an infinite number of possible gates, and that this also includes R<sub>x</sub> and R<sub>y</sub>-gates, although they are not mentioned here. It must also be noted that there is nothing special about the Z-basis, except that it has been selected as the standard computational basis. Qiskit also provides the X equivalent of the S and Sdg-gate, i.e. the SX-gate and SXdg-gate respectively. These gates do a quarter-turn with respect to the X-axis around the Bloch sphere and are a special case of the R<sub>x</sub>-gate.

Before running on real IBM quantum hardware, all single-qubit operations are compiled down to $I$, $X$, $SX$ and $R_{z}$. For this reason they are sometimes called the _physical gates_.

## 8. Additional resources

You can find a community-created cheat-sheet with some of the common quantum gates, and their properties [here](https://raw.githubusercontent.com/qiskit-community/qiskit-textbook/main/content/ch-states/supplements/single-gates-cheatsheet.pdf).
```
import qiskit.tools.jupyter
%qiskit_version_table
```
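As a quick numerical supplement to the exercises above (this check is not part of the original text), the identities $HZH = X$ and $SS = Z$ can be verified directly with NumPy:

```
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
S = np.array([[1, 0], [0, 1j]])

print(np.allclose(H @ Z @ H, X))  # True: HZH acts as an X-gate
print(np.allclose(S @ S, Z))      # True: two S-gates make a Z-gate
```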
# Colombian Minimum Wage Analysis and Visualizations

## Libraries

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pylab import rcParams
import seaborn as sn
from statsmodels.tsa.stattools import adfuller
%matplotlib inline
```

## Reading the Data

```
df_TRM=pd.read_csv("Data\Tasa_de_Cambio_Representativa_del_Mercado-_TRM.csv")
df_min_wage=pd.read_excel("Data\SLR_Serie historica IQY.xlsx",header=5)
```

## Cleaning the Data

Cleaning the Colombian minimum wage data

```
df_min_wage.head()
df_min_wage.drop(columns=["Unnamed: 5","Salario mínimo diario (COP)","Decretos del Gobierno Nacional"],inplace=True)
df_min_wage.rename(columns={"Año (aaaa)":"Year","Salario mínimo mensual (COP)":"Monthly Minimum Wage (COP)","Variación porcentual anual %":"Yearly Percentage Variation %"},inplace=True)
df_min_wage.drop(df_min_wage.index[39:],inplace=True)
df_min_wage.head()
```

Cleaning the Representative Market Exchange Rate data (TRM, its Spanish abbreviation)

```
df_TRM.head()
df_TRM.drop(columns="UNIDAD",inplace=True)
df_TRM.rename(columns={"VALOR":"Value","VIGENCIADESDE":"Valid Since","VIGENCIAHASTA":"Valid until"},inplace=True)
df_TRM["Valid Since"]=pd.to_datetime(df_TRM["Valid Since"],dayfirst=True)
df_TRM["Valid until"]=pd.to_datetime(df_TRM["Valid until"],dayfirst=True)
# Values that stayed valid for more than one day indicate an inaccurate record; the value should change daily.
df_TRM["Date"]=df_TRM["Valid Since"]
df_TRM.drop(columns=["Valid Since","Valid until"],inplace=True)
df_TRM.head()
```

## Exploring the Data

### TRM Visualization

```
rcParams["figure.figsize"]=15,10
plot=df_TRM.plot(x="Date",y="Value")
plt.title("USD to COP Time Series",fontsize=15)
plt.ylabel("USD to COP",fontsize=15)
plt.xlabel("Date",fontsize=15)
plt.legend(fontsize=15)
ax = plt.gca()
```

This plot confirms that the data has an aggressive upward trend, meaning the Colombian peso has severely depreciated in comparison to the US dollar.

### Minimum Wage Visualization

```
rcParams['figure.figsize']=25,10
fig, axs = plt.subplots(2,1)
df_min_wage.plot(x="Year",y=["Monthly Minimum Wage (COP)","Yearly Percentage Variation %"],xlim=[1983,2023],subplots=True,color=["tab:blue","red"],ax=axs)
axs[0].bar(x=df_min_wage["Year"],height=df_min_wage["Monthly Minimum Wage (COP)"])
axs[0].get_yaxis().get_major_formatter().set_scientific(False)
axs[0].locator_params(axis="x", nbins=43)
axs[0].locator_params(axis="y", nbins=20)
axs[1].locator_params(axis="x", nbins=43)
fig.suptitle("Colombian Minimum Wage (1983-2022)",fontsize=25)
axs[0].set_title("Monthly Minimum Wage (COP) vs Time",fontsize=15)
axs[1].set_title("Yearly Percentage Variation % vs Time",fontsize=15)
```

We can see that the Colombian minimum wage increases each year, but the percentage increase has a decreasing tendency. In other words, each year the Colombian minimum wage increases by less.
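One simple way to quantify the "increases less each year" observation is to average the yearly percentage variation per decade (a small sketch using the columns defined above, assuming the "Yearly Percentage Variation %" column is numeric):

```
# Average yearly minimum-wage increase (%) per decade
decade_avg = (
    df_min_wage
    .assign(Decade=(df_min_wage["Year"] // 10) * 10)
    .groupby("Decade")["Yearly Percentage Variation %"]
    .mean()
)
print(decade_avg)
```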
### Analysis

```
df_TRM["Year"]=df_TRM["Date"].dt.year
df_TRM.head()

df_TRM_grouped=df_TRM.drop(columns="Date")
df_TRM_grouped=df_TRM_grouped.groupby(by="Year").max()
df_TRM_grouped.reset_index(inplace=True)
df_TRM_grouped.rename(columns={"Value":"USD to COP"},inplace=True)
df_TRM_grouped.head()

df_joined=pd.merge(df_TRM_grouped,df_min_wage,on="Year")
df_joined["Monthly Minimum Wage (USD)"]=df_joined["Monthly Minimum Wage (COP)"]/df_joined["USD to COP"]
df_joined.head()

rcParams['figure.figsize']=25,10
fig, axs = plt.subplots(2,1)
df_joined.plot(x="Year",y=["Monthly Minimum Wage (COP)","Monthly Minimum Wage (USD)"],xlim=[1990,2023],subplots=True,color=["tab:blue","tab:red"],ax=axs)
axs[0].bar(x=df_joined["Year"],height=df_joined["Monthly Minimum Wage (COP)"])
axs[0].get_yaxis().get_major_formatter().set_scientific(False)
axs[0].locator_params(axis="x", nbins=43)
axs[0].locator_params(axis="y", nbins=20)
axs[1].bar(x=df_joined["Year"],height=df_joined["Monthly Minimum Wage (USD)"],color="tab:red")
axs[1].locator_params(axis="x", nbins=43)
fig.suptitle("Colombian Minimum Wage (1991-2022)",fontsize=25)
axs[0].set_title("Monthly Minimum Wage (COP) vs Time",fontsize=15)
axs[1].set_title("Monthly Minimum Wage (USD) vs Time",fontsize=15)
fig.patch.set_facecolor('white')
```

In this visualization it is clear that, while the Colombian minimum wage in Colombian pesos shows an upward trend, the depreciation of the Colombian peso against the US dollar causes the minimum wage expressed in US dollars to fall in some years. Thus, although in nominal (COP) terms the Colombian minimum wage increased every year from 1991 to 2022, this is not true relative to US dollars, so the purchasing power of Colombians has not grown in proportion to the apparent increases in the minimum wage.
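To make the "falls in some years" point explicit, one can count the years in which the USD-denominated minimum wage actually decreased (a minimal sketch based on `df_joined` as built above):

```
usd_change = df_joined["Monthly Minimum Wage (USD)"].pct_change()
falling_years = df_joined.loc[usd_change < 0, "Year"]
print(f"{len(falling_years)} of {len(df_joined) - 1} year-over-year changes were negative")
print(falling_years.tolist())
```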
# Similarity Queries using Annoy Tutorial This tutorial is about using the ([Annoy Approximate Nearest Neighbors Oh Yeah](https://github.com/spotify/annoy "Link to annoy repo")) library for similarity queries with a Word2Vec model built with gensim. ## Why use Annoy? The current implementation for finding k nearest neighbors in a vector space in gensim has linear complexity via brute force in the number of indexed documents, although with extremely low constant factors. The retrieved results are exact, which is an overkill in many applications: approximate results retrieved in sub-linear time may be enough. Annoy can find approximate nearest neighbors much faster. ## Prerequisites Additional libraries needed for this tutorial: - annoy - psutil - matplotlib ## Outline 1. Download Text8 Corpus 2. Build Word2Vec Model 3. Construct AnnoyIndex with model & make a similarity query 4. Verify & Evaluate performance 5. Evaluate relationship of `num_trees` to initialization time and accuracy 6. Work with Google's word2vec C formats ``` # pip install watermark %reload_ext watermark %watermark -v -m -p gensim,numpy,scipy,psutil,matplotlib ``` ### 1. Download Text8 Corpus ``` import os.path if not os.path.isfile('text8'): !wget -c http://mattmahoney.net/dc/text8.zip !unzip text8.zip ``` #### Import & Set up Logging I'm not going to set up logging due to the verbose input displaying in notebooks, but if you want that, uncomment the lines in the cell below. ``` LOGS = False if LOGS: import logging logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO) ``` ### 2. Build Word2Vec Model ``` from gensim.models import Word2Vec, KeyedVectors from gensim.models.word2vec import Text8Corpus # Using params from Word2Vec_FastText_Comparison params = { 'alpha': 0.05, 'size': 100, 'window': 5, 'iter': 5, 'min_count': 5, 'sample': 1e-4, 'sg': 1, 'hs': 0, 'negative': 5 } model = Word2Vec(Text8Corpus('text8'), **params) print(model) ``` See the [Word2Vec tutorial](word2vec.ipynb) for how to initialize and save this model. 
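Under the hood, gensim's `AnnoyIndexer` (used in the next section) builds its index with the `annoy` package. Purely for illustration, here is a minimal sketch of doing the same thing directly with the raw `annoy` API, assuming the `model` trained above and the gensim 3.x attributes (`wv.syn0norm`, `wv.index2word`) that this notebook already relies on.

```
from annoy import AnnoyIndex

model.init_sims()                        # populate the L2-normalised vectors
dims = model.wv.syn0norm.shape[1]        # vector dimensionality (100 here)

raw_index = AnnoyIndex(dims, 'angular')  # 'angular' corresponds to cosine distance
for item_id, vec in enumerate(model.wv.syn0norm):
    raw_index.add_item(item_id, vec)
raw_index.build(10)                      # 10 trees; more trees -> better recall, bigger index

# Approximate neighbours of the first word in the vocabulary
word_id = 0
print(model.wv.index2word[word_id])
print([model.wv.index2word[i] for i in raw_index.get_nns_by_item(word_id, 5)])
```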
#### Comparing the traditional implementation and the Annoy approximation

```
# Set up the model and vector that we are using in the comparison
from gensim.similarities.index import AnnoyIndexer

model.init_sims()
annoy_index = AnnoyIndexer(model, 100)

# Dry run to make sure both indices are fully in RAM
vector = model.wv.syn0norm[0]
model.most_similar([vector], topn=5, indexer=annoy_index)
model.most_similar([vector], topn=5)

import time
import numpy as np

def avg_query_time(annoy_index=None, queries=1000):
    """
    Average query time of a most_similar method over 1000 random queries,
    uses annoy if given an indexer
    """
    total_time = 0
    for _ in range(queries):
        rand_vec = model.wv.syn0norm[np.random.randint(0, len(model.wv.vocab))]
        start_time = time.clock()
        model.most_similar([rand_vec], topn=5, indexer=annoy_index)
        total_time += time.clock() - start_time
    return total_time / queries

queries = 10000

gensim_time = avg_query_time(queries=queries)
annoy_time = avg_query_time(annoy_index, queries=queries)
print("Gensim (s/query):\t{0:.5f}".format(gensim_time))
print("Annoy (s/query):\t{0:.5f}".format(annoy_time))
speed_improvement = gensim_time / annoy_time
print("\nAnnoy is {0:.2f} times faster on average on this particular run".format(speed_improvement))
```

**This speedup factor is by no means constant** and will vary greatly from run to run: it is particular to this data set, BLAS setup, Annoy parameters (as tree size increases, the speedup factor decreases), machine specifications, among other factors.

>**Note**: Initialization time for the Annoy indexer was not included in the times. The optimal k-NN algorithm for you to use will depend on how many queries you need to make and the size of the corpus. If you are making very few similarity queries, the time taken to initialize the Annoy indexer will be longer than the time it would take the brute-force method to retrieve results. If you are making many queries, however, the time it takes to initialize the Annoy indexer will be made up for by the incredibly fast retrieval times once the indexer has been initialized.

>**Note**: Gensim's `most_similar` method uses numpy operations in the form of a dot product, whereas Annoy's method doesn't. If numpy on your machine is using one of the BLAS libraries like ATLAS or LAPACK, it'll run on multiple cores (only if your machine has multicore support). Check the [SciPy Cookbook](http://scipy-cookbook.readthedocs.io/items/ParallelProgramming.html) for more details.

## 3. Construct AnnoyIndex with model & make a similarity query

### Creating an indexer
An instance of `AnnoyIndexer` needs to be created in order to use Annoy in gensim. The `AnnoyIndexer` class is located in `gensim.similarities.index`.

`AnnoyIndexer()` takes two parameters:

**`model`**: A `Word2Vec` or `Doc2Vec` model.

**`num_trees`**: A positive integer. `num_trees` affects the build time and the index size. **A larger value will give more accurate results, but larger indexes.** More information on what trees in Annoy do can be found [here](https://github.com/spotify/annoy#how-does-it-work). The relationship between `num_trees`, build time, and accuracy will be investigated later in the tutorial.

Now that we are ready to make a query, let's find the top 5 most similar words to "science" in the Text8 corpus. To make a similarity query we call `Word2Vec.most_similar` like we would traditionally, but with an added parameter, `indexer`. The only supported indexer in gensim as of now is Annoy.
``` # 100 trees are being used in this example annoy_index = AnnoyIndexer(model, 100) # Derive the vector for the word "science" in our model vector = model["science"] # The instance of AnnoyIndexer we just created is passed approximate_neighbors = model.most_similar([vector], topn=11, indexer=annoy_index) # Neatly print the approximate_neighbors and their corresponding cosine similarity values print("Approximate Neighbors") for neighbor in approximate_neighbors: print(neighbor) normal_neighbors = model.most_similar([vector], topn=11) print("\nNormal (not Annoy-indexed) Neighbors") for neighbor in normal_neighbors: print(neighbor) ``` #### Analyzing the results The closer the cosine similarity of a vector is to 1, the more similar that word is to our query, which was the vector for "science". There are some differences in the ranking of similar words and the set of words included within the 10 most similar words. ### 4. Verify & Evaluate performance #### Persisting Indexes You can save and load your indexes from/to disk to prevent having to construct them each time. This will create two files on disk, _fname_ and _fname.d_. Both files are needed to correctly restore all attributes. Before loading an index, you will have to create an empty AnnoyIndexer object. ``` fname = '/tmp/mymodel.index' # Persist index to disk annoy_index.save(fname) # Load index back if os.path.exists(fname): annoy_index2 = AnnoyIndexer() annoy_index2.load(fname) annoy_index2.model = model # Results should be identical to above vector = model["science"] approximate_neighbors2 = model.most_similar([vector], topn=11, indexer=annoy_index2) for neighbor in approximate_neighbors2: print(neighbor) assert approximate_neighbors == approximate_neighbors2 ``` Be sure to use the same model at load that was used originally, otherwise you will get unexpected behaviors. #### Save memory by memory-mapping indices saved to disk Annoy library has a useful feature that indices can be memory-mapped from disk. It saves memory when the same index is used by several processes. Below are two snippets of code. First one has a separate index for each process. The second snipped shares the index between two processes via memory-mapping. The second example uses less total RAM as it is shared. ``` # Remove verbosity from code below (if logging active) if LOGS: logging.disable(logging.CRITICAL) from multiprocessing import Process import os import psutil ``` #### Bad Example: Two processes load the Word2vec model from disk and create there own Annoy indices from that model. ``` %%time model.save('/tmp/mymodel.pkl') def f(process_id): print('Process Id: {}'.format(os.getpid())) process = psutil.Process(os.getpid()) new_model = Word2Vec.load('/tmp/mymodel.pkl') vector = new_model["science"] annoy_index = AnnoyIndexer(new_model,100) approximate_neighbors = new_model.most_similar([vector], topn=5, indexer=annoy_index) print('\nMemory used by process {}: {}\n---'.format(os.getpid(), process.memory_info())) # Creating and running two parallel process to share the same index file. p1 = Process(target=f, args=('1',)) p1.start() p1.join() p2 = Process(target=f, args=('2',)) p2.start() p2.join() ``` #### Good example. 
Two processes load both the Word2vec model and index from disk and memory-map the index ``` %%time model.save('/tmp/mymodel.pkl') def f(process_id): print('Process Id: {}'.format(os.getpid())) process = psutil.Process(os.getpid()) new_model = Word2Vec.load('/tmp/mymodel.pkl') vector = new_model["science"] annoy_index = AnnoyIndexer() annoy_index.load('/tmp/mymodel.index') annoy_index.model = new_model approximate_neighbors = new_model.most_similar([vector], topn=5, indexer=annoy_index) print('\nMemory used by process {}: {}\n---'.format(os.getpid(), process.memory_info())) # Creating and running two parallel process to share the same index file. p1 = Process(target=f, args=('1',)) p1.start() p1.join() p2 = Process(target=f, args=('2',)) p2.start() p2.join() ``` ### 5. Evaluate relationship of `num_trees` to initialization time and accuracy ``` import matplotlib.pyplot as plt %matplotlib inline ``` #### Build dataset of Initialization times and accuracy measures ``` exact_results = [element[0] for element in model.most_similar([model.wv.syn0norm[0]], topn=100)] x_values = [] y_values_init = [] y_values_accuracy = [] for x in range(1, 300, 10): x_values.append(x) start_time = time.time() annoy_index = AnnoyIndexer(model, x) y_values_init.append(time.time() - start_time) approximate_results = model.most_similar([model.wv.syn0norm[0]], topn=100, indexer=annoy_index) top_words = [result[0] for result in approximate_results] y_values_accuracy.append(len(set(top_words).intersection(exact_results))) ``` #### Plot results ``` plt.figure(1, figsize=(12, 6)) plt.subplot(121) plt.plot(x_values, y_values_init) plt.title("num_trees vs initalization time") plt.ylabel("Initialization time (s)") plt.xlabel("num_trees") plt.subplot(122) plt.plot(x_values, y_values_accuracy) plt.title("num_trees vs accuracy") plt.ylabel("% accuracy") plt.xlabel("num_trees") plt.tight_layout() plt.show() ``` ##### Initialization: Initialization time of the annoy indexer increases in a linear fashion with num_trees. Initialization time will vary from corpus to corpus, in the graph above the lee corpus was used ##### Accuracy: In this dataset, the accuracy seems logarithmically related to the number of trees. We see an improvement in accuracy with more trees, but the relationship is nonlinear. ### 6. Work with Google word2vec files Our model can be exported to a word2vec C format. There is a binary and a plain text word2vec format. Both can be read with a variety of other software, or imported back into gensim as a `KeyedVectors` object. ``` # To export our model as text model.wv.save_word2vec_format('/tmp/vectors.txt', binary=False) from smart_open import smart_open # View the first 3 lines of the exported file # The first line has the total number of entries and the vector dimension count. # The next lines have a key (a string) followed by its vector. 
with smart_open('/tmp/vectors.txt') as myfile: for i in range(3): print(myfile.readline().strip()) # To import a word2vec text model wv = KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False) # To export our model as binary model.wv.save_word2vec_format('/tmp/vectors.bin', binary=True) # To import a word2vec binary model wv = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True) # To create and save Annoy Index from a loaded `KeyedVectors` object (with 100 trees) annoy_index = AnnoyIndexer(wv, 100) annoy_index.save('/tmp/mymodel.index') # Load and test the saved word vectors and saved annoy index wv = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True) annoy_index = AnnoyIndexer() annoy_index.load('/tmp/mymodel.index') annoy_index.model = wv vector = wv["cat"] approximate_neighbors = wv.most_similar([vector], topn=11, indexer=annoy_index) # Neatly print the approximate_neighbors and their corresponding cosine similarity values print("Approximate Neighbors") for neighbor in approximate_neighbors: print(neighbor) normal_neighbors = wv.most_similar([vector], topn=11) print("\nNormal (not Annoy-indexed) Neighbors") for neighbor in normal_neighbors: print(neighbor) ``` ### Recap In this notebook we used the Annoy module to build an indexed approximation of our word embeddings. To do so, we did the following steps: 1. Download Text8 Corpus 2. Build Word2Vec Model 3. Construct AnnoyIndex with model & make a similarity query 4. Verify & Evaluate performance 5. Evaluate relationship of `num_trees` to initialization time and accuracy 6. Work with Google's word2vec C formats
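As a closing aside, the accuracy measurement from step 5 can be wrapped into a small reusable helper. This is only a sketch, assuming the `model` and an `AnnoyIndexer` instance like the ones built in this notebook.

```
def annoy_recall(model, annoy_index, word, topn=100):
    """Fraction of the exact top-n neighbours that the Annoy index also returns."""
    vector = model[word]
    exact = {w for w, _ in model.most_similar([vector], topn=topn)}
    approx = {w for w, _ in model.most_similar([vector], topn=topn, indexer=annoy_index)}
    return len(exact & approx) / float(topn)

# e.g. annoy_recall(model, annoy_index, 'science')
```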
# Training with Cloud Machine Learning Engine This notebook is the second of a set of steps to run machine learning on the cloud. In this step, we will use the data and associated analysis metadata prepared in the [previous notebook](./2 Service Preprocess.ipynb) and continue with training a model. ## Workspace Setup The first step is to setup the workspace that we will use within this notebook - the python libraries, and the Google Cloud Storage bucket that will be used to contain the inputs and outputs produced over the course of the steps. ``` import google.datalab as datalab import google.datalab.ml as ml import mltoolbox.regression.dnn as regression import os import time ``` The storage bucket was created in the previous notebook. We'll re-declare it here, so we can use it. ``` storage_bucket = 'gs://' + datalab.Context.default().project_id + '-datalab-workspace/' storage_region = 'us-central1' workspace_path = os.path.join(storage_bucket, 'census') ``` ### Data and DataSets We'll also enumerate our data and declare DataSets for use during training. ``` !gsutil ls -r {workspace_path}/data train_data_path = os.path.join(workspace_path, 'data/train.csv') eval_data_path = os.path.join(workspace_path, 'data/eval.csv') schema_path = os.path.join(workspace_path, 'data/schema.json') train_data = ml.CsvDataSet(file_pattern=train_data_path, schema_file=schema_path) eval_data = ml.CsvDataSet(file_pattern=eval_data_path, schema_file=schema_path) ``` ### Data Analysis We had previously analyzed training data to produce statistics and vocabularies. These will be used during training. ``` analysis_path = os.path.join(workspace_path, 'analysis') !gsutil ls {analysis_path} ``` ## Training Training in cloud is accomplished by submitting jobs to Cloud Machine Learning Engine. When submitting jobs, it is a good idea to name each job, so it can be looked up easily (names do need to be unique within the scope of a project). Additionally you'll want to pick a region where your job will run. Usually this is in the same region as where your training data resides. Finally, you'll want to pick a scale tier. The [documentation](https://cloud.google.com/ml/reference/rest/v1beta1/projects.jobs#ScaleTier) describes different scale tiers or custom cluster setups you can use with ML Engine. For the purposes of this sample, a simple single node cluster suffices. 
``` config = ml.CloudTrainingConfig(region=storage_region, scale_tier='BASIC') training_job_name = 'census_regression_' + str(int(time.time())) training_path = os.path.join(workspace_path, 'training') features = { "WAGP": {"transform": "target"}, "SERIALNO": {"transform": "key"}, "AGEP": {"transform": "embedding", "embedding_dim": 2}, # Age "COW": {"transform": "one_hot"}, # Class of worker "ESP": {"transform": "embedding", "embedding_dim": 2}, # Employment status of parents "ESR": {"transform": "one_hot"}, # Employment status "FOD1P": {"transform": "embedding", "embedding_dim": 3}, # Field of degree "HINS4": {"transform": "one_hot"}, # Medicaid "INDP": {"transform": "embedding", "embedding_dim": 5}, # Industry "JWMNP": {"transform": "embedding", "embedding_dim": 2}, # Travel time to work "JWTR": {"transform": "one_hot"}, # Transportation "MAR": {"transform": "one_hot"}, # Marital status "POWPUMA": {"transform": "one_hot"}, # Place of work "PUMA": {"transform": "one_hot"}, # Area code "RAC1P": {"transform": "one_hot"}, # Race "SCHL": {"transform": "one_hot"}, # School "SCIENGRLP": {"transform": "one_hot"}, # Science "SEX": {"transform": "one_hot"}, "WKW": {"transform": "one_hot"} # Weeks worked } ``` NOTE: To facilitate re-running this notebook, any previous training outputs are first deleted, if they exist. ``` !gsutil rm -rf {training_path} ``` **NOTE**: The job submitted below can take a few minutes to complete. Once you have submitted, you can continue with more steps in the notebook, until the call to `job.wait()`. ``` job = regression.train_async(train_dataset=train_data, eval_dataset=eval_data, features=features, analysis_dir=analysis_path, output_dir=training_path, max_steps=2000, layer_sizes=[5, 5, 5], job_name=training_job_name, cloud=config) ``` When a job is submitted to ML Engine, a few things happen. The code for the job is staged in Google Cloud Storage, and a job definition is submitted to the service. The service queues the job, and thereafter the job can be monitored in the console (status and logs), as well as using TensorBoard. The service also provisions computation resources based on the choice of scale tier, installs your code package and its dependencies, and starts your training process. Thereafter, the service monitors the job for completion, and retries if necessary. The first step in the process - launching a training cluster - can take a few minutes. It is recommended to use `BASIC` tier to first validate jobs on cloud and use that for faster iteration to benefit from quicker job starts, and then launch larger scaled jobs where the overhead of launching a cluster is small relative to the life of the job itself. You can check the progress of the job using the link to the console page above, as well as its logs. ### TensorBoard TensorBoard can be launched against your training output directory. As summaries are produced from your running job, they will show up in TensorBoard. ``` tensorboard_pid = ml.TensorBoard.start(training_path) ``` ### The Trained Model Once training is completed, the resulting trained model is saved and placed into Cloud Storage. ``` # Wait for the job to be complete before proceeding. job.wait() !gsutil ls -r {training_path}/model ``` ### Cleanup ``` ml.TensorBoard.stop(tensorboard_pid) ``` # Next Steps Once a model has been created, the next step is to evaluate it, possibly against multiple evaluation steps. We'll continue with this step in the [next notebook](./4 Service Evaluate.ipynb).
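As an aside on the feature definition used in this notebook: the transform dictionary is plain Python, so it can be assembled programmatically as the number of columns grows. A small sketch (no additional APIs assumed):

```
def build_features(target, key, one_hot_cols, embedding_cols):
    """Assemble the transform dictionary passed to the training call above."""
    features = {target: {"transform": "target"}, key: {"transform": "key"}}
    features.update({col: {"transform": "one_hot"} for col in one_hot_cols})
    features.update({col: {"transform": "embedding", "embedding_dim": dim}
                     for col, dim in embedding_cols.items()})
    return features

features = build_features(
    target="WAGP", key="SERIALNO",
    one_hot_cols=["COW", "ESR", "HINS4", "JWTR", "MAR", "POWPUMA", "PUMA",
                  "RAC1P", "SCHL", "SCIENGRLP", "SEX", "WKW"],
    embedding_cols={"AGEP": 2, "ESP": 2, "FOD1P": 3, "INDP": 5, "JWMNP": 2})
```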
``` BRANCH = 'r1.0.0rc1' """ You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab. Instructions for setting up Colab are as follows: 1. Open a new Python 3 notebook. 2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL) 3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator) 4. Run this cell to set up dependencies. """ # If you're using Google Colab and not running locally, run this cell # install NeMo !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[nlp] from nemo.utils.exp_manager import exp_manager from nemo.collections import nlp as nemo_nlp import os import wget import torch import pytorch_lightning as pl from omegaconf import OmegaConf ``` # Task Description Given a question and a context both in natural language, predict the span within the context with a start and end position which indicates the answer to the question. For every word in our training dataset we’re going to predict: - likelihood this word is the start of the span - likelihood this word is the end of the span We are using a pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) encoder with 2 span prediction heads for prediction start and end position of the answer. The span predictions are token classifiers consisting of a single linear layer. # Dataset This model expects the dataset to be in [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, e.g. a JSON file for each dataset split. In the following we will show example for a training file. Each title has one or multiple paragraph entries, each consisting of the text - "context", and question-answer entries. Each question-answer entry has: * a question * a globally unique id * a boolean flag "is_impossible" which shows if the question is answerable or not * in case the question is answerable one answer entry, which contains the text span and its starting character index in the context. If not answerable, the "answers" list is empty The evaluation files (for validation and testing) follow the above format except for it can provide more than one answer to the same question. The inference file follows the above format except for it does not require the "answers" and "is_impossible" keywords. ``` { "data": [ { "title": "Super_Bowl_50", "paragraphs": [ { "context": "Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24\u201310 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. 
As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50.", "qas": [ { "question": "Where did Super Bowl 50 take place?", "is_impossible": "false", "id": "56be4db0acb8001400a502ee", "answers": [ { "answer_start": "403", "text": "Santa Clara, California" } ] }, { "question": "What was the winning score of the Super Bowl 50?", "is_impossible": "true", "id": "56be4db0acb8001400a502ez", "answers": [ ] } ] } ] } ] } ... ``` ## Download the data In this notebook we are going download the [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) dataset to showcase how to do training and inference. There are two datasets, SQuAD1.0 and SQuAD2.0. SQuAD 1.1, the previous version of the SQuAD dataset, contains 100,000+ question-answer pairs on 500+ articles. SQuAD2.0 dataset combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To download both datasets, we use [NeMo/examples/nlp/question_answering/get_squad.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/question_answering/get_squad.py). ``` # set the following paths DATA_DIR = "PATH_TO_DATA" WORK_DIR = "PATH_TO_CHECKPOINTS_AND_LOGS" ## download get_squad.py script to download and preprocess the SQuAD data os.makedirs(WORK_DIR, exist_ok=True) if not os.path.exists(WORK_DIR + '/get_squad.py'): print('Downloading get_squad.py...') wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/question_answering/get_squad.py', WORK_DIR) else: print ('get_squad.py already exists') # download and preprocess the data ! python $WORK_DIR/get_squad.py --destDir $DATA_DIR ``` after execution of the above cell, your data folder will contain a subfolder "squad" the following 4 files for training and evaluation - v1.1/train-v1.1.json - v1.1/dev-v1.1.json - v2.0/train-v2.0.json - v2.0/dev-v2.0.json ``` ! ls -LR {DATA_DIR}/squad ``` ## Data preprocessing The input into the model is the concatenation of two tokenized sequences: " [CLS] query [SEP] context [SEP]". This is the tokenization used for BERT, i.e. [WordPiece](https://arxiv.org/pdf/1609.08144.pdf) Tokenizer, which uses the [Google's BERT vocabulary](https://github.com/google-research/bert). This tokenizer is configured with `model.tokenizer.tokenizer_name=bert-base-uncased` and is automatically instantiated using [Huggingface](https://huggingface.co/)'s API. The benefit of this tokenizer is that this is compatible with a pretrained BERT model, from which we can finetune instead of training the question answering model from scratch. However, we also support other tokenizers, such as `model.tokenizer.tokenizer_name=sentencepiece`. Unlike the BERT WordPiece tokenizer, the [SentencePiece](https://github.com/google/sentencepiece) tokenizer model needs to be first created from a text file. See [02_NLP_Tokenizers.ipynb](https://colab.research.google.com/github/NVIDIA/NeMo/blob/main/tutorials/nlp/02_NLP_Tokenizers.ipynb) for more details on how to use NeMo Tokenizers. # Data and Model Parameters Note, this is only an example to showcase usage and is not optimized for accuracy. 
In the following, we will download and adjust the model configuration to create a toy example, where we only use a small fraction of the original dataset. In order to train the full SQuAD model, leave the model parameters from the configuration file unchanged; this sets NUM_SAMPLES=-1 so the entire dataset is used, which will make training significantly slower. We recommend using the command-line training script with multiple GPUs to accelerate this.

```
# This is the model configuration file that we will download, do not change this
MODEL_CONFIG = "question_answering_squad_config.yaml"

# model parameters, play with these
BATCH_SIZE = 12
MAX_SEQ_LENGTH = 384

# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "bert-base-uncased"
TOKENIZER_NAME = "bert-base-uncased" # tokenizer name

# Number of data examples used for training, validation, test and inference
TRAIN_NUM_SAMPLES = VAL_NUM_SAMPLES = TEST_NUM_SAMPLES = 5000
INFER_NUM_SAMPLES = 5

TRAIN_FILE = f"{DATA_DIR}/squad/v1.1/train-v1.1.json"
VAL_FILE = f"{DATA_DIR}/squad/v1.1/dev-v1.1.json"
TEST_FILE = f"{DATA_DIR}/squad/v1.1/dev-v1.1.json"
INFER_FILE = f"{DATA_DIR}/squad/v1.1/dev-v1.1.json"

INFER_PREDICTION_OUTPUT_FILE = "output_prediction.json"
INFER_NBEST_OUTPUT_FILE = "output_nbest.json"

# training parameters
LEARNING_RATE = 0.00003

# number of epochs
MAX_EPOCHS = 1
```

# Model Configuration

The model is defined in a config file which declares multiple important sections. They are:

- **model**: All arguments that relate to the Model - language model, span prediction, optimizer and schedulers, datasets and any other related information

- **trainer**: Any argument to be passed to PyTorch Lightning

```
# download the model's default configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
    print('Downloading config file...')
    wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/question_answering/conf/{MODEL_CONFIG}', config_dir)
else:
    print('config file already exists')

# this line will print the entire default config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
print(OmegaConf.to_yaml(config))
```

## Setting up data within the config

Among other things, the config file contains dictionaries called dataset, train_ds, validation_ds and test_ds. These are configurations used to set up the Dataset and DataLoaders of the corresponding config.

Specify data paths using `model.train_ds.file`, `model.validation_ds.file` and `model.test_ds.file`.

Let's now add the data paths to the config.

```
config.model.train_ds.file = TRAIN_FILE
config.model.validation_ds.file = VAL_FILE
config.model.test_ds.file = TEST_FILE

config.model.train_ds.num_samples = TRAIN_NUM_SAMPLES
config.model.validation_ds.num_samples = VAL_NUM_SAMPLES
config.model.test_ds.num_samples = TEST_NUM_SAMPLES

config.model.tokenizer.tokenizer_name = TOKENIZER_NAME
```

# Building the PyTorch Lightning Trainer

NeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem!

Let's first instantiate a Trainer object!
``` # lets modify some trainer configs # checks if we have GPU available and uses it cuda = 1 if torch.cuda.is_available() else 0 config.trainer.gpus = cuda config.trainer.precision = 16 if torch.cuda.is_available() else 32 # For mixed precision training, use precision=16 and amp_level=O1 config.trainer.max_epochs = MAX_EPOCHS # Remove distributed training flags if only running on a single GPU or CPU config.trainer.accelerator = None print("Trainer config - \n") print(OmegaConf.to_yaml(config.trainer)) trainer = pl.Trainer(**config.trainer) ``` # Setting up a NeMo Experiment¶ NeMo has an experiment manager that handles logging and checkpointing for us, so let's use it! ``` config.exp_manager.exp_dir = WORK_DIR exp_dir = exp_manager(trainer, config.get("exp_manager", None)) # the exp_dir provides a path to the current experiment for easy access exp_dir = str(exp_dir) ``` # Using an Out-Of-Box Model ``` # list available pretrained models nemo_nlp.models.QAModel.list_available_models() # load pretained model pretrained_model_name="BERTBaseUncasedSQuADv1.1" model = nemo_nlp.models.QAModel.from_pretrained(model_name='BERTBaseUncasedSQuADv1.1') ``` # Model Training Before initializing the model, we might want to modify some of the model configs. ``` # complete list of supported BERT-like models nemo_nlp.modules.get_pretrained_lm_models_list() # add the specified above model parameters to the config config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL config.model.train_ds.batch_size = BATCH_SIZE config.model.validation_ds.batch_size = BATCH_SIZE config.model.test_ds.batch_size = BATCH_SIZE config.model.optim.lr = LEARNING_RATE print("Updated model config - \n") print(OmegaConf.to_yaml(config.model)) # initialize the model # dataset we'll be prepared for training and evaluation during model = nemo_nlp.models.QAModel(cfg=config.model, trainer=trainer) ``` ## Monitoring Training Progress Optionally, you can create a Tensorboard visualization to monitor training progress. ``` try: from google import colab COLAB_ENV = True except (ImportError, ModuleNotFoundError): COLAB_ENV = False # Load the TensorBoard notebook extension if COLAB_ENV: %load_ext tensorboard %tensorboard --logdir {exp_dir} else: print("To use tensorboard, please use this notebook in a Google Colab environment.") # start the training trainer.fit(model) ``` After training for 1 epoch, exact match on the evaluation data should be around 59.2%, F1 around 70.2%. # Evaluation To see how the model performs, let’s run evaluation on the test dataset. ``` model.setup_test_data(test_data_config=config.model.test_ds) trainer.test(model) ``` # Inference To use the model for creating predictions, let’s run inference on the unlabeled inference dataset. ``` # # store test prediction under the experiment output folder output_prediction_file = f"{exp_dir}/{INFER_PREDICTION_OUTPUT_FILE}" output_nbest_file = f"{exp_dir}/{INFER_NBEST_OUTPUT_FILE}" all_preds, all_nbests = model.inference(file=INFER_FILE, batch_size=5, num_samples=INFER_NUM_SAMPLES, output_nbest_file=output_nbest_file, output_prediction_file=output_prediction_file) for _, item in all_preds.items(): print(f"question: {item[0]} answer: {item[1]}") #The prediction file contains the predicted answer to each question id for the first TEST_NUM_SAMPLES. ! 
python -m json.tool ${exp_dir}/$INFER_PREDICTION_OUTPUT_FILE
```

If you have NeMo installed locally, you can also train the model with [NeMo/examples/nlp/question_answering/question_answering_squad.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/question_answering/question_answering_squad.py).

To run the training script, use:

`python question_answering_squad.py model.train_ds.file=TRAIN_FILE model.validation_ds.file=VAL_FILE model.test_ds.file=TEST_FILE`

To improve the performance of the model, train with multiple GPUs and a global batch size of 24. So if you use 8 GPUs with `trainer.gpus=8`, set `model.train_ds.batch_size=3`.
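If you prefer to inspect only a few predictions instead of pretty-printing the whole file with `json.tool`, here is a lighter-weight sketch (standard library only, assuming the prediction file is a JSON object mapping question ids to answers, as described above):

```
import itertools
import json
import os

with open(os.path.join(exp_dir, INFER_PREDICTION_OUTPUT_FILE)) as f:
    predictions = json.load(f)

# Print the first five (question id, predicted answer) pairs
for qid, answer in itertools.islice(predictions.items(), 5):
    print(f"{qid}: {answer}")
```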
<small><i>This notebook was put together by [Jake Vanderplas](http://www.vanderplas.com). Source and license info is on [GitHub](https://github.com/jakevdp/sklearn_tutorial/).</i></small>

# Supervised Learning In-Depth: Support Vector Machines

Previously we introduced supervised machine learning. There are many supervised learning algorithms available; here we'll go into brief detail on one of the most powerful and interesting methods: **Support Vector Machines (SVMs)**.

```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
```

## Motivating Support Vector Machines

Support Vector Machines (SVMs) are a powerful supervised learning algorithm used for **classification** or for **regression**. SVMs are a **discriminative** classifier: that is, they draw a boundary between clusters of data.

Let's show a quick example of support vector classification. First we need to create a dataset:

```
from sklearn.datasets.samples_generator import make_blobs
X, y = make_blobs(n_samples=50, centers=2,
                  random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring');
```

A discriminative classifier attempts to draw a line between the two sets of data. Immediately we see a problem: such a line is ill-posed! For example, we could come up with several possibilities which perfectly discriminate between the classes in this example:

```
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')

for m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:
    plt.plot(xfit, m * xfit + b, '-k')

plt.xlim(-1, 3.5);
```

These are three *very* different separators which perfectly discriminate between these samples. Depending on which you choose, a new data point will be classified almost entirely differently! How can we improve on this?

### Support Vector Machines: Maximizing the *Margin*

Support vector machines are one way to address this. What support vector machines do is not only draw a line, but consider a *region* about the line of some given width. Here's an example of what it might look like:

```
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')

for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
    yfit = m * xfit + b
    plt.plot(xfit, yfit, '-k')
    plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none', color='#AAAAAA', alpha=0.4)

plt.xlim(-1, 3.5);
```

Notice here that if we want to maximize this width, the middle fit is clearly the best. This is the intuition of **support vector machines**, which optimize a linear discriminant model in conjunction with a **margin** representing the perpendicular distance between the datasets.

#### Fitting a Support Vector Machine

Now we'll fit a Support Vector Machine Classifier to these points. While the mathematical details of the likelihood model are interesting, we'll let you read about those elsewhere. Instead, we'll just treat the scikit-learn algorithm as a black box which accomplishes the above task.
``` from sklearn.svm import SVC # "Support Vector Classifier" clf = SVC(kernel='linear') clf.fit(X, y) ``` To better visualize what's happening here, let's create a quick convenience function that will plot SVM decision boundaries for us: ``` def plot_svc_decision_function(clf, ax=None): """Plot the decision function for a 2D SVC""" if ax is None: ax = plt.gca() x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30) y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30) Y, X = np.meshgrid(y, x) P = np.zeros_like(X) for i, xi in enumerate(x): for j, yj in enumerate(y): P[i, j] = clf.decision_function([[xi, yj]]) # plot the margins ax.contour(X, Y, P, colors='k', levels=[-1, 0, 1], alpha=0.5, linestyles=['--', '-', '--']) plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring') plot_svc_decision_function(clf); ``` Notice that the dashed lines touch a couple of the points: these points are the pivotal pieces of this fit, and are known as the *support vectors* (giving the algorithm its name). In scikit-learn, these are stored in the ``support_vectors_`` attribute of the classifier: ``` plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring') plot_svc_decision_function(clf) plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1], s=200, facecolors='none'); ``` Let's use IPython's ``interact`` functionality to explore how the distribution of points affects the support vectors and the discriminative fit. (This is only available in IPython 2.0+, and will not work in a static view) ``` from ipywidgets import interact def plot_svm(N=10): X, y = make_blobs(n_samples=200, centers=2, random_state=0, cluster_std=0.60) X = X[:N] y = y[:N] clf = SVC(kernel='linear') clf.fit(X, y) plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring') plt.xlim(-1, 4) plt.ylim(-1, 6) plot_svc_decision_function(clf, plt.gca()) plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1], s=200, facecolors='none') interact(plot_svm, N=[10, 200], kernel='linear'); ``` Notice the unique thing about SVM is that only the support vectors matter: that is, if you moved any of the other points without letting them cross the decision boundaries, they would have no effect on the classification results! #### Going further: Kernel Methods Where SVM gets incredibly exciting is when it is used in conjunction with *kernels*. To motivate the need for kernels, let's look at some data which is not linearly separable: ``` from sklearn.datasets.samples_generator import make_circles X, y = make_circles(100, factor=.1, noise=.1) clf = SVC(kernel='linear').fit(X, y) plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring') plot_svc_decision_function(clf); ``` Clearly, no linear discrimination will ever separate these data. One way we can adjust this is to apply a **kernel**, which is some functional transformation of the input data. For example, one simple model we could use is a **radial basis function** ``` r = np.exp(-(X[:, 0] ** 2 + X[:, 1] ** 2)) ``` If we plot this along with our data, we can see the effect of it: ``` from mpl_toolkits import mplot3d def plot_3D(elev=30, azim=30): ax = plt.subplot(projection='3d') ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50, cmap='spring') ax.view_init(elev=elev, azim=azim) ax.set_xlabel('x') ax.set_ylabel('y') ax.set_zlabel('r') interact(plot_3D, elev=(-90, 90), azip=(-180, 180)); ``` We can see that with this additional dimension, the data becomes trivially linearly separable! This is a relatively simple kernel; SVM has a more sophisticated version of this kernel built-in to the process. 
This is accomplished by using ``kernel='rbf'``, short for *radial basis function*: ``` clf = SVC(kernel='rbf') clf.fit(X, y) plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring') plot_svc_decision_function(clf) plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1], s=200, facecolors='none'); ``` Here there are effectively $N$ basis functions: one centered at each point! Through a clever mathematical trick, this computation proceeds very efficiently using the "Kernel Trick", without actually constructing the matrix of kernel evaluations. We'll leave SVMs for the time being and take a look at another classification algorithm: Random Forests.
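To make the kernel trick slightly more concrete before moving on: the RBF kernel implicitly compares every pair of points via exp(-gamma * ||x - x'||^2). The sketch below builds that Gram matrix explicitly and feeds it to `SVC(kernel='precomputed')`, which is equivalent in spirit to `kernel='rbf'` (the gamma value here is an arbitrary choice for illustration).

```
from sklearn.metrics.pairwise import rbf_kernel

gamma = 1.0
K = rbf_kernel(X, X, gamma=gamma)             # (n_samples, n_samples) Gram matrix
clf_pre = SVC(kernel='precomputed').fit(K, y)

# Predicting new points requires their kernel values against the training set
K_new = rbf_kernel(X[:5], X, gamma=gamma)
print(clf_pre.predict(K_new))
print(y[:5])
```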
``` %load_ext autoreload %autoreload 2 import os import glob import datetime import subprocess print(datetime.datetime.now()) # need pygentoolbox.Tools also #dir(pygentoolbox.Tools) %matplotlib inline import matplotlib.pyplot as plt print(datetime.datetime.now()) path = '/media/sf_LinuxShare/Projects/Theresa/Hisat2/EV_Total_sRNA/Late/Pt_51_Mac/fastp/hisat2' bedfile = '52_Late_Ptiwi08RIP.34sRNAClusters.w100.d500.bed' bedfile = os.path.join(path, bedfile) filenames = [] for i in range(15, 42): name = f'190118_NB501473_A_L1-4_ADPF-56_AdapterTrimmed_R1_{i}bp.trim.Pt_51_Mac.sort.bam' filenames.append(os.path.join(path, name)) filenames.append(os.path.join(path, '190118_NB501473_A_L1-4_ADPF-56_AdapterTrimmed_R1_50bp.trim.Pt_51_Mac.sort.bam')) for f in filenames: cmd = f'samtools view -h -b -L {bedfile} {f}' outfile = '.'.join(f.split('.')[:-1] + ['Ptiwi08Clusters', 'bam']) cmd2 = f'samtools flagstat {outfile}' outfile2 = '.'.join(f.split('.')[:-1] + ['Ptiwi08Clusters', 'bam', 'flagstat']) with open(outfile, 'w') as OUT: # cmd = 'samtools sort %s > %s' % (bamfile, sortbamfile) ps = subprocess.Popen(cmd.split(), stdout=OUT) ps.wait() with open(outfile2, 'w') as OUT: # cmd = 'samtools sort %s > %s' % (bamfile, sortbamfile) ps = subprocess.Popen(cmd2.split(), stdout=OUT) ps.wait() print(datetime.datetime.now()) %load_ext autoreload %autoreload 2 import os import glob import datetime import subprocess print(datetime.datetime.now()) # need pygentoolbox.Tools also #dir(pygentoolbox.Tools) %matplotlib inline import matplotlib.pyplot as plt print(datetime.datetime.now()) path = '/media/sf_LinuxShare/Projects/Theresa/Hisat2/EV_Total_sRNA/Late/Pt_51_Mac/fastp/hisat2' bedfile = '52_Late_Ptiwi08RIP.34sRNAClusters.w100.d500.bed' bedfile = os.path.join(path, bedfile) filenames = [os.path.join(path, '190118_NB501473_A_L1-4_ADPF-56_AdapterTrimmed_R1_23And22bp.trim.Pt_51_Mac.sort.bam'), os.path.join(path, '190118_NB501473_A_L1-4_ADPF-56_AdapterTrimmed_R1_23And24bp.trim.Pt_51_Mac.sort.bam')] for f in filenames: cmd = f'samtools view -h -b -L {bedfile} {f}' outfile = '.'.join(f.split('.')[:-1] + ['Ptiwi08Clusters', 'bam']) cmd2 = f'samtools index {outfile}' cmd3 = f'samtools flagstat {outfile}' outfile3 = '.'.join(f.split('.')[:-1] + ['Ptiwi08Clusters', 'bam', 'flagstat']) cmd4 = f'samtools view -F 16 {outfile}' outfile4 = '.'.join(f.split('.')[:-1] + ['Ptiwi08Clusters', 'F', 'sam']) cmd5 = f'samtools view -f 16 {outfile}' outfile5 = '.'.join(f.split('.')[:-1] + ['Ptiwi08Clusters', 'R', 'sam']) with open(outfile, 'w') as OUT: # cmd = 'samtools sort %s > %s' % (bamfile, sortbamfile) ps = subprocess.Popen(cmd.split(), stdout=OUT) ps.wait() subprocess.call(cmd2.split()) with open(outfile3, 'w') as OUT: # cmd = 'samtools sort %s > %s' % (bamfile, sortbamfile) ps = subprocess.Popen(cmd3.split(), stdout=OUT) ps.wait() with open(outfile4, 'w') as OUT: # cmd = 'samtools sort %s > %s' % (bamfile, sortbamfile) ps = subprocess.Popen(cmd4.split(), stdout=OUT) ps.wait() with open(outfile5, 'w') as OUT: # cmd = 'samtools sort %s > %s' % (bamfile, sortbamfile) ps = subprocess.Popen(cmd5.split(), stdout=OUT) ps.wait() print(datetime.datetime.now()) ```
# Data exploration

1. Clean the data and fix any issues you see (missing values?). I think we should start working only with `application_train|test.csv`. If by the time we have a functional model there is enough time left, we can take a look at the rest of the data, i.e. `bureau.csv`, `previous_application.csv`, etc.
2. Look at the relationship between variables

## Load the data

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os

from google.colab import drive
from sklearn.preprocessing import LabelEncoder

drive.mount('/content/drive')

# List files in the data dir
os.listdir('drive/MyDrive/CS 249 Project/Data/')

# Load training data
train_df = pd.read_csv('drive/MyDrive/CS 249 Project/Data/application_train.csv')
print('Training data shape: ', train_df.shape)
train_df.head()
```

## EDA

Explore the data and address any issues.

### Is the data balanced?

Look at the target column and plot its distribution.

```
train_df.TARGET.plot.hist()
```

Clearly the data isn't balanced. The number of loans that were repaid is far greater than the number of loans that were not repaid. We don't need to address this issue right now though.

### Are there missing values?

```
missing_values = train_df.isna().sum()
missing_values_percent = missing_values*100/len(train_df)
miss_df = pd.concat(
    [missing_values.rename('missing_val'), missing_values_percent.rename('missing_val_percent')],
    axis=1
)
miss_df.sort_values(by='missing_val_percent', ascending=False).head(10)
```

## Which features are categorical?

```
train_df.dtypes.value_counts()
categorical_features = train_df.select_dtypes('object')
categorical_features.nunique()
```

Encode these categorical features. We can either use one-hot encoding (which we've used in the class) or label encoding. One-hot encoding seems to be the preferred method, with the only caveat that if there's a large number of categories for a feature, the number of one-hot encoded features can explode. A workaround is to use PCA or another dimensionality reduction method. Let's label encode the features with 2 categories and one-hot encode the ones with more than 2.

```
# Label encode the features with 2 categories
label_encoder = LabelEncoder()
for feat in categorical_features:
    if len(train_df[feat].unique()) <= 2:
        train_df[feat] = label_encoder.fit_transform(train_df[feat])

train_df.columns

# One-hot encode features with more than 2 categories
train_df = pd.get_dummies(train_df, drop_first=True)
train_df.columns
```

## Anomalies

According to the reference notebook, the feature `DAYS_BIRTH` has negative numbers because they are recorded relative to the current loan application. The feature is described as the client's age in days at the time of the application. It's easier to find anomalies if we transform to years.

```
# Convert DAYS_BIRTH to years
(train_df['DAYS_BIRTH'] / -365).describe()
```

Given the min and the max, there don't seem to be any outliers. Now let's look at `DAYS_EMPLOYED`. This feature is described as:

> How many days before the application the person started current employment.

```
train_df['DAYS_EMPLOYED'].describe()
train_df['DAYS_EMPLOYED'].plot.hist(alpha=0.5)
```

There's clearly a chunk of samples for this feature that aren't right. Since the feature counts days *before* the application it should be negative, so it makes no sense for these values to be a single huge positive number. A safe way to deal with this is to set those values to NaN and impute them later.
Before setting all anomalous values to NaN, check to see if the anomalous clients have any patterns of behavior in terms of credit default (higher or lower rates).

```
max_days_employed = train_df['DAYS_EMPLOYED'].max()
anom = train_df[train_df['DAYS_EMPLOYED'] == max_days_employed]
non_anom = train_df[train_df['DAYS_EMPLOYED'] != max_days_employed]

print(f'Value of days employed anomaly (aka max of days employed column):', max_days_employed)
print(f'The non-anomalies default on %0.2f%% of loans' % (100 * non_anom['TARGET'].mean()))
print(f'The anomalies default on %0.2f%% of loans' % (100 * anom['TARGET'].mean()))
print(f'There are %d anomalous days of employment' % len(anom))
```

Since the anomalous clients have a lower rate of default, we would like to capture this information in a separate column before clearing the anomalous values. We will fill in the anomalous values with NaN and create a boolean column to indicate whether the value was anomalous or not.

```
# Create an anomalous flag column
train_df['DAYS_EMPLOYED_ANOM'] = train_df["DAYS_EMPLOYED"] == max_days_employed

# How many samples have the max value as DAYS_EMPLOYED?
max_value_count = (train_df['DAYS_EMPLOYED'] == train_df['DAYS_EMPLOYED'].max()).sum()
print(f"Count of samples with max DAYS_EMPLOYED: {max_value_count}")

# Replace all these occurrences with NaN
train_df.DAYS_EMPLOYED.replace(
    to_replace=train_df.DAYS_EMPLOYED.max(),
    value=np.nan,
    inplace=True)

train_df.DAYS_EMPLOYED.plot.hist(alpha=0.5)
plt.xlabel('Days employed prior to application')

print('There are %d anomalies in the train data out of %d entries' % (train_df["DAYS_EMPLOYED_ANOM"].sum(), len(train_df)))
```

## Relationships between variables

With the `.corr` method we can use the Pearson correlation coefficient to find relationships between the features in the training set.

```
corr_matrix = train_df.corr().TARGET.sort_values()
print(f'Highest positive correlations:\n {corr_matrix.tail(10)}\n')
print(f'Highest negative correlations:\n {corr_matrix.head(10)}')
```

### Features with highest positive correlation: Effect of age on repayment

The feature with the highest positive correlation is `DAYS_BIRTH`. Let's look at it in more detail. Positive correlation means that as `DAYS_BIRTH` increases the customer is less likely to repay the loan. However, since `DAYS_BIRTH` is given as negative numbers, it's easier to understand its relationship to `TARGET` if we multiply by -1.

```
plt.style.use('seaborn-muted')

# Make DAYS_BIRTH positive
train_df.DAYS_BIRTH = train_df.DAYS_BIRTH * -1

# Plot the distribution of ages in years
sns.kdeplot(train_df.loc[train_df.TARGET == 0, 'DAYS_BIRTH'] / 365)
sns.kdeplot(train_df.loc[train_df.TARGET == 1, 'DAYS_BIRTH'] / 365)
plt.xlabel('Age in years')
plt.title('KDE Plot for Applicant Age')
plt.legend(['Repaid', 'Not Paid'])
```

From the plot it looks like, among the pool of applicants who were not able to repay their loan, the majority were younger than 30 years old.

### Features with highest negative correlation

The three features with the highest negative correlation are `EXT_SOURCE_3`, `EXT_SOURCE_2`, and `EXT_SOURCE_1`. These are described as *Normalized score from external data source*. Let's look at a heat map of their correlation with `TARGET`.
```
features = ['EXT_SOURCE_1', 'EXT_SOURCE_2', 'EXT_SOURCE_3', 'DAYS_BIRTH', 'TARGET']
ext_corr = train_df[features].corr()
ext_corr

sns.heatmap(ext_corr, annot=True)
```

The correlation matrix tells us that as the value of the external source features increases, the customer is more likely to repay the loan. Similarly, the older the customer, the higher the chance that the loan is repaid.

## Impute Missing Data

If more than 48% of a column's data is missing, the column will be entirely removed from the dataset. If less than 48% of a column's data is missing, the missing data will be imputed using the median of the column.

**EXCEPTION:** Columns will only be removed if they do not occur in the "features" array, since the following columns have the highest negative correlation: `EXT_SOURCE_1`, `EXT_SOURCE_2`, `EXT_SOURCE_3`, `DAYS_BIRTH`, `TARGET`.

```
missing_values = train_df.isna().sum()
missing_values_percent = missing_values*100/len(train_df)
miss_df = pd.concat(
    [missing_values.rename('missing_val'), missing_values_percent.rename('missing_val_percent')],
    axis=1
)
miss_df.sort_values(by='missing_val_percent', ascending=False).head(10)

print(f"Training data shape before dropping columns:", train_df.shape)

# get columns missing >= 48% of the information
missing_48pct = miss_df.loc[miss_df['missing_val_percent'] >= 48]
missing_48pct_rows = missing_48pct.index.values
print(f"Number of columns missing 48% or more of the data:", len(missing_48pct_rows))

for row in missing_48pct_rows:
    if row not in features:
        train_df = train_df.drop(row, axis=1)

print(f"Training data shape after dropping columns:", train_df.shape)

missing_values = train_df.isna().sum()
missing_values_percent = missing_values*100/len(train_df)
miss_df = pd.concat(
    [missing_values.rename('missing_val'), missing_values_percent.rename('missing_val_percent')],
    axis=1
)
print(f"Missing values data shape:", missing_values.shape)
miss_df.sort_values(by='missing_val_percent', ascending=False).head(20)

# Impute missing data by filling in NaNs with the median of the column
train_df = train_df.fillna(train_df.median())
```

## Feature Engineering

### Polynomial features

```
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import PolynomialFeatures

# Make new df with polynomial features
poly_features = train_df[features]

# Assign variables
y = poly_features.TARGET
X = poly_features.drop(columns='TARGET')

# Handle missing values
imputer = SimpleImputer(strategy='median')
X = imputer.fit_transform(X)

# Create polynomial features
poly = PolynomialFeatures(degree=3)
X = poly.fit_transform(X)

poly.get_feature_names(features[:-1])
```
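As a hedged follow-up sketch, assuming `X`, `y`, `poly`, and `features` from the cell above are still in scope, the transformed array can be wrapped back into a DataFrame with readable column names so that the correlation of each polynomial term with `TARGET` can be inspected:

```
# Sketch: put the polynomial terms into a DataFrame and check their correlation with TARGET.
# Assumes X (the transformed array), y, poly, and features exist from the previous cell.
poly_df = pd.DataFrame(X, columns=poly.get_feature_names(features[:-1]))
poly_df['TARGET'] = y.values

poly_corr = poly_df.corr()['TARGET'].sort_values()
print(poly_corr.head(10))   # strongest negative correlations
print(poly_corr.tail(10))   # strongest positive correlations
```

Terms that correlate more strongly with `TARGET` than the raw features did are candidates to keep for modeling.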
<a href="https://colab.research.google.com/github/danielsoy/ALOCC-CVPR2018/blob/master/funka_alibi1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt from PIL import Image import glob import os import shutil from collections import Counter import tensorflow as tf from tensorflow.keras.layers import Conv2D, Conv2DTranspose, UpSampling2D, Dense, Layer, Reshape, InputLayer, Flatten, Input, MaxPooling2D !git clone https://github.com/SeldonIO/alibi-detect.git %cd /content/alibi-detect/alibi_detect/od !pip install alibi-detect from alibi_detect.od import OutlierAE from alibi_detect.utils.visualize import plot_instance_score, plot_feature_outlier_image from google.colab import drive drive.mount('/content/drive') def img_to_np(path, resize = True): img_array = [] fpaths = glob.glob(path, recursive=True) for fname in fpaths: img = Image.open(fname).convert("RGB") if(resize): img = img.resize((64,64)) img_array.append(np.asarray(img)) images = np.array(img_array) return images path_train = "D:\\img\\capsule\\train\\**\*.*" path_test = "D:\\img\\capsule\\test\\**\*.*" train = img_to_np(path_train) test = img_to_np(path_test) train = train.astype('float32') / 255. test = test.astype('float32') / 255. encoding_dim = 1024 dense_dim = [8, 8, 128] encoder_net = tf.keras.Sequential( [ InputLayer(input_shape=train[0].shape), Conv2D(64, 4, strides=2, padding='same', activation=tf.nn.relu), Conv2D(128, 4, strides=2, padding='same', activation=tf.nn.relu), Conv2D(512, 4, strides=2, padding='same', activation=tf.nn.relu), Flatten(), Dense(encoding_dim,) ]) decoder_net = tf.keras.Sequential( [ InputLayer(input_shape=(encoding_dim,)), Dense(np.prod(dense_dim)), Reshape(target_shape=dense_dim), Conv2DTranspose(256, 4, strides=2, padding='same', activation=tf.nn.relu), Conv2DTranspose(64, 4, strides=2, padding='same', activation=tf.nn.relu), Conv2DTranspose(3, 4, strides=2, padding='same', activation='sigmoid') ]) od = OutlierAE( threshold = 0.001, encoder_net=encoder_net, decoder_net=decoder_net) adam = tf.keras.optimizers.Adam(lr=1e-4) od.fit(train, epochs=100, verbose=True, optimizer = adam) od.infer_threshold(test, threshold_perc=95) preds = od.predict(test, outlier_type='instance', return_instance_score=True, return_feature_score=True) for i, fpath in enumerate(glob.glob(path_test)): if(preds['data']['is_outlier'][i] == 1): source = fpath shutil.copy(source, 'img\\') filenames = [os.path.basename(x) for x in glob.glob(path_test, recursive=True)] dict1 = {'Filename': filenames, 'instance_score': preds['data']['instance_score'], 'is_outlier': preds['data']['is_outlier']} df = pd.DataFrame(dict1) df_outliers = df[df['is_outlier'] == 1] print(df_outliers) recon = od.ae(test).numpy() plot_feature_outlier_image(preds, test, X_recon=recon, max_instances=5, outliers_only=False, figsize=(15,15)) ```
``` You arrive at Easter Bunny Headquarters under cover of darkness. However, you left in such a rush that you forgot to use the bathroom! Fancy office buildings like this one usually have keypad locks on their bathrooms, so you search the front desk for the code. "In order to improve security," the document you find says, "bathroom codes will no longer be written down. Instead, please memorize and follow the procedure below to access the bathrooms." The document goes on to explain that each button to be pressed can be found by starting on the previous button and moving to adjacent buttons on the keypad: U moves up, D moves down, L moves left, and R moves right. Each line of instructions corresponds to one button, starting at the previous button (or, for the first line, the "5" button); press whatever button you're on at the end of each line. If a move doesn't lead to a button, ignore it. You can't hold it much longer, so you decide to figure out the code as you walk to the bathroom. You picture a keypad like this: 1 2 3 4 5 6 7 8 9 Suppose your instructions are: ULL RRDDD LURDL UUUUD You start at "5" and move up (to "2"), left (to "1"), and left (you can't, and stay on "1"), so the first button is 1. Starting from the previous button ("1"), you move right twice (to "3") and then down three times (stopping at "9" after two moves and ignoring the third), ending up with 9. Continuing from "9", you move left, up, right, down, and left, ending with 8. Finally, you move up four times (stopping at "2"), then down once, ending with 5. So, in this example, the bathroom code is 1985. Your puzzle input is the instructions from the document you found at the front desk. What is the bathroom code? ``` ``` NEIGHBOURS = { '1': ['1', '2', '3', '1'], '2': ['2', '3', '5', '1'], '3': ['3', '3', '6', '2'], '4': ['1', '5', '7', '4'], '5': ['2', '6', '8', '4'], '6': ['3', '6', '9', '5'], '7': ['4', '8', '7', '7'], '8': ['5', '9', '8', '7'], '9': ['6', '9', '9', '8'] } MOVES = { 'U': 0, 'R': 1, 'D': 2, 'L': 3} def decrypt(instructions, digit = '5'): for line in instructions: for character in line.strip(): digit = NEIGHBOURS[digit][MOVES[character]] print(digit, end="") decrypt(["ULL","RRDD", "LURDL", "UUUUD"]) with open('inputs/day2.txt', 'rt') as instructions: decrypt(instructions) ``` ``` --- Part Two --- You finally arrive at the bathroom (it's a several minute walk from the lobby so visitors can behold the many fancy conference rooms and water coolers on this floor) and go to punch in the code. Much to your bladder's dismay, the keypad is not at all like you imagined it. Instead, you are confronted with the result of hundreds of man-hours of bathroom-keypad-design meetings: 1 2 3 4 5 6 7 8 9 A B C D You still start at "5" and stop when you're at an edge, but given the same instructions as above, the outcome is very different: You start at "5" and don't move at all (up and left are both edges), ending at 5. Continuing from "5", you move right twice and down three times (through "6", "7", "B", "D", "D"), ending at D. Then, from "D", you move five more times (through "D", "B", "C", "C", "B"), ending at B. Finally, after five more moves, you end at 3. So, given the actual keypad layout, the code would be 5DB3. Using the same instructions in your puzzle input, what is the correct bathroom code? 
``` ``` NEIGHBOURS = { '1': ['1', '1', '3', '1'], '2': ['2', '3', '6', '2'], '3': ['1', '4', '7', '2'], '4': ['4', '4', '8', '3'], '5': ['5', '6', '5', '5'], '6': ['2', '7', 'A', '5'], '7': ['3', '8', 'B', '6'], '8': ['4', '9', 'C', '7'], '9': ['9', '9', '9', '8'], 'A': ['6', 'B', 'A', 'A'], 'B': ['7', 'C', 'D', 'A'], 'C': ['8', 'C', 'C', 'B'], 'D': ['B', 'D', 'D', 'D'] } decrypt(["ULL","RRDD", "LURDL", "UUUUD"]) with open('inputs/day2.txt', 'rt') as instructions: decrypt(instructions) ```
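The neighbour tables above hard-code each keypad's geometry. As a hedged alternative sketch, the same moves can be derived from a keypad layout string, which makes the tables easy to cross-check and extends to other keypad shapes:

```
# Alternative sketch: derive the moves from a keypad layout instead of a lookup table.
KEYPAD_1 = ["123", "456", "789"]
KEYPAD_2 = ["  1  ", " 234 ", "56789", " ABC ", "  D  "]
OFFSETS = {'U': (-1, 0), 'D': (1, 0), 'L': (0, -1), 'R': (0, 1)}

def decrypt_grid(instructions, keypad, start='5'):
    # locate the starting key
    row, col = next((r, c) for r, line in enumerate(keypad)
                    for c, ch in enumerate(line) if ch == start)
    code = ""
    for line in instructions:
        for move in line.strip():
            dr, dc = OFFSETS[move]
            nr, nc = row + dr, col + dc
            # only step if the target position is a real key
            if 0 <= nr < len(keypad) and 0 <= nc < len(keypad[nr]) and keypad[nr][nc] != ' ':
                row, col = nr, nc
        code += keypad[row][col]
    return code

print(decrypt_grid(["ULL", "RRDDD", "LURDL", "UUUUD"], KEYPAD_1))  # expected: 1985
print(decrypt_grid(["ULL", "RRDDD", "LURDL", "UUUUD"], KEYPAD_2))  # expected: 5DB3
```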
# Importing Needed packages

Unlike the original example, `requests` is imported here; it replaces `wget`, which did not work properly.

```
import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import numpy as np
import requests
%matplotlib inline
```

# Download data

Download the CSV from the URL: fetch the content from the URL -> copy just the content -> save it to a local CSV file. Don't forget that csv.write() and csv.close() always go together.

```
url = "https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv"
req = requests.get(url)
url_content = req.content
csv_file = open('FuelConsumption.csv','wb')
csv_file.write(url_content)
csv_file.close()
```

Instead of 'wget', `requests` was used.

# Reading the data

Load the data using pandas.

```
df = pd.read_csv("./FuelConsumption.csv")

# take a look at the dataset
df.head()
```

## Selecting only the desired columns

This is supported directly by pandas. Keep in mind that `df[[]]` is a list-in-list structure. `df.head(n)` can be used to show only the first n rows.

```
cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)

plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()

msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]

plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()

from sklearn import linear_model
regr = linear_model.LinearRegression()
x = np.asanyarray(train[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB']])
y = np.asanyarray(train[['CO2EMISSIONS']])
regr.fit (x, y)
# The coefficients
print ('Coefficients: ', regr.coef_)
```

$\hat{y} = \theta + \theta_{1}x_{1} + \theta_{2}x_{2} + ... + \theta_{n}x_{n}$

$Co2Emission = \theta + \theta_{1}*EngineSize + \theta_{2}*Cylinders + \theta_{3}*FuelConsumption$

As mentioned before, **Coefficient** and **Intercept** are the parameters of the fit line. Given that it is a multiple linear regression with 3 parameters, and knowing that the parameters are the intercept and the coefficients of the hyperplane, sklearn can estimate them from our data. Scikit-learn uses the plain Ordinary Least Squares method to solve this problem.

#### Ordinary Least Squares (OLS)

OLS is a method for estimating the unknown parameters in a linear regression model. OLS chooses the parameters of a linear function of a set of explanatory variables by minimizing the sum of the squares of the differences between the target dependent variable and those predicted by the linear function. In other words, it tries to minimize the sum of squared errors (SSE) or mean squared error (MSE) between the target variable (y) and our predicted output ($\hat{y}$) over all samples in the dataset.

OLS can find the best parameters using one of the following methods:

```
- Solving the model parameters analytically using closed-form equations
- Using an optimization algorithm (Gradient Descent, Stochastic Gradient Descent, Newton's Method, etc.)
``` # Predict ``` y_hat= regr.predict(test[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB']]) x = np.asanyarray(test[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB']]) y = np.asanyarray(test[['CO2EMISSIONS']]) print("Residual sum of squares: %.2f" % np.mean((y_hat - y) ** 2)) # Explained variance score: 1 is perfect prediction print('Variance score: %.2f' % regr.score(x, y)) ``` **explained variance regression score:** If $\hat{y}$ is the estimated target output, y the corresponding (correct) target output, and Var is Variance, the square of the standard deviation, then the explained variance is estimated as follow: $\texttt{explainedVariance}(y, \hat{y}) = 1 - \frac{Var{ y - \hat{y}}}{Var{y}}$ The best possible score is 1.0, lower values are worse. <h2 id="practice">Practice</h2> Try to use a multiple linear regression with the same dataset but this time use __FUEL CONSUMPTION in CITY__ and __FUEL CONSUMPTION in HWY__ instead of FUELCONSUMPTION_COMB. Does it result in better accuracy? ### FUEL CONSUMPTION in CITY ``` regr = linear_model.LinearRegression() x = np.asanyarray(train[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY']]) y = np.asanyarray(train[['CO2EMISSIONS']]) regr.fit (x, y) # The coefficients print ('Coefficients: ', regr.coef_) y_hat= regr.predict(test[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY']]) x = np.asanyarray(test[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY']]) y = np.asanyarray(test[['CO2EMISSIONS']]) print("Residual sum of squares: %.2f" % np.mean((y_hat - y) ** 2)) # Explained variance score: 1 is perfect prediction print('Variance score: %.2f' % regr.score(x, y)) ```
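Returning to the OLS notes above, here is a minimal sketch of the closed-form (normal equation) solution, assuming `np` and the `train` split from the earlier cells are available; it should reproduce the intercept and coefficients that `LinearRegression` estimated for the same three features:

```
# Sketch of the closed-form OLS solution mentioned above:
# theta_hat = (X^T X)^{-1} X^T y, with a column of ones added for the intercept.
X_ols = np.asanyarray(train[['ENGINESIZE', 'CYLINDERS', 'FUELCONSUMPTION_COMB']])
y_ols = np.asanyarray(train[['CO2EMISSIONS']])

X_design = np.hstack([np.ones((X_ols.shape[0], 1)), X_ols])  # prepend the intercept column
theta_hat = np.linalg.solve(X_design.T @ X_design, X_design.T @ y_ols)

# These should match regr.intercept_ and regr.coef_ from the earlier fit on the same three features.
print('Intercept (normal equation):', theta_hat[0])
print('Coefficients (normal equation):', theta_hat[1:].ravel())
```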
# Text Pattern Analysis

We often deal with different character strings on which we want to perform some processing for various purposes. In this section we will give an introduction to some special functions of the **string** data type and to **regular expressions**.

Objectives
-------------
- String manipulation
- Regular expressions

## Pattern Search in Strings
-----------------------------------
In this section, we will review some basic concepts about the core string functions and more advanced pattern searching.

**String**

As we have already seen in the course, a string is delimited by <code>'text'</code> or <code>"text"</code> for single-line strings. For multi-line strings there is <code>"""text"""</code>.

```
my_string = "This is a string"
my_string2 = 'This is also a string'

my_string = 'And this? It's the wrong string'
my_string = "And this? It's the correct string"
print(my_string)
```

**Review of basic functions**

```
# len -> gives us the length of the string
len(my_string)

# str() -> conversion to string
str(123)

# Concatenation
string1= 'Awesome day'
string2 = 'for biking'
print(string1 +" "+ string2)

"hola " + 2

nombre = 'Gonzalo'
f"hola {nombre}"

# indexing
print(string1[0]) # get the first character of the string
print(string1[-1]) # get the last character of the string
print(string1[len(string1)-1]) # get the last character of the string

# Slicing
print(string1[0:3]) # capture the first 3 characters
print(string1[:5]) # first 5 characters
print(string1[5:]) # from position 5 onwards

# Stride
print(string1[0:6:2]) # select the characters at positions 0 to 6 in steps of 2
print(string1[::-1]) # reverse the string
```

**Basic operations**

```
# lower -> convert to lowercase
print(string1.lower())

# upper -> convert to uppercase
print(string1.upper())

# capitalize -> first letter of the text in uppercase
print(string1.capitalize())
print(string1.title())

# split -> splits a text according to a separator
my_string = "This string will be split"
print(my_string.split(sep=" "))
print(my_string.split(sep=" ", maxsplit=2)) # maxsplit -> limits the number of splits applied to the string

# \n -> defines a line break in text
# \t -> tab
my_string = "This string will be split\nin two"
print(my_string)

# splitlines -> splits text by line breaks
print(my_string.splitlines())
print(my_string.split('\n'))

# join -> concatenates the strings of a list
my_list = ["this", "would", "be", "a", "string"]
print(" ".join(my_list))

# strip -> cleans up text by removing whitespace or line breaks from the ends of a string
my_string = " This string will be stripped\n"
print(my_string)
print(my_string.strip())
```

**Pattern search**

```
# find -> searches for a substring in the text
my_string = "Where's Waldo?"
print(my_string.find("Waldo"))
print(my_string.find("Wenda")) # the searched word was not found

# index -> similar to find, performs the search
my_string = "Where's Waldo?"
my_string.index("Waldo")
print(my_string.index("Wenda"))

# count -> returns the number of times a word appears in the text
my_string = "How many fruits do you have in your fruit basket?"
my_string.count("fruit")

# replace -> replaces one piece of text with another
my_string = "The red house is between the blue house and the old house"
print(my_string.replace("house", "car"))
print(my_string.replace("house", "car", 2)) # replaces the word 'house' only 2 times
```

# Exercises

1. Write a function that, given a string, returns the length of its last word. Words are considered to be separated by one or more spaces. There may also be spaces at the beginning or at the end of the string passed as a parameter.

**Considerations:**
- The input strings are assumed to contain only words [abc..] and spaces.

**Example input and output:**
- Input: "Hola a todos" -> Expected output: 5
- Input: "  Bienvenido al curso  " -> Expected output: 5
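One possible solution sketch for the exercise, using only the `split` method covered above (the function name is just an example):

```
# Possible solution sketch: length of the last word.
def last_word_length(text):
    words = text.split()          # split() with no argument ignores repeated, leading, and trailing spaces
    return len(words[-1]) if words else 0

print(last_word_length("Hola a todos"))             # 5
print(last_word_length("  Bienvenido al curso  "))  # 5
```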
# U-SimBA Demo This notebook explains the usage of U-SimBA using a simple NN model for MNIST image classification. ## Setup Install U-SimBA, implemented using Adversarial Robustness Toolbox (ART; version 1.7.0). Code is available in [our forked version of ART](https://github.com/kztakemoto/adversarial-robustness-toolbox/blob/main/art/attacks/evasion/universal_simba.py). ``` !pip install git+https://github.com/kztakemoto/adversarial-robustness-toolbox ``` Import libraries. ``` import tensorflow as tf import numpy as np from art.estimators.classification import TensorFlowV2Classifier from art.attacks.evasion import Universal_SimBA import matplotlib.pyplot as plt import logging import random ``` Configure a logger to capture ART outputs; these are printed in console and the level of detail is set to INFO. ``` logger = logging.getLogger() logger.setLevel(logging.INFO) handler = logging.StreamHandler() formatter = logging.Formatter('[%(levelname)s] %(message)s') handler.setFormatter(formatter) logger.addHandler(handler) ``` ### MNIST model Load MNIST dataset. ``` mnist = tf.keras.datasets.mnist (X_train, y_train), (X_test, y_test) = mnist.load_data() X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train = X_train.reshape((60000, 28, 28, 1)) X_test = X_test.reshape((10000, 28, 28, 1)) X_train, X_test = X_train / 255.0, X_test / 255.0 ``` Generate an input dataset, used to generate a UAP, and validation dataset from the test dataset. ``` # Input dataset consists of randomly selected 100 images per class. np.random.seed(111) input_idx = [] num_each_class = 100 for class_i in range(10): idx = np.random.choice(np.where(y_test == class_i)[0], num_each_class, replace=False).tolist() input_idx = input_idx + idx random.shuffle(input_idx) X_input, y_input = X_test[input_idx], y_test[input_idx] # The rest is used as the validation data rest_idx = np.ones(len(X_test), dtype=bool) rest_idx[input_idx] = False X_val, y_val = X_test[rest_idx], y_test[rest_idx] ``` Define and train the NN model. ``` model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28, 1)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ]) loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False) model.compile(optimizer='adam', loss=loss_fn, metrics=['accuracy']) model.fit(X_train, y_train, epochs=5) ``` Wrap the model to be able to use it in ART. ``` classifier = TensorFlowV2Classifier( model=model, nb_classes=10, input_shape=(28, 28, 1), loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), ) ``` Define the functions for visualization. 
``` # set MNIST labels label = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'] # normalization for image plot def norm(x): return (x - np.min(x)) / (np.max(x) - np.min(x) + 1e-5) # plot sample images def img_plot(x, X_adv, target=None): preds_X_adv = np.argmax(classifier.predict(X_adv), axis=1) preds_x = np.argmax(classifier.predict(x), axis=1) if target is None: sr = np.sum(preds_X_adv != preds_x) / x.shape[0] print('Success rate of non-targeted attacks: {:.1f}%'.format(sr * 100)) else: sr = np.sum(preds_X_adv == np.argmax(target, axis=1)) / x.shape[0] print('Success rate of targeted attacks: {:.1f}%'.format(sr * 100)) print('Plot the clean images (top), adversarial images (bottom), and UAP') n = 10 plt.figure(figsize=(20, 4)) for i in range(n + 1): x_ori = norm(x[i-1].reshape(28,28)) x_adv = norm(X_adv[i-1].reshape(28,28)) noise = norm((X_adv[i-1] - x[i-1]).reshape(28,28)) # display original if i != 0: ax = plt.subplot(2, n + 1, i + 1) plt.title(label[preds_x[i-1]]) plt.imshow(x_ori) plt.gray() ax.set_axis_off() # display original + noise bx = plt.subplot(2, n + 1, i + n + 2) if i == 0: plt.title('UAP') plt.imshow(noise) plt.gray() else: plt.title(label[preds_X_adv[i-1]]) plt.imshow(x_adv) plt.gray() bx.set_axis_off() plt.show() ``` ## Nontargeted attacks Build attacker. ``` nontargeted_attack = Universal_SimBA( classifier, attack='dct', epsilon=0.2, freq_dim=int(X_input.shape[1]/8), # this is related to $f_d$ in our paper; specifically, freq_dim = 1 / $f_d$ max_iter=2000, # corresponds to $i_\max$ in our paper eps=0.2, # corresponds to $\xi$ in our paper norm=np.inf, # corresponds to $p$ in our paper targeted=False, batch_size=256 ) ``` Perform nontargeted attacks and get adversarial examples for the input data. ``` X_input_adv_nontargeted = nontargeted_attack.generate(X_input) ``` Generate adversarial examples for the validation data. ``` X_val_adv_nontargeted = X_val + nontargeted_attack.noise ``` Comput success rate of non-targeted attacks (fooling rate $R_f$) for the validation dataset and plot the clean images, UAP, and adversarial images. ``` img_plot(X_val, X_val_adv_nontargeted) ``` ### Supplementary note The above example considers DCT basis as search directions. Standard basis (i.e., pixel attack) is also used. ``` nontargeted_attack = Universal_SimBA( classifier, attack='px', epsilon=0.2, max_iter=2000, eps=0.2, norm=np.inf, targeted=False, batch_size=256 ) ``` ## Targeted attacks Get one-hot verctors for a target class. ``` target_class = 2 target_X_input = tf.keras.utils.to_categorical([target_class] * len(X_input), 10) target_X_val = tf.keras.utils.to_categorical([target_class] * len(X_val), 10) ``` Build attacker. ``` targeted_attack = Universal_SimBA( classifier, attack='dct', epsilon=0.2, freq_dim=int(X_input.shape[1]/8), max_iter=2000, eps=0.2, norm=np.inf, targeted=True, batch_size=256 ) ``` Perform targeted attacks and get adversarial examples for the input data. ``` X_input_adv_targeted = targeted_attack.generate(X_input, y = target_X_input) ``` Generate adversarial examples for the validation data. ``` X_val_adv_targeted = X_val + targeted_attack.noise ``` Comput success rate of targeted attacks (target attack success rate $R_s$) for the validation dataset and plot the clean images, UAP, and adversarial images. ``` img_plot(X_val, X_val_adv_targeted, target_X_val) ```
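As a small hedged check, assuming `X_val`, `X_val_adv_nontargeted`, and `X_val_adv_targeted` from the cells above are still in memory, the universal perturbation can be recovered by differencing and verified against the $L_\infty$ budget (`eps` = 0.2, i.e. $\xi$) used when building the attackers:

```
# Sketch: recover each UAP and check it respects the L-infinity budget (eps = 0.2).
# Assumes X_val, X_val_adv_nontargeted, and X_val_adv_targeted exist from the cells above.
for name, X_adv in [('nontargeted', X_val_adv_nontargeted), ('targeted', X_val_adv_targeted)]:
    uap = X_adv[0] - X_val[0]          # the same perturbation is added to every image
    print('{} UAP: L-inf norm = {:.4f}'.format(name, np.abs(uap).max()))
    np.save('uap_{}.npy'.format(name), np.squeeze(uap))  # save for reuse on other inputs
```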
# <center> Find and Download Landsat 8 Cloudless images</center> <center>- Place a marker somewhere on the map</center> <center>- Choose the band combination and the date range</center> <center>- Click the "Get Image" button. This will find the most cloud free image between your date range and display the image on the map</center> <center>- Download a lower quality version of the image by clicking the "Download Image" button</center> <center>- Access the image's full size source files in S3 AWS by clicking the "Open Image Files" button</center> <center><h3>Please Note: The map may take a minute to load</h3> </center> ``` import ee import os import re import geemap import ipywidgets as widgets from datetime import datetime from ipyleaflet import LayersControl, DrawControl from IPython.display import display, HTML display(HTML(""" <style> .output { display: flex; align-items: center; text-align: center; } </style> """)) style = {'description_width': 'initial'} #band selection drop downs bands = widgets.Dropdown( description='<b>Select RGB Combo:</b>', options=['Natural Color 4/3/2','Natural With Atmospheric Removal 7/5/3', 'Color Infrared 5/4/3', 'False Color (Urban) 7/6/4','Agriculture 6/5/2', 'Atmospheric Penetration 7/6/5', 'Healthy Vegetation 5/6/2', 'Land/Water 5/6/4', 'Shortwave Infrared 7/5/4', 'Vegetation Analysis 6/5/4'], value='Natural Color 4/3/2', layout=widgets.Layout(width='350px'), style=style ) months = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'] days = [i for i in range(1, 32)] years = [i for i in range(2013, datetime.today().year + 1)] start_months_drop = widgets.Dropdown( options=months, description='<b>Start Date:</b>', layout=widgets.Layout(width='170px'), style=style, ) start_days_drop = widgets.Dropdown( options=days, description='', layout=widgets.Layout(width='50px'), style=style, ) start_year_drop = widgets.Dropdown( options=years, description='', layout=widgets.Layout(width='70px'), style=style, ) end_months_drop = widgets.Dropdown( options=months, description='<b>End Date:</b>', layout=widgets.Layout(width='170px'), style=style, ) end_days_drop = widgets.Dropdown( options=days, description='', layout=widgets.Layout(width='50px'), style=style, ) end_year_drop = widgets.Dropdown( options=years, description='', layout=widgets.Layout(width='70px'), style=style, ) box = widgets.HBox([bands]) box2 = widgets.HBox([start_months_drop, start_days_drop, start_year_drop, end_months_drop,end_days_drop,end_year_drop]) submit = widgets.Button( description='Load Image', button_style='primary', tooltip='Click submit to find the Landsat 8 image', style=style) download = widgets.Button( description='Download Image', button_style='primary', tooltip='Click download', style=style) download2 = widgets.Button( description='Open Image Files', button_style='primary', tooltip='Click download2', style=style) output = widgets.Output() download_box = widgets.HBox([download, download2]) def landsat_lessCloudy(image): with output: output.clear_output() #dates start_date_string = "{} {} {}".format(start_months_drop.value,start_days_drop.value,start_year_drop.value) try: start_date = datetime.strptime(start_date_string, '%B %d %Y') except ValueError: print('Day is out of range for month') return end_date_string = "{} {} {}".format(end_months_drop.value,end_days_drop.value,end_year_drop.value) try: end_date = datetime.strptime(end_date_string, '%B %d %Y') except ValueError: print('Day is out of range for month') return 
if start_date >= end_date: print('The end date must be the more recent date.') return try: last_drawn_marker = coordinates_collection[-1] coords = ", ".join(last_drawn_marker) except: print('You must place a point on the map for the tool to run.') return #create coordinates separate_xy = coords.split(',') x = float(separate_xy[0]) y = float(separate_xy[1]) # input coordinates point = ee.Geometry.Point(x, y) #create start and end date in correct format startdate = start_date.strftime("%Y-%m-%d") enddate = end_date.strftime("%Y-%m-%d") # start/end dates start = ee.Date(startdate) finish = ee.Date(enddate) # raw collection collection = ee.ImageCollection('LANDSAT/LC08/C01/T1') # list by cloud cover filteredCollection = ee.ImageCollection('LANDSAT/LC08/C01/T1') \ .filterBounds(point) \ .filterDate(start, finish) \ .sort('CLOUD_COVER', True) # get lowest percentage of cloud cover image lesscloudsimage = filteredCollection.first() #Band Combination Choice band_choice = bands.value if band_choice == 'Natural Color 4/3/2': Band_Combo = {'bands': ['B4', 'B3', 'B2'], 'min': 5000, 'max': 20000, 'gamma': 1} elif band_choice == 'Natural With Atmospheric Removal 7/5/3': Band_Combo = {'bands': ['B7', 'B5', 'B3'], 'min': 5000, 'max': 20000, 'gamma': 1} elif band_choice == 'Color Infrared 5/4/3': Band_Combo = {'bands': ['B5', 'B4', 'B3'], 'min': 5000, 'max': 20000, 'gamma': 1} elif band_choice == 'False Color (Urban) 7/6/4': Band_Combo = {'bands': ['B7', 'B6', 'B4'], 'min': 5000, 'max': 20000, 'gamma': 1} elif band_choice == 'Agriculture 6/5/2': Band_Combo = {'bands': ['B6', 'B5', 'B2'], 'min': 5000, 'max': 20000, 'gamma': 1} elif band_choice == 'Atmospheric Penetration 7/6/5': Band_Combo = {'bands': ['B7', 'B6', 'B5'], 'min': 5000, 'max': 20000, 'gamma': 1} elif band_choice == 'Healthy Vegetation 5/6/2': Band_Combo = {'bands': ['B5', 'B6', 'B2'], 'min': 5000, 'max': 20000, 'gamma': 1} elif band_choice == 'Land/Water 5/6/4': Band_Combo = {'bands': ['B5', 'B6', 'B4'], 'min': 5000, 'max': 20000, 'gamma': 1} elif band_choice == 'Shortwave Infrared 7/5/4': Band_Combo = {'bands': ['B7', 'B5', 'B4'], 'min': 5000, 'max': 20000, 'gamma': 1} elif band_choice == 'Vegetation Analysis 6/5/4': Band_Combo = {'bands': ['B6', 'B5', 'B4'], 'min': 5000, 'max': 20000, 'gamma': 1} Map.setCenter(x,y,8) if image == 'values': return band_choice, lesscloudsimage try: return Map.addLayer(lesscloudsimage, Band_Combo, 'Landsat_8_' + band_choice) except: print('No Landsat images return for the time period selected. 
Please adjust the dates and choose a wider temporal range.') return def landsat_image_download(x): with output: output.clear_output() band_choice, lesscloudsimage = landsat_lessCloudy('values') pattern = re.compile(r' \d/\d/\d') cleaned_bands = pattern.sub('', band_choice) out_dir = os.path.join(os.path.expanduser('~'), 'Downloads') if not os.path.exists(out_dir): os.makedirs(out_dir) filename = os.path.join(out_dir, cleaned_bands + '.tif') geemap.ee_export_image(lesscloudsimage, filename=filename, scale=200) def get_landsat_info(x): with output: output.clear_output() band_choice, lesscloudsimage = landsat_lessCloudy('values') imag_props = geemap.image_props(lesscloudsimage) json = imag_props.getInfo() collection_num = json['system:id'].split('/')[2].replace('0', '').lower() path = str(json['WRS_PATH']).zfill(3) row = str(json['WRS_ROW']).zfill(3) ID = json['LANDSAT_PRODUCT_ID'] link = r'https://landsat-pds.s3.amazonaws.com/{}/L8/{}/{}/{}/index.html'.format(collection_num, path, row, ID) return print('Click on the following link to view and download to the Landsat 8 file collection: \n' + link) submit.on_click(landsat_lessCloudy) download.on_click(landsat_image_download) download2.on_click(get_landsat_info) Map = geemap.Map(lite_mode=True) Map.add_basemap(basemap='Esri Topo World') Map.setCenter(-94.567394, 39.795006, zoom=None) draw_control = DrawControl( marker={"shapeOptions": {"color": "#3388ff"}}, polygon={}, polyline={}, circlemarker={}, edit=True, remove=True, ) coordinates_collection = [] def handle_draw(target, action, geo_json): find_coords = re.findall('-?\d+[.]\d+,\s\d+[.]\d+', str(geo_json)) coordinates_collection.append(find_coords) control = LayersControl(position='topright') draw_control.on_draw(handle_draw) Map.add_control(control) Map.add_control(draw_control) #Map.layout.width = '500px' #Map.layout.height = '600px' Map box box2 submit download_box output ```
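For readers who want the same least-cloudy lookup without the widget interface, the core Earth Engine query inside `landsat_lessCloudy` can be reduced to a few lines. This is a sketch that reuses only calls already shown above (`ee.Geometry.Point`, the `LANDSAT/LC08/C01/T1` collection with `filterBounds`/`filterDate`/`sort`, and `geemap.ee_export_image`); the function name, example coordinates, dates, and output filename are illustrative and not part of the original tool.

```
def least_cloudy_landsat8(lon, lat, start, end):
    """Return the least cloudy Landsat 8 scene over a point for a date range (sketch)."""
    point = ee.Geometry.Point(lon, lat)
    filtered = (ee.ImageCollection('LANDSAT/LC08/C01/T1')
                .filterBounds(point)
                .filterDate(ee.Date(start), ee.Date(end))
                .sort('CLOUD_COVER', True))
    return filtered.first()

# Example usage (placeholder coordinates and dates)
image = least_cloudy_landsat8(-94.567394, 39.795006, '2020-06-01', '2020-09-01')

# Export a reduced-resolution GeoTIFF, as the "Download Image" button does
geemap.ee_export_image(image, filename='least_cloudy.tif', scale=200)
```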
```
#default_exp core
```

## How do you evaluate monolingual embeddings?

What constitutes a good embedding? It would be easy to provide a cursory answer - a good embedding is one that lends itself well to a downstream task. While 100% true and accurate, this answer does not allow us to speak to the goodness of embeddings directly without having to train an additional model.

Another way to look at this is to say that embeddings are valuable and useful to the extent that they encode information about the language and the world around us. This is in line with the reasoning behind why embeddings were created in the first place - we want to train our embeddings, or our language model, on vast amounts of unlabeled text, in a way that encodes syntactic and semantic information that can give us a boost on a downstream task where we have labels but the dataset might be of limited size.

Taking the second definition, we can attempt to query our embeddings on textual examples and evaluate the accuracy of the answers. In its simplest form, we can perform algebraic operations in the embedding space ("king" - "man" + "woman" = ?) and use this as a mechanism for evaluation. While not without [issues](https://www.aclweb.org/anthology/W16-2503.pdf), this approach does allow us to say something about the structure of the trained embedding space.

To demonstrate this approach, let's use the embeddings from the classic, seminal paper [Efficient Estimation of Word Representations in Vector Space](https://arxiv.org/abs/1301.3781) by Tomas Mikolov et al. We can download the embeddings using fastai (they were originally shared by the authors [here](https://code.google.com/archive/p/word2vec/)). Please note - the file is 1.7 GB compressed.

```
from fastai.data.all import untar_data
embedding_path = untar_data('https://storage.googleapis.com/text-embeddings/GoogleNews-vectors-negative300.bin.tar.gz')
```

Now let's load the embeddings using `gensim`.

```
from gensim.models.keyedvectors import KeyedVectors

gensim_embeddings = KeyedVectors.load_word2vec_format(embedding_path, binary=True)
len(gensim_embeddings.index2entity), gensim_embeddings['cat'].shape
```

3 million distinct embeddings, each of dimensionality 300!

Let's perform the evaluation using the original `questions-words.txt` list used in the paper (shared by the authors on GitHub [here](https://github.com/tmikolov/word2vec/blob/master/questions-words.txt)). We could use the functionality built into `gensim` to run the evaluation, but that would make it tricky to evaluate embeddings that we train ourselves or to modify the list of queries. Instead, let's perform the evaluation using code that we develop in this repository. As a starting point, all we need is an array of embeddings and a list with words corresponding to each vector!

<!-- We will use [annoy](https://github.com/spotify/annoy) for approximate nearest neighbor lookup. Upon the first run, the embeddings will be added to an index and multiple trees enabling the search will be constructed. Given the size of these embeddings, this took around 5 minutes for me.
-->

```
#export
import numpy as np

class Embeddings():
    def __init__(self, embeddings, index2word):
        '''embeddings - numpy array of embeddings, index2word - list of words corresponding to embeddings'''
        assert len(embeddings) == len(index2word)
        self.vectors = embeddings
        self.i2w = index2word
        self.w2i = {w:i for i, w in enumerate(index2word)}

    def analogy(self, a, b, c, n=5, discard_question_words=True):
        '''
        a is to b as c is to ?

        Performs the following algebraic calculation: result = emb_a - emb_b + emb_c
        Looks up n closest words to result.

        Implements the embedding space math behind the famous word2vec example:
        king - man + woman = queen
        '''
        question_word_indices = [self.w2i[word] for word in [a, b, c]]
        a, b, c = [self.vectors[idx] for idx in question_word_indices]
        result = a - b + c
        if discard_question_words:
            return self.nn_words_to(result, question_word_indices, n)
        else:
            return self.nn_words_to(result, n=n)

    def nn_words_to(self, vector, skip_indices=[], n=5):
        nn_indices = self.word_idxs_ranked_by_cosine_similarity_to(vector)
        nn_words = []
        for idx in nn_indices:
            if idx in skip_indices:
                continue
            nn_words.append(self.i2w[idx])
            if len(nn_words) == n:
                break
        return nn_words

    def word_idxs_ranked_by_cosine_similarity_to(self, vector):
        return np.flip(
            np.argsort(self.vectors @ vector / (self.vectors_lengths() * np.linalg.norm(vector, axis=-1)))
        )

    def vectors_lengths(self):
        if not hasattr(self, 'vectors_length_cache'):
            self.vectors_length_cache = np.linalg.norm(self.vectors, axis=-1)
        return self.vectors_length_cache

    def __getitem__(self, word):
        return self.vectors[self.w2i[word]]

    @classmethod
    def from_txt_file(cls, path_to_txt_file, limit=None):
        '''create embeddings from word2vec embeddings text file'''
        index, vectors = [], []
        with open(path_to_txt_file) as f:
            f.readline()  # discarding the header line
            for line in f:
                try:
                    embedding = np.array([float(s) for s in line.split()[1:]])
                    if embedding.shape[0] != 300:
                        continue
                    vectors.append(embedding)
                    index.append(line.split()[0])
                except ValueError:
                    pass  # we may have encountered a 2 word embedding, for instance 'New York' or 'w dolinie'
                if limit is not None and len(vectors) == limit:
                    break
        return cls(np.stack(vectors), index)

gensim_embeddings.vectors[:30000].shape

# grabbing just the vectors and mapping of vectors to words from gensim embeddings and instantiating our own embedding object
# let's stick to just 50_000 of the most popular words so that the computation will run faster
embeddings = Embeddings(gensim_embeddings.vectors[:50_000], gensim_embeddings.index2word[:50_000])
```

Now that we have the Embeddings in place, we can run some examples.

France is to Paris as ? is to Warsaw...

```
%%time
embeddings.analogy('France', 'Paris', 'Warsaw', 5)
```

Got that one right! Now let's try the classic example of king - man + woman = ?

```
%%time
embeddings.analogy('king', 'man', 'woman', 5)
```

We get it right as well! Despite kings and queens not being discussed that often in the news today, this is still a great and slightly unexpected performance. Why should such an algebraic structure emerge when training on a lot of text data in the first place? And yet it does!
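As a sanity check, the same analogy queries can be run against gensim's built-in nearest-neighbour search via `KeyedVectors.most_similar`. A small sketch is below; the results may differ slightly from our `Embeddings.analogy` because gensim searches the full 3-million-word vocabulary rather than the 50,000-word slice used above.

```
# king - man + woman, via gensim's built-in API
gensim_embeddings.most_similar(positive=['king', 'woman'], negative=['man'], topn=5)

# France - Paris + Warsaw (we would expect 'Poland' or something similar near the top)
gensim_embeddings.most_similar(positive=['France', 'Warsaw'], negative=['Paris'], topn=5)
```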
Let's explore the performance further by running through the full list of question-answer pairs.

```
#export
from collections import defaultdict
import pandas as pd

def evaluate_monolingual_embeddings(embeddings, lower=False):
    with open('data/questions-words.txt') as f:
        lines = f.readlines()

    total_seen = defaultdict(lambda: 0)
    correct = defaultdict(lambda: 0)
    question_types = []
    not_found = 0

    for line in lines:
        if line[0] == ':':
            question_types.append(line[1:].strip())
            current_type = question_types[-1]
        else:
            total_seen[current_type] += 1
            example = line.strip().split(' ')
            if lower:
                example = [word.lower() for word in example]
            try:
                result = embeddings.analogy(*example[:2], example[3], 1)
                if example[2] == result[0]:
                    correct[current_type] += 1
            except KeyError:
                not_found += 1

    types = []
    results = []
    for key in total_seen.keys():
        types.append(key)
        results.append(f'{correct[key]} / {total_seen[key]}')
    df = pd.DataFrame(data={'question type': types, 'result': results})
    display(df)

    print('Accuracy:', sum(correct.values()) / sum(total_seen.values()))
    print('Examples with missing words in the dictionary:', not_found)
    print('Total examples:', sum(total_seen.values()))

%%time
evaluate_monolingual_embeddings(embeddings)
```

This is a very good result - bear in mind that we are limiting ourselves to top@1 accuracy and that we are counting synonyms as failures! Another consideration is that while word2vec embeddings are ingenious in how efficient they are to train, they are a relatively simple way of encoding information about a language. Still, it is remarkable that embedding spaces possess the quality that allows us to perform operations such as the above!
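As mentioned earlier, gensim also ships its own evaluation routine, which can serve as a cross-check on the numbers above. This is a sketch only: `evaluate_word_analogies` exists in recent gensim versions (older releases expose a similar `accuracy` method instead), and it expects the same `questions-words.txt` file used by our function.

```
# Cross-check with gensim's built-in analogy evaluation (gensim >= 3.7)
score, sections = gensim_embeddings.evaluate_word_analogies('data/questions-words.txt')
print('Overall analogy accuracy reported by gensim:', score)

# Per-section results, analogous to the dataframe produced above
for section in sections:
    total = len(section['correct']) + len(section['incorrect'])
    print(section['section'], len(section['correct']), '/', total)
```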
# T81-558: Applications of Deep Neural Networks

**Module 14: Other Neural Network Techniques**

* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).

# Module 14 Video Material

* Part 14.1: What is AutoML [[Video]](https://www.youtube.com/watch?v=TFUysIR5AB0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_01_automl.ipynb)
* Part 14.2: Using Denoising AutoEncoders in Keras [[Video]](https://www.youtube.com/watch?v=4bTSu6_fucc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_02_auto_encode.ipynb)
* **Part 14.3: Training an Intrusion Detection System with KDD99** [[Video]](https://www.youtube.com/watch?v=1ySn6h2A68I&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_03_anomaly.ipynb)
* Part 14.4: Anomaly Detection in Keras [[Video]](https://www.youtube.com/watch?v=VgyKQ5MTDFc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_04_ids_kdd99.ipynb)
* Part 14.5: The Deep Learning Technologies I am Excited About [[Video]]() [[Notebook]](t81_558_class_14_05_new_tech.ipynb)

# Part 14.3: Anomaly Detection in Keras

Anomaly detection is an unsupervised training technique that analyzes the degree to which incoming data differs from the data you used to train the neural network. Traditionally, cybersecurity experts have used anomaly detection to ensure network security. However, you can also use anomaly detection in data science to detect input that your neural network was not trained for.

There are several data sets that are commonly used to demonstrate anomaly detection. In this part, we will look at the KDD-99 dataset.

* [Stratosphere IPS Dataset](https://www.stratosphereips.org/category/dataset.html)
* [The ADFA Intrusion Detection Datasets (2013) - for HIDS](https://www.unsw.adfa.edu.au/unsw-canberra-cyber/cybersecurity/ADFA-IDS-Datasets/)
* [ITOC CDX (2009)](https://westpoint.edu/centers-and-research/cyber-research-center/data-sets)
* [KDD-99 Dataset](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html)

### Read in KDD99 Data Set

Although the KDD99 dataset is over 20 years old, it is still widely used to demonstrate Intrusion Detection Systems (IDS) and anomaly detection. KDD99 is the data set used for The Third International Knowledge Discovery and Data Mining Tools Competition, which was held in conjunction with KDD-99, The Fifth International Conference on Knowledge Discovery and Data Mining. The competition task was to build a network intrusion detector, a predictive model capable of distinguishing between "bad" connections, called intrusions or attacks, and "good" normal connections. This database contains a standard set of data to be audited, including a wide variety of intrusions simulated in a military network environment.

The following code reads the KDD99 CSV dataset into a Pandas data frame. The standard format of KDD99 does not include column names. Because of that, the program adds them.
``` import pandas as pd from tensorflow.keras.utils import get_file try: path = get_file('kddcup.data_10_percent.gz', origin='http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz') except: print('Error downloading') raise print(path) # This file is a CSV, just no CSV extension or headers # Download from: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html df = pd.read_csv(path, header=None) print("Read {} rows.".format(len(df))) # df = df.sample(frac=0.1, replace=False) # Uncomment this line to sample only 10% of the dataset df.dropna(inplace=True,axis=1) # For now, just drop NA's (rows with missing values) # The CSV file has no column heads, so add them df.columns = [ 'duration', 'protocol_type', 'service', 'flag', 'src_bytes', 'dst_bytes', 'land', 'wrong_fragment', 'urgent', 'hot', 'num_failed_logins', 'logged_in', 'num_compromised', 'root_shell', 'su_attempted', 'num_root', 'num_file_creations', 'num_shells', 'num_access_files', 'num_outbound_cmds', 'is_host_login', 'is_guest_login', 'count', 'srv_count', 'serror_rate', 'srv_serror_rate', 'rerror_rate', 'srv_rerror_rate', 'same_srv_rate', 'diff_srv_rate', 'srv_diff_host_rate', 'dst_host_count', 'dst_host_srv_count', 'dst_host_same_srv_rate', 'dst_host_diff_srv_rate', 'dst_host_same_src_port_rate', 'dst_host_srv_diff_host_rate', 'dst_host_serror_rate', 'dst_host_srv_serror_rate', 'dst_host_rerror_rate', 'dst_host_srv_rerror_rate', 'outcome' ] # display 5 rows pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) df ``` The KDD99 dataset contains many columns that define the network state over time intervals during which a cyber attack might have taken place. The column labeled "outcome" specifies either "normal," indicating no attack, or the type of attack performed. The following code displays the counts for each type of attack, as well as "normal". ``` df.groupby('outcome')['outcome'].count() ``` ### Preprocessing Before we can feed the KDD99 data into the neural network we must perform some preprocessing. We provide the following two functions to assist with preprocessing. The first function converts numeric columns into Z-Scores. The second function replaces categorical values with dummy variables. ``` # Encode a numeric column as zscores def encode_numeric_zscore(df, name, mean=None, sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name] - mean) / sd # Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue) def encode_text_dummy(df, name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = f"{name}-{x}" df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) ``` We now use these functions to preprocess each of the columns. Once the program preprocesses the data we display the results. This code converts all numeric columns to Z-Scores and all textual columns to dummy variables. 
``` # Now encode the feature vector encode_numeric_zscore(df, 'duration') encode_text_dummy(df, 'protocol_type') encode_text_dummy(df, 'service') encode_text_dummy(df, 'flag') encode_numeric_zscore(df, 'src_bytes') encode_numeric_zscore(df, 'dst_bytes') encode_text_dummy(df, 'land') encode_numeric_zscore(df, 'wrong_fragment') encode_numeric_zscore(df, 'urgent') encode_numeric_zscore(df, 'hot') encode_numeric_zscore(df, 'num_failed_logins') encode_text_dummy(df, 'logged_in') encode_numeric_zscore(df, 'num_compromised') encode_numeric_zscore(df, 'root_shell') encode_numeric_zscore(df, 'su_attempted') encode_numeric_zscore(df, 'num_root') encode_numeric_zscore(df, 'num_file_creations') encode_numeric_zscore(df, 'num_shells') encode_numeric_zscore(df, 'num_access_files') encode_numeric_zscore(df, 'num_outbound_cmds') encode_text_dummy(df, 'is_host_login') encode_text_dummy(df, 'is_guest_login') encode_numeric_zscore(df, 'count') encode_numeric_zscore(df, 'srv_count') encode_numeric_zscore(df, 'serror_rate') encode_numeric_zscore(df, 'srv_serror_rate') encode_numeric_zscore(df, 'rerror_rate') encode_numeric_zscore(df, 'srv_rerror_rate') encode_numeric_zscore(df, 'same_srv_rate') encode_numeric_zscore(df, 'diff_srv_rate') encode_numeric_zscore(df, 'srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_count') encode_numeric_zscore(df, 'dst_host_srv_count') encode_numeric_zscore(df, 'dst_host_same_srv_rate') encode_numeric_zscore(df, 'dst_host_diff_srv_rate') encode_numeric_zscore(df, 'dst_host_same_src_port_rate') encode_numeric_zscore(df, 'dst_host_srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_serror_rate') encode_numeric_zscore(df, 'dst_host_srv_serror_rate') encode_numeric_zscore(df, 'dst_host_rerror_rate') encode_numeric_zscore(df, 'dst_host_srv_rerror_rate') # display 5 rows df.dropna(inplace=True,axis=1) df[0:5] ``` To perform anomaly detection, we divide the data into two groups "normal" and the various attacks. The following code divides the data into two dataframes and displays each of these two groups' sizes. ``` normal_mask = df['outcome']=='normal.' attack_mask = df['outcome']!='normal.' df.drop('outcome',axis=1,inplace=True) df_normal = df[normal_mask] df_attack = df[attack_mask] print(f"Normal count: {len(df_normal)}") print(f"Attack count: {len(df_attack)}") ``` Next, we convert these two dataframes into Numpy arrays. Keras requires this format for data. ``` # This is the numeric feature vector, as it goes to the neural net x_normal = df_normal.values x_attack = df_attack.values ``` ### Training the Autoencoder It is important to note that we are not using the outcome column as a label to predict. This anomaly detection is unsupervised; there is no target (y) value to predict. We will train an autoencoder on the normal data and see how well it can detect that the data not flagged as "normal" represents an anomaly. Next, we split the normal data into a 25% test set and a 75% train set. The program will use the test data to facilitate early stopping. ``` from sklearn.model_selection import train_test_split x_normal_train, x_normal_test = train_test_split( x_normal, test_size=0.25, random_state=42) ``` We display the size of the train and test sets. ``` print(f"Normal train count: {len(x_normal_train)}") print(f"Normal test count: {len(x_normal_test)}") ``` We are now ready to train the autoencoder on the normal data. The autoencoder will learn to compress the data to a vector of just three numbers. 
The autoencoder should be able to also decompress with reasonable accuracy. As is typical for autoencoders, we are merely training the neural network to produce the same output values as were fed to the input layer. ``` from sklearn import metrics import numpy as np import pandas as pd from IPython.display import display, HTML import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation model = Sequential() model.add(Dense(25, input_dim=x_normal.shape[1], activation='relu')) model.add(Dense(3, activation='relu')) # size to compress to model.add(Dense(25, activation='relu')) model.add(Dense(x_normal.shape[1])) # Multiple output neurons model.compile(loss='mean_squared_error', optimizer='adam') model.fit(x_normal_train,x_normal_train,verbose=1,epochs=100) ``` ### Detecting an Anomaly We are now ready to see if the abnormal data registers as an anomaly. The first two scores show the in-sample and out of sample RMSE errors. Both of these two scores are relatively low at around 0.33 because they resulted from normal data. The much higher 0.76 error occurred from the abnormal data. The autoencoder is not as capable of encoding data that represents an attack. This higher error indicates an anomaly. ``` pred = model.predict(x_normal_test) score1 = np.sqrt(metrics.mean_squared_error(pred,x_normal_test)) pred = model.predict(x_normal) score2 = np.sqrt(metrics.mean_squared_error(pred,x_normal)) pred = model.predict(x_attack) score3 = np.sqrt(metrics.mean_squared_error(pred,x_attack)) print(f"Out of Sample Normal Score (RMSE): {score1}") print(f"Insample Normal Score (RMSE): {score2}") print(f"Attack Underway Score (RMSE): {score3}") ```
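The RMSE scores above are aggregates over whole datasets. To flag individual records as anomalous, a common follow-up is to compute a per-record reconstruction error and compare it against a threshold chosen from the normal data. The sketch below assumes the `model`, `x_normal_test`, and `x_attack` arrays defined above; the 95th-percentile threshold is an illustrative choice, not part of the original lesson.

```
import numpy as np

# Per-record reconstruction error (RMSE across the feature dimension)
def reconstruction_error(model, x):
    pred = model.predict(x)
    return np.sqrt(np.mean(np.square(pred - x), axis=1))

err_normal = reconstruction_error(model, x_normal_test)
err_attack = reconstruction_error(model, x_attack)

# Threshold at the 95th percentile of the error on held-out normal traffic (illustrative choice)
threshold = np.percentile(err_normal, 95)

print(f"Threshold: {threshold:.3f}")
print(f"Normal records flagged as anomalies: {np.mean(err_normal > threshold):.1%}")
print(f"Attack records flagged as anomalies: {np.mean(err_attack > threshold):.1%}")
```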
# Numpy Introduction ## numpy arrays ``` import numpy as np arr = np.array([1,3,4,5,6]) arr arr.shape arr.dtype arr = np.array([1,'st','er',3]) arr.dtype np.sum(arr) ``` ### Creating arrays ``` arr = np.array([[1,2,3],[2,4,6],[8,8,8]]) arr.shape arr arr = np.zeros((2,4)) arr arr = np.ones((2,4)) arr arr = np.identity(3) arr arr = np.random.randn(3,4) arr from io import BytesIO b = BytesIO(b"2,23,33\n32,42,63.4\n35,77,12") arr = np.genfromtxt(b, delimiter=",") arr ``` ### Accessing array elements #### Simple indexing ``` arr[1] arr = np.arange(12).reshape(2,2,3) arr arr[0] arr = np.arange(10) arr[5:] arr[5:8] arr[:-5] arr = np.arange(12).reshape(2,2,3) arr arr[1:2] arr = np.arange(27).reshape(3,3,3) arr arr[:,:,2] arr[...,2] ``` #### Advanced Indexing ``` arr = np.arange(9).reshape(3,3) arr arr[[0,1,2],[1,0,0]] ``` ##### Boolean Indexing ``` cities = np.array(["delhi","banglaore","mumbai","chennai","bhopal"]) city_data = np.random.randn(5,3) city_data city_data[cities =="delhi"] city_data[city_data >0] city_data[city_data >0] = 0 city_data ``` #### Operations on arrays ``` arr = np.arange(15).reshape(3,5) arr arr + 5 arr * 2 arr1 = np.arange(15).reshape(5,3) arr2 = np.arange(5).reshape(5,1) arr2 + arr1 arr1 arr2 arr1 = np.random.randn(5,3) arr1 np.modf(arr1) ``` #### Linear algebra using numpy ``` A = np.array([[1,2,3],[4,5,6],[7,8,9]]) B = np.array([[9,8,7],[6,5,4],[1,2,3]]) A.dot(B) A = np.arange(15).reshape(3,5) A.T np.linalg.svd(A) a = np.array([[7,5,-3], [3,-5,2],[5,3,-7]]) b = np.array([16,-8,0]) x = np.linalg.solve(a, b) x np.allclose(np.dot(a, x), b) ``` # Pandas ## Data frames ``` import pandas as pd d = [{'city':'Delhi',"data":1000}, {'city':'Banglaore',"data":2000}, {'city':'Mumbai',"data":1000}] pd.DataFrame(d) df = pd.DataFrame(d) ``` ### Reading in data ``` city_data = pd.read_csv(filepath_or_buffer='simplemaps-worldcities-basic.csv') city_data.head(n=10) city_data.tail() series_es = city_data.lat type(series_es) series_es[1:10:2] series_es[:7] series_es[:-7315] city_data[:7] city_data.iloc[:5,:4] city_data[city_data['pop'] > 10000000][city_data.columns[pd.Series(city_data.columns).str.startswith('l')]] city_greater_10mil = city_data[city_data['pop'] > 10000000] city_greater_10mil.rename(columns={'pop':'population'}, inplace=True) city_greater_10mil.where(city_greater_10mil.population > 15000000) df = pd.DataFrame(np.random.randn(8, 3), columns=['A', 'B', 'C']) ``` ### Operations on dataframes ``` nparray = df.values type(nparray) from numpy import nan df.iloc[4,2] = nan df df.fillna(0) columns_numeric = ['lat','lng','pop'] city_data[columns_numeric].mean() city_data[columns_numeric].sum() city_data[columns_numeric].count() city_data[columns_numeric].median() city_data[columns_numeric].quantile(0.8) city_data[columns_numeric].sum(axis = 1).head() city_data[columns_numeric].describe() city_data1 = city_data.sample(3) ``` ### Concatanating data frames ``` city_data2 = city_data.sample(3) city_data_combine = pd.concat([city_data1,city_data2]) city_data_combine df1 = pd.DataFrame({'col1': ['col10', 'col11', 'col12', 'col13'], 'col2': ['col20', 'col21', 'col22', 'col23'], 'col3': ['col30', 'col31', 'col32', 'col33'], 'col4': ['col40', 'col41', 'col42', 'col43']}, index=[0, 1, 2, 3]) df1 df4 = pd.DataFrame({'col2': ['col22', 'col23', 'col26', 'col27'], 'Col4': ['Col42', 'Col43', 'Col46', 'Col47'], 'col6': ['col62', 'col63', 'col66', 'col67']}, index=[2, 3, 6, 7]) pd.concat([df1,df4], axis=1) country_data = city_data[['iso3','country']].drop_duplicates() country_data.shape 
country_data.head() del(city_data['country']) city_data.merge(country_data, 'inner').head() ``` # Scikit-learn ``` from sklearn import datasets diabetes = datasets.load_diabetes() X = diabetes.data[:10] y = diabetes.target X[:5] y[:10] feature_names=['age', 'sex', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6'] ``` ## Scikit example regression ``` from sklearn import datasets from sklearn.linear_model import Lasso from sklearn import linear_model, datasets from sklearn.model_selection import GridSearchCV diabetes = datasets.load_diabetes() X_train = diabetes.data[:310] y_train = diabetes.target[:310] X_test = diabetes.data[310:] y_test = diabetes.target[310:] lasso = Lasso(random_state=0) alphas = np.logspace(-4, -0.5, 30) scores = list() scores_std = list() estimator = GridSearchCV(lasso, param_grid = dict(alpha=alphas)) estimator.fit(X_train, y_train) estimator.best_score_ estimator.best_estimator_ estimator.predict(X_test) ``` ## Deep Learning Frameworks ### Theano example ``` import numpy import theano.tensor as T from theano import function x = T.dscalar('x') y = T.dscalar('y') z = x + y f = function([x, y], z) f(8, 2) ``` ### Tensorflow example ``` import tensorflow as tf hello = tf.constant('Hello, TensorFlow!') sess = tf.Session() print(sess.run(hello)) ``` ### Building a neural network model with Keras ``` from sklearn.datasets import load_breast_cancer cancer = load_breast_cancer() X_train = cancer.data[:340] y_train = cancer.target[:340] X_test = cancer.data[340:] y_test = cancer.target[340:] import numpy as np from keras.models import Sequential from keras.layers import Dense, Dropout model = Sequential() model.add(Dense(15, input_dim=30, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) model.fit(X_train, y_train, epochs=20, batch_size=50) predictions = model.predict_classes(X_test) from sklearn import metrics print('Accuracy:', metrics.accuracy_score(y_true=y_test, y_pred=predictions)) print(metrics.classification_report(y_true=y_test, y_pred=predictions)) ``` ### The power of deep learning models ``` model = Sequential() model.add(Dense(15, input_dim=30, activation='relu')) model.add(Dense(15, activation='relu')) model.add(Dense(15, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) model.fit(X_train, y_train, epochs=20, batch_size=50) predictions = model.predict_classes(X_test) print('Accuracy:', metrics.accuracy_score(y_true=y_test, y_pred=predictions)) print(metrics.classification_report(y_true=y_test, y_pred=predictions)) ```
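A small portability note on the two Keras examples above: `Sequential.predict_classes` was removed in newer versions of TensorFlow/Keras. If that call fails with an attribute error, an equivalent for this binary, single-sigmoid-output model is to threshold `model.predict` directly. A sketch, assuming the `model`, `X_test`, `y_test`, and `metrics` objects defined above:

```
import numpy as np

# Equivalent of model.predict_classes(X_test) for a single sigmoid output unit
probabilities = model.predict(X_test)
predictions = (probabilities > 0.5).astype(int).ravel()

print('Accuracy:', metrics.accuracy_score(y_true=y_test, y_pred=predictions))
```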
# Solutions ## About the Data In this notebook, we will be working with two data sources: - 2018 stock data for Facebook, Apple, Amazon, Netflix, and Google (obtained using the [`stock_analysis`](https://github.com/stefmolin/stock-analysis) package) - European Centre for Disease Prevention and Control's (ECDC) [daily number of new reported cases of COVID-19 by country worldwide dataset](https://www.ecdc.europa.eu/en/publications-data/download-todays-data-geographic-distribution-covid-19-cases-worldwide) collected on September 19, 2020 via [this link](https://opendata.ecdc.europa.eu/covid19/casedistribution/csv) ## Setup ``` import pandas as pd ``` ## Exercise 1 We want to look at data for the FAANG stocks (Facebook, Apple, Amazon, Netflix, and Google), but we were given each as a separate CSV file. Make them into a single file and store the dataframe of the FAANG data as `faang`: 1. Read each file in. 2. Add a column to each dataframe indicating the ticker it is for. 3. Append them together into a single dataframe. 4. Save the result to a CSV file. ``` faang = pd.DataFrame() for ticker in ['fb', 'aapl', 'amzn', 'nflx', 'goog']: df = pd.read_csv(f'../../ch_03/exercises/{ticker}.csv') # make the ticker the first column df.insert(0, 'ticker', ticker.upper()) faang = faang.append(df) faang.to_csv('faang.csv', index=False) ``` ## Exercise 2 With `faang`, use type conversion to change the `date` column to datetime and the `volume` column to integers. Then, sort by `date` and `ticker`. ``` faang = faang.assign( date=lambda x: pd.to_datetime(x.date), volume=lambda x: x.volume.astype(int) ).sort_values( ['date', 'ticker'] ) faang.head() ``` ## Exercise 3 Find the 7 rows with the lowest value for `volume`. ``` faang.nsmallest(7, 'volume') ``` ## Exercise 4 Right now, the data is somewhere between long and wide format. Use `melt()` to make it completely long format. ``` melted_faang = faang.melt( id_vars=['ticker', 'date'], value_vars=['open', 'high', 'low', 'close', 'volume'] ) melted_faang.head() ``` ## Exercise 5 Suppose we found out there was a glitch in how the data was recorded on July 26, 2018. How should we handle this? > Given that this is a large data set (~ 1 year), we would be tempted to just drop that date and interpolate. However, some preliminary research on that date for the FAANG stocks reveals that FB took a huge tumble that day. If we had interpolated, we would have missed the magnitude of the drop. ## Exercise 6 The European Centre for Disease Prevention and Control (ECDC) provides an open dataset on COVID-19 cases called, [*daily number of new reported cases of COVID-19 by country worldwide*](https://www.ecdc.europa.eu/en/publications-data/download-todays-data-geographic-distribution-covid-19-cases-worldwide). This dataset is updated daily, but we will use a snapshot that contains data from January 1, 2020 through September 18, 2020. Clean and pivot the data so that it is in wide format: 1. Read in the `covid19_cases.csv` file. 2. Create a `date` column using the data in the `dateRep` column and the `pd.to_datetime()` function. 3. Set the `date` column as the index and sort the index. 4. Replace occurrences of `United_States_of_America` and `United_Kingdom` with `USA` and `UK`, respectively. 5. Using the `countriesAndTerritories` column, filter the data down to Argentina, Brazil, China, Colombia, India, Italy, Mexico, Peru, Russia, Spain, Turkey, the UK, and the USA. 6. 
Pivot the data so that the index contains the dates, the columns contain the country names, and the values are the case counts in the `cases` column. Be sure to fill in `NaN` values with `0`. ``` covid = pd.read_csv('../../ch_03/exercises/covid19_cases.csv').assign( date=lambda x: pd.to_datetime(x.dateRep, format='%d/%m/%Y') ).set_index('date').replace( 'United_States_of_America', 'USA' ).replace('United_Kingdom', 'UK').sort_index() covid[ covid.countriesAndTerritories.isin([ 'Argentina', 'Brazil', 'China', 'Colombia', 'India', 'Italy', 'Mexico', 'Peru', 'Russia', 'Spain', 'Turkey', 'UK', 'USA' ]) ].reset_index().pivot(index='date', columns='countriesAndTerritories', values='cases').fillna(0) ``` ## Exercise 7 In order to determine the case totals per country efficiently, we need the aggregation skills we will learn in *Chapter 4, Aggregating DataFrames*, so the ECDC data in the `covid19_cases.csv` file has been aggregated for us and saved in the `covid19_total_cases.csv` file. It contains the total number of cases per country. Use this data to find the 20 countries with the largest COVID-19 case totals. Hints: - When reading in the CSV file, pass in `index_col='index'`. - Note that it will be helpful to transpose the data before isolating the countries. ``` pd.read_csv('../../ch_03/exercises/covid19_total_cases.csv', index_col='index')\ .T.nlargest(20, 'cases').sort_values('cases', ascending=False) ```
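The transpose-then-rank pattern in Exercise 7 is easier to see on a tiny, made-up frame. The numbers below are placeholders for illustration only, not the real ECDC totals; the operations mirror what the solution above does to the aggregated file.

```
import pandas as pd

# Toy stand-in for covid19_total_cases.csv: countries as columns, one 'cases' row.
totals = pd.DataFrame(
    {'USA': [6_000_000], 'India': [5_000_000], 'Brazil': [4_000_000], 'Peru': [750_000]},
    index=['cases']
)

# Transposing gives one row per country with a 'cases' column,
# which is exactly the shape nlargest() needs to rank countries.
print(totals.T.nlargest(2, 'cases'))
```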
# Tutorial: From physics to tuned GPU kernels This tutorial is designed to show you the whole process starting from modeling a physical process to a Python implementation to creating optimized and auto-tuned GPU application using Kernel Tuner. In this tutorial, we will use [diffusion](https://en.wikipedia.org/wiki/Diffusion) as an example application. We start with modeling the physical process of diffusion, for which we create a simple numerical implementation in Python. Then we create a CUDA kernel that performs the same computation, but on the GPU. Once we have a CUDA kernel, we start using the Kernel Tuner for auto-tuning our GPU application. And finally, we'll introduce a few code optimizations to our CUDA kernel that will improve performance, but also add more parameters to tune on using the Kernel Tuner. <div class="alert alert-info"> **Note:** If you are reading this tutorial on the Kernel Tuner's documentation pages, note that you can actually run this tutorial as a Jupyter Notebook. Just clone the Kernel Tuner's [GitHub repository](http://github.com/benvanwerkhoven/kernel_tuner). Install using *pip install .[tutorial,cuda]* and you're ready to go! You can start the tutorial by typing "jupyter notebook" in the "kernel_tuner/tutorial" directory. </div> ## Diffusion Put simply, diffusion is the redistribution of something from a region of high concentration to a region of low concentration without bulk motion. The concept of diffusion is widely used in many fields, including physics, chemistry, biology, and many more. Suppose that we take a metal sheet, in which the temperature is exactly equal to one degree everywhere in the sheet. Now if we were to heat a number of points on the sheet to a very high temperature, say a thousand degrees, in an instant by some method. We could see the heat diffuse from these hotspots to the cooler areas. We are assuming that the metal does not melt. In addition, we will ignore any heat loss from radiation or other causes in this example. We can use the [diffusion equation](https://en.wikipedia.org/wiki/Diffusion_equation) to model how the heat diffuses through our metal sheet: \begin{equation*} \frac{\partial u}{\partial t}= D \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right) \end{equation*} Where $x$ and $y$ represent the spatial descretization of our 2D domain, $u$ is the quantity that is being diffused, $t$ is the descretization in time, and the constant $D$ determines how fast the diffusion takes place. In this example, we will assume a very simple descretization of our problem. We assume that our 2D domain has $nx$ equi-distant grid points in the x-direction and $ny$ equi-distant grid points in the y-direction. Be sure to execute every cell as you read through this document, by selecting it and pressing **shift+enter**. ``` nx = 1024 ny = 1024 ``` This results in a constant distance of $\delta x$ between all grid points in the $x$ dimension. Using central differences, we can numerically approximate the derivative for a given point $x_i$: \begin{equation*} \left. \frac{\partial^2 u}{\partial x^2} \right|_{x_{i}} \approx \frac{u_{x_{i+1}}-2u_{{x_i}}+u_{x_{i-1}}}{(\delta x)^2} \end{equation*} We do the same for the partial derivative in $y$: \begin{equation*} \left. 
\frac{\partial^2 u}{\partial y^2} \right|_{y_{i}} \approx \frac{u_{y_{i+1}}-2u_{y_{i}}+u_{y_{i-1}}}{(\delta y)^2} \end{equation*} If we combine the above equations, we can obtain a numerical estimation for the temperature field of our metal sheet in the next time step, using $\delta t$ as the time between time steps. But before we do, we also simplify the expression a little bit, because we'll assume that $\delta x$ and $\delta y$ are always equal to 1. \begin{equation*} u'_{x,y} = u_{x,y} + \delta t \times \left( \left( u_{x_{i+1},y}-2u_{{x_i},y}+u_{x_{i-1},y} \right) + \left( u_{x,y_{i+1}}-2u_{x,y_{i}}+u_{x,y_{i-1}} \right) \right) \end{equation*} In this formula $u'_{x,y}$ refers to the temperature field at the time $t + \delta t$. As a final step, we further simplify this equation to: \begin{equation*} u'_{x,y} = u_{x,y} + \delta t \times \left( u_{x,y_{i+1}}+u_{x_{i+1},y}-4u_{{x_i},y}+u_{x_{i-1},y}+u_{x,y_{i-1}} \right) \end{equation*} ## Python implementation We can create a Python function that implements the numerical approximation defined in the above equation. For simplicity we'll use the assumption of a free boundary condition. ``` def diffuse(field, dt=0.225): field[1:nx-1,1:ny-1] = field[1:nx-1,1:ny-1] + dt * ( field[1:nx-1,2:ny]+field[2:nx,1:ny-1]-4*field[1:nx-1,1:ny-1]+ field[0:nx-2,1:ny-1]+field[1:nx-1,0:ny-2] ) return field ``` To give our Python function a test run, we will now do some imports and generate the input data for the initial conditions of our metal sheet with a few very hot points. We'll also make two plots, one after a thousand time steps, and a second plot after another two thousand time steps. Do note that the plots are using different ranges for the colors. Also, executing the following cell may take a little while. ``` #do the imports we need import numpy from matplotlib import pyplot %matplotlib inline #setup initial conditions def get_initial_conditions(nx, ny): field = numpy.ones((ny, nx)).astype(numpy.float32) field[numpy.random.randint(0,nx,size=10), numpy.random.randint(0,ny,size=10)] = 1e3 return field field = get_initial_conditions(nx, ny) #run the diffuse function a 1000 times and another 2000 times and make plots fig, (ax1, ax2) = pyplot.subplots(1,2) for i in range(1000): field = diffuse(field) ax1.imshow(field) for i in range(2000): field = diffuse(field) ax2.imshow(field) ``` Now let's take a quick look at the execution time of our diffuse function. Before we do, we also copy the current state of the metal sheet to be able to restart the computation from this state. ``` #save the current field for later use field_copy = numpy.copy(field) #run another 1000 steps of the diffuse function and measure the time from time import time start = time() for i in range(1000): field = diffuse(field) end = time() print("1000 steps of diffuse took", (end-start)*1000.0, "ms") pyplot.imshow(field) ``` ## Computing on the GPU The next step in this tutorial is to implement a GPU kernel that will allow us to run our problem on the GPU. We store the kernel code in a Python string, because we can directly compile and run the kernel from Python. In this tutorial, we'll use the CUDA programming model to implement our kernels. > If you prefer OpenCL over CUDA, don't worry. Everything in this tutorial > applies as much to OpenCL as it does to CUDA. But we will use CUDA for our > examples, and CUDA terminology in the text. 
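Before porting the update to CUDA, it can be reassuring to check that the vectorized slicing in `diffuse()` really matches the update formula derived above. The following throwaway check is my own snippet, not part of the original tutorial; it compares the slicing pattern against a literal double loop on a small grid.

```
import numpy as np

n = 8  # small enough that the explicit loop finishes instantly
rng = np.random.default_rng(0)
u = rng.random((n, n))

def diffuse_vectorized(u, dt=0.225):
    # same slicing pattern as diffuse() above, written for an n x n array
    out = u.copy()
    out[1:-1, 1:-1] = u[1:-1, 1:-1] + dt * (
        u[1:-1, 2:] + u[2:, 1:-1] - 4 * u[1:-1, 1:-1] + u[:-2, 1:-1] + u[1:-1, :-2]
    )
    return out

def diffuse_loop(u, dt=0.225):
    # literal translation of u' = u + dt*(north + east - 4*center + west + south)
    out = u.copy()
    rows, cols = u.shape
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            out[y, x] = u[y, x] + dt * (
                u[y, x + 1] + u[y + 1, x] - 4 * u[y, x] + u[y, x - 1] + u[y - 1, x]
            )
    return out

assert np.allclose(diffuse_vectorized(u), diffuse_loop(u))
print("vectorized update matches the explicit stencil")
```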
``` def get_kernel_string(nx, ny): return """ #define nx %d #define ny %d #define dt 0.225f __global__ void diffuse_kernel(float *u_new, float *u) { int x = blockIdx.x * block_size_x + threadIdx.x; int y = blockIdx.y * block_size_y + threadIdx.y; if (x>0 && x<nx-1 && y>0 && y<ny-1) { u_new[y*nx+x] = u[y*nx+x] + dt * ( u[(y+1)*nx+x]+u[y*nx+x+1]-4.0f*u[y*nx+x]+u[y*nx+x-1]+u[(y-1)*nx+x]); } } """ % (nx, ny) kernel_string = get_kernel_string(nx, ny) ``` The above CUDA kernel parallelizes the work such that every grid point will be processed by a different CUDA thread. Therefore, the kernel is executed by a 2D grid of threads, which are grouped together into 2D thread blocks. The specific thread block dimensions we choose are not important for the result of the computation in this kernel. But as we will see will later, they will have an impact on performance. In this kernel we are using two, currently undefined, compile-time constants for `block_size_x` and `block_size_y`, because we will auto tune these parameters later. It is often needed for performance to fix the thread block dimensions at compile time, because the compiler can unroll loops that iterate using the block size, or because you need to allocate shared memory using the thread block dimensions. The next bit of Python code initializes PyCuda, and makes preparations so that we can call the CUDA kernel to do the computation on the GPU as we did earlier in Python. ``` import pycuda.driver as drv from pycuda.compiler import SourceModule #initialize PyCuda and get compute capability needed for compilation drv.init() context = drv.Device(0).make_context() devprops = { str(k): v for (k, v) in context.get_device().get_attributes().items() } cc = str(devprops['COMPUTE_CAPABILITY_MAJOR']) + str(devprops['COMPUTE_CAPABILITY_MINOR']) #allocate GPU memory u_old = drv.mem_alloc(field_copy.nbytes) u_new = drv.mem_alloc(field_copy.nbytes) #setup thread block dimensions and compile the kernel threads = (16,16,1) grid = (int(nx/16), int(ny/16), 1) block_size_string = "#define block_size_x 16\n#define block_size_y 16\n" diffuse_kernel = SourceModule(block_size_string+kernel_string, arch='sm_'+cc).get_function("diffuse_kernel") #create events for measuring performance start = drv.Event() end = drv.Event() ``` The above code is a bit of boilerplate we need to compile a kernel using PyCuda. We've also, for the moment, fixed the thread block dimensions at 16 by 16. These dimensions serve as our initial guess for what a good performing pair of thread block dimensions could look like. Now that we've setup everything, let's see how long the computation would take using the GPU. ``` #move the data to the GPU drv.memcpy_htod(u_old, field_copy) drv.memcpy_htod(u_new, field_copy) #call the GPU kernel a 1000 times and measure performance context.synchronize() start.record() for i in range(500): diffuse_kernel(u_new, u_old, block=threads, grid=grid) diffuse_kernel(u_old, u_new, block=threads, grid=grid) end.record() context.synchronize() print("1000 steps of diffuse took", end.time_since(start), "ms.") #copy the result from the GPU to Python for plotting gpu_result = numpy.zeros_like(field_copy) drv.memcpy_dtoh(gpu_result, u_new) fig, (ax1, ax2) = pyplot.subplots(1,2) ax1.imshow(gpu_result) ax1.set_title("GPU Result") ax2.imshow(field) ax2.set_title("Python Result") ``` That should already be a lot faster than our previous Python implementation, but we can do much better if we optimize our GPU kernel. And that is exactly what the rest of this tutorial is about! 
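One detail worth noting about the host code above: `grid = (int(nx/16), int(ny/16), 1)` only works because 1024 happens to be an exact multiple of the block size. The small helper below (`grid_dims` is my own name, not part of the tutorial) shows the usual ceil-division alternative; the bounds check `x>0 && x<nx-1 && y>0 && y<ny-1` inside the kernel already keeps any extra edge threads from writing out of range.

```
# Ceil-division of the domain size by the block size, so partially filled
# blocks at the domain edges are still launched.
def grid_dims(nx, ny, block):
    bx, by, _ = block
    return ((nx + bx - 1) // bx, (ny + by - 1) // by, 1)

print(grid_dims(1024, 1024, (16, 16, 1)))  # (64, 64, 1): same as int(nx/16) above
print(grid_dims(1000, 1000, (16, 16, 1)))  # (63, 63, 1): rounded up from 62.5
```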
``` #cleanup the PyCuda context context.pop() ``` Also, if you think the Python boilerplate code to call a GPU kernel was a bit messy, we've got good news for you! From now on, we'll only use the Kernel Tuner to compile and benchmark GPU kernels, which we can do with much cleaner Python code. ## Auto-Tuning with the Kernel Tuner Remember that previously we've set the thread block dimensions to 16 by 16. But how do we actually know if that is the best performing setting? That is where auto-tuning comes into play. Basically, it is very difficult to provide an answer through performance modeling and as such, we'd rather use the Kernel Tuner to compile and benchmark all possible kernel configurations. But before we continue, we'll increase the problem size, because the GPU is very likely underutilized. ``` nx = 4096 ny = 4096 field = get_initial_conditions(nx, ny) kernel_string = get_kernel_string(nx, ny) ``` The above code block has generated new initial conditions and a new string that contains our CUDA kernel using our new domain size. To call the Kernel Tuner, we have to specify the tunable parameters, in our case `block_size_x` and `block_size_y`. For this purpose, we'll create an ordered dictionary to store the tunable parameters. The keys will be the name of the tunable parameter, and the corresponding value is the list of possible values for the parameter. For the purpose of this tutorial, we'll use a small number of commonly used values for the thread block dimensions, but feel free to try more! ``` from collections import OrderedDict tune_params = OrderedDict() tune_params["block_size_x"] = [16, 32, 48, 64, 128] tune_params["block_size_y"] = [2, 4, 8, 16, 32] ``` We also have to tell the Kernel Tuner about the argument list of our CUDA kernel. Because the Kernel Tuner will be calling the CUDA kernel and measure its execution time. For this purpose we create a list in Python, that corresponds with the argument list of the `diffuse_kernel` CUDA function. This list will only be used as input to the kernel during tuning. The objects in the list should be Numpy arrays or scalars. Because you can specify the arguments as Numpy arrays, the Kernel Tuner will take care of allocating GPU memory and copying the data to the GPU. ``` args = [field, field] ``` We're almost ready to call the Kernel Tuner, we just need to set how large the problem is we are currently working on by setting a `problem_size`. The Kernel Tuner knows about thread block dimensions, which it expects to be called `block_size_x`, `block_size_y`, and/or `block_size_z`. From these and the `problem_size`, the Kernel Tuner will compute the appropiate grid dimensions on the fly. ``` problem_size = (nx, ny) ``` And that's everything the Kernel Tuner needs to know to be able to start tuning our kernel. Let's give it a try by executing the next code block! ``` from kernel_tuner import tune_kernel result = tune_kernel("diffuse_kernel", kernel_string, problem_size, args, tune_params) ``` Note that the Kernel Tuner prints a lot of useful information. To ensure you'll be able to tell what was measured in this run the Kernel Tuner always prints the GPU or OpenCL Device name that is being used, as well as the name of the kernel. After that every line contains the combination of parameters and the time that was measured during benchmarking. The time that is being printed is in milliseconds and is obtained by averaging the execution time of 7 runs of the kernel. 
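As a quick aside before digging further into what gets printed: the search space defined by `tune_params` above is still small enough to count by hand. The throwaway snippet below is not part of the tutorial; it just counts the candidate configurations before the Kernel Tuner prunes anything, and lists the ones that would exceed 1024 threads per block, a limit that is common on current NVIDIA GPUs.

```
# 5 values for block_size_x times 5 values for block_size_y = 25 candidates
space_size = 1
for values in tune_params.values():
    space_size *= len(values)
print(space_size)  # 25

# Configurations needing more than 1024 threads per block are among those
# the Kernel Tuner will skip automatically at runtime.
print([(bx, by) for bx in tune_params["block_size_x"]
       for by in tune_params["block_size_y"] if bx * by > 1024])
```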
Finally, as a matter of convenience, the Kernel Tuner also prints the best performing combination of tunable parameters. However, later on in this tutorial we'll explain how to analyze and store the tuning results using Python. Looking at the results printed above, the difference in performance between the different kernel configurations may seem very little. However, on our hardware, the performance of this kernel already varies in the order of 10%, which of course can build up to large differences in the execution time if the kernel is to be executed thousands of times. We can also see that the performance of the best configuration in this set is 5% better than our initially guessed thread block dimensions of 16 by 16. In addition, you may notice that not all possible combinations of values for `block_size_x` and `block_size_y` are among the results. For example, 128x32 is not among the results. This is because some configurations require more threads per thread block than allowed on our GPU. The Kernel Tuner checks the limitations of your GPU at runtime and automatically skips over configurations that use too many threads per block. It will also do this for kernels that cannot be compiled because they use too much shared memory, and likewise for kernels that use too many registers to be launched at runtime. If you'd like to know which configurations were skipped automatically, you can pass the optional parameter `verbose=True` to `tune_kernel`. However, knowing the best performing combination of tunable parameters becomes even more important when we start to further optimize our CUDA kernel. In the next section, we'll add a simple code optimization and show how this affects performance. ## Using Shared Memory Shared memory is a special type of memory available in CUDA. It can be used by threads within the same thread block to exchange and share values, and it is in fact one of the very few ways for threads to communicate on the GPU. The idea is that we'll try to improve the performance of our kernel by using shared memory as a software-controlled cache. There are already caches on the GPU, but most GPUs only cache accesses to global memory in L2. Shared memory is closer to the multiprocessors where the thread blocks are executed, comparable to an L1 cache. However, because there are also hardware caches, the performance improvement from this step is not expected to be that great. The more fine-grained control that we get by using a software-managed cache, rather than a hardware-implemented cache, comes at the cost of some instruction overhead. In fact, performance is quite likely to degrade a little. However, this intermediate step is necessary for the next optimization step we have in mind.
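The kernel in the next code block stages a `(block_size_y+2) x (block_size_x+2)` tile of the input into shared memory, where the extra two rows and columns are the halo around the block. The back-of-the-envelope helper below is my own, with an assumed 48 KB per-block budget that is typical but not universal; it shows the footprint stays tiny for the block sizes we tune over, so shared memory capacity will not be what limits this kernel.

```
def shared_mem_bytes(block_size_x, block_size_y, halo=2, bytes_per_float=4):
    # one float per element of sh_u[block_size_y+2][block_size_x+2]
    return (block_size_y + halo) * (block_size_x + halo) * bytes_per_float

for bx, by in [(16, 16), (32, 2), (128, 32)]:
    print(bx, "x", by, ":", shared_mem_bytes(bx, by), "bytes")  # all far below ~48 KB
```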
``` kernel_string = """ #define nx %d #define ny %d #define dt 0.225f __global__ void diffuse_kernel(float *u_new, float *u) { int tx = threadIdx.x; int ty = threadIdx.y; int bx = blockIdx.x * block_size_x; int by = blockIdx.y * block_size_y; __shared__ float sh_u[block_size_y+2][block_size_x+2]; #pragma unroll for (int i = ty; i<block_size_y+2; i+=block_size_y) { #pragma unroll for (int j = tx; j<block_size_x+2; j+=block_size_x) { int y = by+i-1; int x = bx+j-1; if (x>=0 && x<nx && y>=0 && y<ny) { sh_u[i][j] = u[y*nx+x]; } } } __syncthreads(); int x = bx+tx; int y = by+ty; if (x>0 && x<nx-1 && y>0 && y<ny-1) { int i = ty+1; int j = tx+1; u_new[y*nx+x] = sh_u[i][j] + dt * ( sh_u[i+1][j] + sh_u[i][j+1] -4.0f * sh_u[i][j] + sh_u[i][j-1] + sh_u[i-1][j] ); } } """ % (nx, ny) result = tune_kernel("diffuse_kernel", kernel_string, problem_size, args, tune_params) ``` ## Tiling GPU Code One very useful code optimization is called tiling, sometimes also called thread-block-merge. You can look at it in this way, currently we have many thread blocks that together work on the entire domain. If we were to use only half of the number of thread blocks, every thread block would need to double the amount of work it performs to cover the entire domain. However, the threads may be able to reuse part of the data and computation that is required to process a single output element for every element beyond the first. This is a code optimization because effectively we are reducing the total number of instructions executed by all threads in all thread blocks. So in a way, were are condensing the total instruction stream while keeping the all the really necessary compute instructions. More importantly, we are increasing data reuse, where previously these values would have been reused from the cache or in the worst-case from GPU memory. We can apply tiling in both the x and y-dimensions. This also introduces two new tunable parameters, namely the tiling factor in x and y, which we will call `tile_size_x` and `tile_size_y`. This is what the new kernel looks like: ``` kernel_string = """ #define nx %d #define ny %d #define dt 0.225f __global__ void diffuse_kernel(float *u_new, float *u) { int tx = threadIdx.x; int ty = threadIdx.y; int bx = blockIdx.x * block_size_x * tile_size_x; int by = blockIdx.y * block_size_y * tile_size_y; __shared__ float sh_u[block_size_y*tile_size_y+2][block_size_x*tile_size_x+2]; #pragma unroll for (int i = ty; i<block_size_y*tile_size_y+2; i+=block_size_y) { #pragma unroll for (int j = tx; j<block_size_x*tile_size_x+2; j+=block_size_x) { int y = by+i-1; int x = bx+j-1; if (x>=0 && x<nx && y>=0 && y<ny) { sh_u[i][j] = u[y*nx+x]; } } } __syncthreads(); #pragma unroll for (int tj=0; tj<tile_size_y; tj++) { int i = ty+tj*block_size_y+1; int y = by + ty + tj*block_size_y; #pragma unroll for (int ti=0; ti<tile_size_x; ti++) { int j = tx+ti*block_size_x+1; int x = bx + tx + ti*block_size_x; if (x>0 && x<nx-1 && y>0 && y<ny-1) { u_new[y*nx+x] = sh_u[i][j] + dt * ( sh_u[i+1][j] + sh_u[i][j+1] -4.0f * sh_u[i][j] + sh_u[i][j-1] + sh_u[i-1][j] ); } } } } """ % (nx, ny) ``` We can tune our tiled kernel by adding the two new tunable parameters to our dictionary `tune_params`. We also need to somehow tell the Kernel Tuner to use fewer thread blocks to launch kernels with `tile_size_x` or `tile_size_y` larger than one. For this purpose the Kernel Tuner's `tune_kernel` function supports two optional arguments, called grid_div_x and grid_div_y. 
These are the grid divisor lists, which are lists of strings containing all the tunable parameters that divide a certain grid dimension. So far, we have been using the default settings for these, in which case the Kernel Tuner only uses the block_size_x and block_size_y tunable parameters to divide the problem_size. Note that the Kernel Tuner will replace the values of the tunable parameters inside the strings and use the product of the parameters in the grid divisor list to compute the grid dimension rounded up. You can even use arithmetic operations inside these strings, as they will be evaluated. As such, we could have used ``["block_size_x*tile_size_x"]`` to get the same result. We are now ready to call the Kernel Tuner again and tune our tiled kernel. Let's execute the following code block; note that it may take a while, as the number of kernel configurations that the Kernel Tuner will try has just been increased by a factor of 9! ``` tune_params["tile_size_x"] = [1,2,4] #add tile_size_x to the tune_params tune_params["tile_size_y"] = [1,2,4] #add tile_size_y to the tune_params grid_div_x = ["block_size_x", "tile_size_x"] #tile_size_x impacts grid dimensions grid_div_y = ["block_size_y", "tile_size_y"] #tile_size_y impacts grid dimensions result = tune_kernel("diffuse_kernel", kernel_string, problem_size, args, tune_params, grid_div_x=grid_div_x, grid_div_y=grid_div_y) ``` We can see that the number of kernel configurations tried by the Kernel Tuner is growing rather quickly. Also, the best performing configuration is quite a bit faster than the best kernel before we started optimizing. On our GTX Titan X, the execution time went from 0.72 ms to 0.53 ms, a performance improvement of 26%! Note that the thread block dimensions for this kernel configuration are also different. Without optimizations the best performing kernel used a thread block of 32x2; after we've added tiling, the best performing kernel uses thread blocks of size 64x4, which is four times as many threads! Also, the amount of work increased with tiling factors of 2 in the x-direction and 4 in the y-direction, increasing the amount of work per thread block by a factor of 8. The difference in the area processed per thread block between the naive and the tiled kernel is a factor of 32. However, there are actually several kernel configurations that come close. The following Python code prints all instances with an execution time within 5% of the best performing configuration. ``` best_time = min(result[0], key=lambda x:x['time'])['time'] for i in result[0]: if i["time"] < best_time*1.05: print("".join([k + "=" + str(v) + ", " for k,v in i.items()])) ``` ## Storing the results While it's nice that the Kernel Tuner prints the tuning results to stdout, it's not that great if we'd have to parse what is printed to get the results. That is why `tune_kernel()` returns a data structure that holds all the results. We've actually already used this data in the above bit of Python code. `tune_kernel` returns a list of dictionaries, where each benchmarked kernel is represented by a dictionary containing the tunable parameters for that particular kernel configuration and one more entry called 'time'. The list of dictionaries format is very flexible and can easily be converted to other easy-to-parse formats, like json or csv, for further analysis. You can execute the following code block to store the tuning results to both a json and a csv file (if you have Pandas installed).
``` #store output as json import json with open("tutorial.json", 'w') as fp: json.dump(result[0], fp) #store output as csv from pandas import DataFrame df = DataFrame(result[0]) df.to_csv("tutorial.csv") ```
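And here is a sketch of how those files could be read back in later for analysis, assuming the two files written above are sitting in the working directory:

```
#load the stored results back in and rank configurations by measured time
import json
from pandas import DataFrame

with open("tutorial.json") as fp:
    stored = json.load(fp)

print(DataFrame(stored).sort_values("time").head())  # fastest configurations first
```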
# Session-1: An introduction to Pandas ------------------------------------------------------ *Introduction to Data Science & Machine Learning* *Pablo M. Olmos olmos@tsc.uc3m.es* ------------------------------------------------------ When dealing with numeric matrices and vectors in Python, Numerical Python ([Numpy](https://docs.scipy.org/doc/numpy-dev/user/quickstart.html NumPy)) makes life a lot easier. Doing data analysis directly with NumPy can be problematic, as many different data types have to jointly managed. Fortunately, some nice folks have written the **[Python Data Analysis Library](https://pandas.pydata.org/)** (a.k.a. pandas). Pandas is an open sourcelibrary providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language In this tutorial, we'll go through the basics of pandas using a database of house prices provided by [Kaggle](https://www.kaggle.com/). Pandas has a lot of functionality, so we'll only be able to cover a small fraction of what you can do. Check out the (very readable) [pandas docs](http://pandas.pydata.org/pandas-docs/stable/) if you want to learn more. ### Acknowledgment: I have compiled this tutorial by putting together a few very nice blogs and posts I found on the web. All credit goes to them: - [An introduction to Pandas](http://synesthesiam.com/posts/an-introduction-to-pandas.html#handing-missing-values) - [Using iloc, loc, & ix to select rows and columns in Pandas DataFrames](https://www.shanelynn.ie/select-pandas-dataframe-rows-and-columns-using-iloc-loc-and-ix/) ## Getting Started Let's import the libray and check the current installed version ``` import pandas as pd import matplotlib.pyplot as plt import numpy as np #The following is required to print the plots inside the notebooks %matplotlib inline pd.__version__ ``` If you are using Anaconda and you want to update pandas to the latest version, you can use either the [package manager](https://docs.anaconda.com/anaconda/navigator/tutorials/manage-packages) in Anaconda Navigator, or type in a terminal window ``` > conda update pandas ``` Next lets read the housing price database, which is provided by [Kaggle in this link](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data). Because it's in a CSV file, we can use pandas' `read_csv` function to pull it directly into the basic data structure in pandas: a **DataFrame**. ``` data = pd.read_csv("house_prices_train.csv") ``` We can visualize the first rows of the Dataframe `data` ``` data.head() ``` You have a description of all fields in the [data description file](./data_description.txt). You can check the size of the Dataframe and get a list of the column labels as follows: ``` print("The dataframe has %d entries, and %d attributes (columns)\n" %(data.shape[0],data.shape[1])) print("The labels associated to each of the %d attributes are:\n " %(data.shape[1])) label_list = list(data.columns) print(label_list) ``` Columns can be accessed in two ways. The first is using the DataFrame like a dictionary with string keys: ``` data[['SalePrice']].head(10) #This shows the first 10 entries in the column 'SalePrice' ``` You can get multiple columns out at the same time by passing in a list of strings. ``` simple_data = data[['LotArea','1stFlrSF','2ndFlrSF','SalePrice']] #Subpart of the dataframe. # Watch out! This is not a different copy! 
simple_data.tail(10) #.tail() shows the last 10 entries ``` ## Operations with columns We can easily [change the name](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html) of the columns ``` data.rename(index=str,columns={"LotArea":"Area"}, inplace=True) ``` Try to rename the column name directly in `simple.data`, what do you get? There are a lot of useful methods that can be applied over columns. Most of pandas' methods will happily ignore missing values like `NaN`. We will talk about **missing data** later. First, since we rename one column name, lets recompute the short (referenced) data-frame `simple_data`` ``` simple_data = data[['Area','1stFlrSF','2ndFlrSF','SalePrice']] print(simple_data.head(5)) print(simple_data['Area'].mean()) print(simple_data['Area'].std()) ``` Some methods, like plot() and hist() produce plots using [matplotlib](https://matplotlib.org/). We'll go over plotting in more detail later. ``` simple_data[['Area']][:100].plot() simple_data[['Area']].hist() ``` ## Operations with `apply()` Methods like `sum()` and `std()` work on entire columns. We can run our own functions across all values in a column (or row) using `apply()`. To get an idea about how this works, assume we want to convert the variable ['Area'] into squared meters instead of square foots. First, we create a conversion function. ``` def sfoot_to_smeter(x): return (x * 0.092903) sfoot_to_smeter(1) #just checking everything is correct ``` Using the `apply()` method, which takes an [anonymous function](https://docs.python.org/2/reference/expressions.html#lambda), we can apply `sfoot_to_smeter` to each value in the column. We can now either overwrite the data in the column 'Area' or create a new one. We'll do the latter in this case. ``` # Recall! data['Area'] is not a DataFrama, but a Pandas Series (another data object with different attributes). In order # to index a DataFrame with a single column, you should use double [[]], i.e., data[['Area']] data['Area_m2'] = data[['Area']].apply(lambda d: sfoot_to_smeter(d)) simple_data = data[['Area','Area_m2', '1stFlrSF','2ndFlrSF','SalePrice']] simple_data.head() ``` What do you get if you try to apply the transformation directly over `simple_data`? What do you think the problem is? Now, we do not even need the column `Area`(in square foot), lets remove it. ``` data.drop('Area',axis=1,inplace=True) data.head(5) ``` # Indexing, iloc, loc There are [multiple ways](http://pandas.pydata.org/pandas-docs/stable/indexing.html#different-choices-for-indexing) to select and index rows and columns from Pandas DataFrames. There’s three main options to achieve the selection and indexing activities in Pandas, which can be confusing. The three selection cases and methods covered in this post are: - Selecting data by row numbers (.iloc) - Selecting data by label or by a conditional statment (.loc) - Selecting in a hybrid approach (.ix) (now Deprecated in Pandas 0.20.1) We will cover the first two ### Selecting rows using `iloc()` The [`iloc`](http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.iloc.html) indexer for Pandas Dataframe is used for integer-location based indexing / selection by position. The iloc indexer syntax is `data.iloc[<row selection>, <column selection>]`. “iloc” in pandas is used to select rows and columns by number, **in the order that they appear in the data frame**. 
You can imagine that each row has a row number from 0 to the total rows (data.shape[0]) and iloc[] allows selections based on these numbers. The same applies for columns (ranging from 0 to data.shape[1] ) ``` simple_data.iloc[[3,4],0:3] ``` Note that `.iloc` returns a Pandas Series when one row is selected, and a Pandas DataFrame when multiple rows are selected, or if any column in full is selected. To counter this, pass a single-valued list if you require DataFrame output. ``` print(type(simple_data.iloc[:,0])) #PandaSeries print(type(simple_data.iloc[:,[0]])) #DataFrame # To avoid confusion, work always with DataFrames! ``` When selecting multiple columns or multiple rows in this manner, remember that in your selection e.g.[1:5], the rows/columns selected will run from the first number to one minus the second number. e.g. [1:5] will go 1,2,3,4., [x,y] goes from x to y-1. In practice, `iloc()` is sheldom used. 'loc()' is way more handly. ### Selecting rows using `loc()` The Pandas `loc()` indexer can be used with DataFrames for two different use cases: - Selecting rows by label/index - Selecting rows with a boolean / conditional lookup #### Selecting rows by label/index *Important* Selections using the `loc()` method are based on the index of the data frame (if any). Where the index is set on a DataFrame, using <code>df.set_index()</code>, the `loc()` method directly selects based on index values of any rows. For example, setting the index of our test data frame to the column 'OverallQual' (Rates the overall material and finish of the house): ``` data.set_index('OverallQual',inplace=True) data.head(5) ``` Using `.loc()` we can search for rows with a specific index value ``` good_houses = data.loc[[8,9,10]] #List all houses with rating above 8 good_houses.head(10) ``` We can sort the dataframe according to index ``` data.sort_index(inplace=True,ascending=False) #Again, what is what you get if soft Dataframe good_houses directly? good_houses.head(10) ``` #### Boolean / Logical indexing using .loc [Conditional selections](http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing) with boolean arrays using `data.loc[<selection>]` is a common method with Pandas DataFrames. With boolean indexing or logical selection, you pass an array or Series of `True/False` values to the `.loc` indexer to select the rows where your Series has True values. For example, the statement data[‘first_name’] == ‘Antonio’] produces a Pandas Series with a True/False value for every row in the ‘data’ DataFrame, where there are “True” values for the rows where the first_name is “Antonio”. These type of boolean arrays can be passed directly to the .loc indexer as so: ``` good_houses.loc[good_houses['PoolArea']>0] #How many houses with quality above or equal to 8 have a Pool ``` As before, a second argument can be passed to .loc to select particular columns out of the data frame. ``` good_houses.loc[good_houses['PoolArea']>0,['GarageArea','GarageCars']] #Among those above, we focus on the area of the # garage and how many cars can fit within ``` Even an anonymous function with the `.apply()` method can be used to generate the series of True/False indexes. For instance, select good houses with less than 10 years. 
``` def check_date(current_year,year_built,threshold): return (current_year-year_built) <= threshold good_houses.loc[good_houses['YearBuilt'].apply(lambda d: check_date(2018, d,10))] ``` Using the above filtering, we can add our own column to the DataFrame to create an index that is 1 for houses that have swimming pool and less than 30 years. ``` data['My_index'] = 0 # We create new column with default vale data.loc[(data['YearBuilt'].apply(lambda d: check_date(2018, d,30))) & (data['PoolArea']>0),'My_index'] = 1 data.loc[data['My_index'] == 1] ``` ## Handling Missing Data Pandas considers values like `NaN` and `None` to represent missing data. The `pandas.isnull` function can be used to tell whether or not a value is missing. Let's use `apply()` across all of the columns in our DataFrame to figure out which values are missing. ``` empty = data.apply(lambda col: pd.isnull(col)) empty.head(5) #We get back a boolean Dataframe with 'True' whenever we have a missing data (either Nan or None) ``` There are multiple ways of handling missing data, we will talk about this during the course. Pandas provides handly functions to easily work with missing data, check [this post](https://chrisalbon.com/python/data_wrangling/pandas_missing_data/) for examples. ## More about plotting with `matplotlib()` library You should consult [matplotlib documentation](https://matplotlib.org/index.html) for tons of examples and options. ``` plt.plot(data['Area_m2'],data['SalePrice'],'ro') plt.plot(good_houses['Area_m2'],good_houses['SalePrice'],'*') plt.legend(['SalePrice (all data)','SalePrince (good houses)']) plt.xlabel('Area_m2') plt.grid(True) plt.xlim([0,7500]) data.sort_values(['SalePrice'],ascending=True,inplace=True) #We order the data according to SalePrice # Create axes fig, ax = plt.subplots() ax2 = ax.twinx() ax.loglog(data['SalePrice'], data['Area_m2'], color='blue',marker='o') ax.set_xlabel('SalePrice (logscale)') ax.set_ylabel('Area_m2 (logscale)') ax2.semilogx(data['SalePrice'],data[['GarageArea']].apply(lambda d: sfoot_to_smeter(d)), color='red',marker='+',linewidth=0) ax2.set_ylabel('Garage Area (logscale)') ax.set_title('A plot with two scales') ``` ## Getting data out Writing data out in pandas is as easy as getting data in. To save our DataFrame out to a new csv file, we can just do this: ``` data.to_csv("modified_data.csv") ``` There's also support for reading and writing [Excel files](http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files), if you need it. Also, creating a Numpy array is straightforward: ``` data_array = np.array(good_houses) print(data_array.shape) ```
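One caveat about that last conversion: the house-prices table mixes numeric and text columns (assuming the usual Kaggle fields such as `MSZoning` are present), so `np.array(good_houses)` produces an object-dtype array. If a purely numeric array is wanted, selecting the numeric columns first is a simple alternative:

```
# keep only the numeric columns before converting, so NumPy gets a numeric array
# instead of dtype=object
numeric_array = np.array(good_houses.select_dtypes(include=[np.number]))
print(numeric_array.dtype, numeric_array.shape)
```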
``` import datetime as dt import numpy as np import math import matplotlib.pyplot as plt from __future__ import print_function import tensorflow.compat.v1 as tf tf.disable_v2_behavior() import numpy as np import matplotlib.pyplot as plt %matplotlib inline from multiprocess import Process from multiprocess import Queue import multiprocess as mp import time import random def wrapper(func, *args, **kwargs): def wrapped(): return func(*args, **kwargs) return wrapped class Network: def __init__(self): self.layers = [] self.tn = Neuron(100) def add(self, layer): self.layers.append(layer) def full_connect(self): for i in range(len(self.layers)): for neuron in self.layers[i].neurons: if i + 1 < len(self.layers): for r_neuron in self.layers[i + 1].neurons: neuron.add_recipient(r_neuron) else: neuron.add_recipient(self.tn) def train(self, full_inputs, full_outputs, time_per_image, worker_amount): results = [] if len(full_inputs) != len(full_outputs): print('you have done something wrong!') return None for layer in self.layers: layer.set_update(True) for i in range(len(full_inputs)): self.layers[-1].set_training_outputs(full_outputs[i]) for _ in range(time_per_image): self.layers[0].receive_inputs(full_inputs[i]) for j in range(1, len(self.layers) - 1): self.layers[j].fire() results.append(self.layers[-1].fire()) print(results[-1], full_outputs[i]) return results def predict(self, full_inputs, time_per_image, worker_amount): results = [] for layer in self.layers: layer.set_update(False) for i in range(len(full_inputs)): for _ in range(time_per_image): self.layers[0].receive_inputs(full_inputs[i]) for j in range(1, len(self.layers) - 1): self.layers[j].fire() results.append(self.layers[-1].fire()) print(results[-1], full_outputs[i]) return results def receive_inputs(self, inputs): if len(inputs) != len(self.layers[0]): print('input len != len of first layer') self.layers[0].receive_input(inputs) def set_training_outputs(self, desired_outputs): if len(desired_outputs) != len(self.layers[-1]): print('input len != len of first layer') self.layers[0].set_training_outputs(set_training_outputs) def sparse_connect(self): pass class Layer: def __init__(self, size): self.neurons = [Neuron(i) for i in range(size)] def fire(self): firing = [] for neuron in self.neurons: fs = [func() for func in neuron.attempt_fire()] firing.append(len(fs)) print(fs) return [1 if f > 0 else 0 for f in firing] def receive_inputs(self, inputs): for i in range(len(inputs)): f = [func() for func in self.neurons[i].receive_input(inputs[i])] def set_training_outputs(self, outputs): for i in range(len(outputs)): self.neurons[i].set_outputs(outputs[i]) def set_update(self, b): for neuron in self.neurons: neuron.set_weight_update(b) class Neuron: def __init__(self, num): # a unique identifier self.num = num # When the neuron will fire self.action_potential = 105.0 # The membrane potential when it was last checked self.membrane_potential = 0 # references to neurons that will increase this one's potential self.incomming_connections = {} # references to neurons that will be increased by this one self.outgoing_connections = [] # resting potential self.resting_potential = 0 # leak ammount self.leak = 0.2 # last fire self.last_fire = dt.datetime.now() # how long the neron has to physically wait befor it can fire again. 
self.fire_rate = dt.timedelta(microseconds=500) # expected output self.expected_output = None self.update_weights = True def set_weight_update(self, b): self.update_weights = b def set_outputs(self, output): self.expected_output = output def add_recipient(self, neuron): """ Add a connection to a new neuron. """ self.outgoing_connections.append(neuron) def leak(self): """ A function that will cause a constant leakage of membrane potential. This keeps the neuron near "equilibrium". """ if self.membrane_potential < self.resting_potential: # If we are below the resting potential we rise to it. self.membrane_potential += self.leak else: # If we are above the resting potential we lower to it. self.membrane_potential -= self.leak def attempt_fire(self): if (dt.datetime.now() - self.last_fire) > self.fire_rate and self.membrane_potential >= self.action_potential: return self.fire() return [] def fire(self): self.last_fire = dt.datetime.now() self.membrane_potential = -10 return [wrapper(c.receive_input, self.num) for c in self.outgoing_connections] def receive_input(self, amount): self.membrane_potential += amount if (dt.datetime.now() - self.last_fire) > self.fire_rate and self.membrane_potential >= self.action_potential: return self.fire() return [] def receive_fire(self, neuron): """To be used with multiprocessing """ weight = self.incomming_connections.get(neuron) if weight is not None: self.membrane_potential += weight else: # setting initial weight self.incomming_connections[neuron] = random.random() * 10 + 10 self.membrane_potential += self.incomming_connections[neuron] if (dt.datetime.now() - self.last_fire) > self.fire_rate and self.membrane_potential >= self.action_potential: if self.expected_output != None: self.output_queue.put(self.num) if self.expected_output == 0 and self.update_weights: self.incomming_connections[neuron] -= 1.5 if (self.update_weights): self.incomming_connections[neuron] += 1 return self.fire() if self.update_weights: self.incomming_connections[neuron] -= 0.1 return [] neuron0 = Neuron(0) neuron1 = Neuron(1) neuron0.add_recipient(neuron1) now = dt.datetime.now() for i in range(1000): # print(neuron0.membrane_potential) if len(neuron0.receive_input(1)) > 0: print(f'fire {dt.datetime.now() - now}') data = [[random.random() for __ in range(784)] for _ in range(10)] labels = [[1 if random.random() > 0.5 else 0 for __ in range(2)] for _ in range(10)] model = Network() model.add(Layer(784)) model.add(Layer(2)) model.full_connect() print(model.train(data, labels, 500, 6)) ```
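The classes above boil down to an integrate-and-fire rule: inputs accumulate in `membrane_potential` until it crosses `action_potential`, the neuron fires, and the potential is reset. The stripped-down loop below is a standalone toy that mirrors the defaults used in `Neuron` (threshold 105, reset to -10, a constant input of 1 per step like the demo above); it is not a drop-in replacement for the classes, just a way to see the threshold behaviour in isolation.

```
# minimal integrate-and-fire loop with the same threshold and reset as Neuron
threshold, reset_value, potential = 105.0, -10.0, 0.0
fired_at = []
for step in range(300):
    potential += 1.0
    if potential >= threshold:
        fired_at.append(step)
        potential = reset_value
print(fired_at)  # [104, 219]: first spike after 105 inputs, then one every 115 steps
```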
# Day 2 - Getting Data with Python Something about automation and scripts Something about exceptions #### Let's try a challenge! Error handling - or having a computer program anticipate and respond to errors created by other functions - is a big part of programming. To give you a little more practice with this, we're going to have you team up with person sitting next to you and try challenge B in the challenges directory. ## Introduction to the interwebs A vast amount of data exists on the web and is now publicly available. In this section, we give an overview of popular ways to retrieve data from the web, and walk through some important concerns and considerations. ![An extremely simplified model of the web](images/Client-server-model.svg.png) The internet follows a client-server architecture, where clients (e.g. you) ask servers to do things. The most common way that you experience this is through a browser, where you enter a URL and a server sends your computer a page for your browser to render. Most of what you think about as the internet are stored documents (web pages) that are given out to anyone who asks. You probably also have a program on your computer like Outlook or Thunderbird that sends emails to a server and asks it to forward them along to someone else. You may also have proprietary software that's protected by a license, and needs to connect to a license server to verify that you are an authenticated user. Ultimately, the internet is just connecting to computers that you don't own and passing data back and forth. Because the data transfer protocol (`http`) and typical data formats (`html`) are not native to Python, we're going to leave Python just for a little bit. ## Intro to HTTP requests You can view the request sent by your browser by: 1) Opening a new tab in your browser 2) Enabling developer tools (__View -> Developer -> Developer Tools in Chrome__ and __Tools -> Web Developer -> Toggle Tools in Firefox__) 3) Loading or reloading a web page (etc. www.google.com) 4) Navigating to the Network tab in the panel that appears at the bottom of the page. ![Chrome Examine Request Example](images/chrome_request.png) ![Firefox Examine Request Example](images/firefox_request.png) These requests you send follow the HTTP protocol (Hypertext Transfer Protocol), part of which defines the information (along with the format) the server needs to receive to return the right resources. Your HTTP request contains __headers__, which contains information that the server needs to know in order to return the right information to you. But we're not here to wander around the web (you probably do this a lot, all on your own). You're here because you want Python to do it for you. In order to get web pages, we're going to use a python library called `requests`, which takes a lot of the fuss out of contacting servers. ``` import requests r = requests.get("http://en.wikipedia.org/wiki/Main_Page") ``` This response object contains various information about the request you sent to the server, the resources returned, and information about the response the server returned to you, among other information. These are accessible through the <i>__request__</i> attribute, the <i>__content__</i> attribute and the <i>__headers__</i> attribute respectively, which we'll each examine below. 
``` type(r.request), type(r.content), type(r.headers) ``` Here, we can see that __request__ is an object with a custom type, __content__ is a str value and __headers__ is an object with "dict" in its name, suggesting we can interact with it like we would with a dictionary. The content is the actual resource returned to us - let's take a look at the content first before examining the request and response objects more carefully. (We select the first 1000 characters b/c of the display limits of Jupyter/python notebook.) ``` from pprint import pprint pprint(r.content[0:1000]) ``` The content returned is written in HTML (__H__yper__T__ext __M__arkup __L__anguage), which is the default format in which web pages are returned. The content looks like gibberish at first, with little to no spacing. The reason for this is that some of the formatting rules for the document, like its hierarchical structure, are saved in text along with the text in the document. > note - this is called the __D__ocument __O__bject __M__odel (DOM) and is the same way that markdown and LaTeX documents are written If you save a web page as a ".html" file, and open the file in a text editor like Notepad++ or Sublime Text, this is the same format you'll see. Opening the file in a browser (i.e. by double-clicking it) gives you the Google home page you are familiar with. You can inspect the information you sent to Wikipedia long with your request ``` r.request.headers ``` Along with the additional info that Wikipedia sent back: ``` r.headers ``` But you will probably not ever need this information. Most of what you'll be doing is sending what are called `GET` requests (this is why we typed in `requests.get` above). This is an `HTTP` protocol for asking a server to send you some stuff. We asked Wikipedia to `GET` us their main page. Things like queries (searching Wikipedia) also fall under `GET`. From time to time, you may also want to send information to a server (we'll do this later today). These are called `POST` requests, because you are posting something to the server (and not asking for data back). > note - From the server's perspective, the request it receives from your browser is not so different from the request received from your console (though some servers use a range of methods to determine if the request comes from a "valid" person using a browser, versus an automated program.) To have a look at the content of the web page, we can ask for the content: ``` r.content[:1000] ``` which gives us the response in bytes, or text: ``` r.text[:1000] ``` ## Parsing HTML in Python Trying to parse this `str` by hand is basically a nightmare. Instead, we'll use a Python library called Beautiful Soup to turn it into something that is still confusing, but less of a nightmare. ``` from bs4 import BeautifulSoup page = BeautifulSoup(r.content) page ``` Beautiful Soup creates a linked tree, where the root of the tree is the whole HTML document. It has children, which are all the elements of the HTML document. Each of those has children, which are any elements they have. Each element of the tree is aware of its parent and children. You probably don't want to iterate through each child of the whole HTML document - you want a specific thing or things in it. In some cases, you want to seach for html tags. 
Common tages include: | tag | function | |------------|------------------------------------------------------------| | `<title>` | The title of the web page (shows up in your browser header) | | `<meta>` | Information about the web page that is not shown to the user | | `<a>` | Links to other web pages | | `<p>` | Paragraph of text | In other cases, you want to look for IDs. These are optional information added to a tag to help developers or other code on the web page know which tag is for which purpose. Unlike tags, these are not standardized, so they will change from site to site and author to author. They will look something like: `<div id="banner" class="MyBanner">` With the advent of CSS (__C__ascading __S__tyle __S__heets), it is also common for people to define their own HTML styling tags. So, while things like lists (`<ol>`) and tables (`<table>`, `<tr>`, and `<td>`) are in the HTML specification, it's not safe to assume they'll be used when you expect. As a general strategy, when web scraping, you should have the page you want to scrape open in a browser with either the Developer Tools window open, or the HTML source displayed. We can pull out elements by tag with: ``` page.p ``` This is grabbing the paragraph tag from the page. If we want the first link from the first paragraph, we can try: ``` page.p.a ``` But what if we want all the links? We are going to use a method of bs4's elements called `find_all`. ``` page.p.findAll('a') ``` What if you want all the elements in that paragraph, and not just the links? bs4 has an iterator for children: ``` for element in page.p.children: print(element) ``` HTML elements can be nested, but children only iterates at one level below the element. If you want everything, you can iterate with `descendants` ``` for element in page.p.descendants: print(element) ``` This splits out formatting tags that we *probably* don't care about, like bold-faced text, and so we probably won't use it again. In reality, you won't be inspecting things yourself, so you'll want to get in the habit of using your knowledge from day 2 about looping and control structures to make decisions for you. For example, what if we wanted to look at every link in the page, then print it's neighbor but only if the link is not to a media file? We could do something like: ``` for link in page.find_all('a'): if link.attrs.get('class') != 'mw-redirect': print(link.find_next()) ``` #### Time for a challenge! To make sure that everyone is on the same page (and to give you a little more practice dealing with HTML), let's partner up with the person next to you and try challenge A, on using html, in the challenges directory. # Creating data with web APIs Most people who think they want to do web scraping actually want to pull data down from site-supplied APIs. Using an API is better in almost every way, and really the only reason to scrape data is if: 1. The website was constructed in the 90s and does not have an API; or, 2. You are doing something illegal If [LiveJournal has an API](http://dev.livejournal.com/), the website you are interested in probably does too. ## What is an API? **API** is shorthand for **A**pplication **P**rogramming **I**nterface, which is in turn computer-ese for a middleman. Think about it this way. You have a bunch of things on your computer that you want other people to be able to look at. Some of them are static documents, some of them call programs in real time, and some of them are programs themselves. 
#### Solution 1 You publish login credentials on the internet, and let anyone log into your computer Problems: 1. People will need to know how each document and program works to be able to access their data 2. You don't want the world looking at your browser history #### Solution 2 You paste everything into HTML and publish it on the internet Problems: 1. This can be information overload 2. Making things dynamic can be tricky #### Solution 3 You create a set of methods to act as an intermediary between the people you want to help and the things you want them to have access to. Why this is the best solution: 1. People only access what you want them to have, in the way that you want them to have it 2. People use one language to get the things they want Why this is still not Panglossian: 1. You will have to explain to people how to use your middleman ## Twitter's API Twitter has an API - mostly written for third-party apps - that is comparatively straightforward and gives you access to _nearly_ all of the information that Twitter has about its users, including: 1. User histories 2. User (and tweet) location 3. User language 4. Tweet popularity 5. Tweet spread 6. Conversation chains Also, Twitter returns data to you in json, or **J**ava **S**cript **O**bject **N**otation. This is a very common format for passing data around http connections for browsers and servers, so many APIs return it as a datatype as well (instead of using something like xml or plain text). Luckily, json converts into native Python data structures. Specifically, every json object you get from Twitter will be a combination of nested `dicts` and `lists`, which you learned about yesterday. This makes Twitter a lot easier to manipulate in Python than html objects, for example. Here's what a tweet looks like: ``` import json with open('../data/02_tweet.json','r') as f: a_tweet = json.loads(f.read()) ``` We can take a quick look at the structure by pretty printing it: ``` from pprint import pprint pprint(a_tweet) ``` #### Time for a challenge! Let's see how much you remember about lists and dicts from yesterday. Go into the challenges directory and try your hand at `02_scraping/C_json.py`. ## Authentication Twitter controls access to their servers via a process of authentication and authorization. Authentication is how you let Twitter know who you are, in a way that is very hard to fake. Authorization is how the account owner (which will usually be yourself unless you are writing a Twitter app) controls what you are allowed to do in Twitter using their account. In Twitter, different levels of authorization require different levels of authentication. Because we want to be able to interact with everything, we'll need the highest level of authorization and the strictest level of authentication. In Twitter, this means that we need two sets of ID's (called keys or tokens) and passwords (called secrets): * consumer_key * consumer_secret * access_token_key * access_token_secret We'll provide some for you to use, but if you want to get your own you need to create an account on Twitter with a verified phone number. Then, while signed in to your Twitter account, go to: https://apps.twitter.com/. Follow the prompts to generate your keys and access tokens. Note that getting the second ID/password pair requires that you manually set the authorization level of your app. We've stored our credentials in a separate file, which is smart. However, we have uploaded it to Github so that you have them too, which is not smart. 
**You should NEVER NEVER NEVER do this in real life.** We've stored it in YAML format, because it is more human-readible than JSON is. However, once it's inside Python, these data structures behave the same way. ``` import yaml with open('../etc/creds.yml', 'r') as f: creds = yaml.load(f) ``` We're going to load these credentials into a requests module specifically designed for handling the flavor of authentication management that Twitter uses. ``` from requests_oauthlib import OAuth1Session twitter = OAuth1Session(**creds) ``` That `**` syntax we just used is called a "double splat" and is a python convenience function for converting the key-value pairs of a dictionary into keyword-argument pairs to pass to a function. ## Accessing the API Access to Twitter's API is organized through URLs called "endpoints". An endpoint is the location at which you can submit a request for Twitter to do something for you. For example, the "endpoint" to search for specific kinds of tweets is at: ``` https://api.twitter.com/1.1/search/tweets.json ``` whereas posting new tweets is at: ``` https://api.twitter.com/1.1/statuses/update.json ``` For more information on the REST APIs, end points, and terms, check out: https://dev.twitter.com/rest/public. For the Streaming APIs: https://dev.twitter.com/streaming/overview. All APIs on Twitter are "rate-limited" - this means that you are only allowed to ask a set number of questions per unit time (to keep their servers from being overloaded). This rate varies by endpoint and authorization, so be sure to check their developer site for the action you are trying to take. For example, at the lowest level of authorization (Twitter calls this `application only`), you are allowed to make 450 search requests per 15 minute window, or about one every two seconds. At the highest level of authorization (Twitter calls this `user`) you can submit 180 requests every 15 minutes, or only about once every five seconds. > side note - Google search is the worst rate-limiting I've ever seen, with an allowance of one hundred requests per day, or about once every *nine hundred seconds* Let's try a couple of simple API queries. We're going to specify query parameters with `param`. ``` search = "https://api.twitter.com/1.1/search/tweets.json" r = twitter.get(search, params={'q' : 'technology'}) ``` This has returned an http response object, which contains data like whether or not the request succeeded: ``` r.ok ``` You can also get the http response code, and the reason why Twitter sent you that code (these are all super important for controlling the flow of your program). ``` r.status_code, r.reason ``` The data that we asked Twitter to send us in r.content ``` r.content ``` But that's not helpful. We can extract it in python's representation of json with the `json` method: ``` r.json() ``` This has some helpful metadata about our request, like a url where we can get the next batch of results from Twitter for the same query: ``` data = r.json() data['search_metadata'] ``` The tweets that we want are under the key "statuses" ``` statuses = data['statuses'] statuses[0] ``` This is one tweet. > Depending on which tweet this is, you may or may not see that Twitter automatically pulls out links and mentions and gives you their index location in the raw tweet string Twitter gives you a whole lot of information about their users, including geographical coordinates, the device they are tweeting from, and links to their photographs. 
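Since each status is just a nested `dict`, you can flatten the fields you care about into a table for easier analysis. Here is a minimal sketch -- the keys used are standard v1.1 tweet fields, so treat them as assumptions and adjust to whatever your `statuses` actually contain:

```
import pandas as pd

rows = []
for status in statuses:
    rows.append({
        'screen_name': status['user']['screen_name'],
        'location': status['user'].get('location'),   # free-text user location
        'source': status.get('source'),               # the app/device the tweet was sent from
        'created_at': status.get('created_at'),
        'text': status.get('text'),
    })

tweets_df = pd.DataFrame(rows)
tweets_df.head()
```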
Twitter supports what it calls query operators, which modify the search behavior. For example, if you want to search for tweets where a particular user is mentioned, include the at-sign, `@`, followed by the username. To search for tweets sent to a particular user, use `to:username`. For tweets from a particular user, `from:username`. For hashtags, use `#hashtag`. For a complete set of options: https://dev.twitter.com/rest/public/search.

Let's try a more complicated search:

```
r = twitter.get(search, params={
        'q' : 'happy',
        'geocode' : '37.8734855,-122.2597169,10mi'
    })
r.ok
statuses = r.json()['statuses']
statuses[0]
```

If we want to store this data somewhere, we can output it as json using the json library from above. However, if you're doing a lot of these, you'll probably want to use a database to handle everything.

```
with open('my_tweets.json', 'w') as f:
    json.dump(statuses, f)
```

To post tweets, we need to use a different endpoint:

```
post = "https://api.twitter.com/1.1/statuses/update.json"
```

And now we can pass a new tweet (remember, Twitter calls these 'statuses') as a parameter to our post request.

```
r = twitter.post(post, params={
        'status' : "I stole Juan's Twitter credentials"
    })
r.ok
```

Other (optional) parameters include things like location, and replies.

## Scheduling

The real beauty of bots is that they are designed to work without interaction or oversight. Imagine a situation where you want to automatically retweet everything coming out of the D-Lab's twitter account, "@DLabAtBerkeley". You could:

1. spend the rest of your life glued to D-Lab's twitter page and hitting refresh; or,
2. write a function

We're going to import a module called `time` that will pause our code, so that we don't hit Twitter's rate limit.

```
import time

def retweet():
    r = twitter.get(search, params={'q':'DLabAtBerkeley'})
    if r.ok:
        statuses = r.json()['statuses']
        for update in statuses:
            username = update['user']['screen_name']
            parameters = {'status':'HOORAY! @' + username}
            r = twitter.post(post, params=parameters)
            print(r.status_code, r.reason)
            time.sleep(5)
```

But you are a human that needs to eat, sleep, and be social with other humans. Luckily, Linux systems have a time-based daemon called `cron` that will run scripts like this *for you*.

> People on windows and macs will not be able to run this. That's okay.

The way that `cron` works is it reads in files where each line has a time followed by a job (these are called cronjobs). You can edit your crontab by typing `crontab -e` into a terminal. They look like this:

```
with open('../etc/crontab_example', 'r') as f:
    print(f.read())
```

This is telling `cron` to print that statement to a file called "dumblog" at 8am every Monday. It's generally frowned upon to enter jobs through crontabs because they are hard to modify without breaking them. The better solution is to put your timed command into a file and copy the file into `/etc/cron.d/`. These files look like this:

```
with open('../etc/crond_example', 'r') as f:
    print(f.read())
```

At this point, you might be a little upset that you can't do this on your laptop, but the truth is you don't really want to run daemons and cronjobs on your laptop, which goes to sleep and runs out of batteries. This is what servers are for (like AWS).

## Now it is time for you to make your own twitter bot!

To get you started, we've put a template in the `scripts` folder. Try it out, but be generous with your `time.sleep()` calls as the whole class is sharing this account.
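Since the whole class is sharing one set of credentials, it can also help to put a hard floor under the time between any two API calls. A minimal sketch -- the 15-second floor is an assumption, not a Twitter-documented number, so raise it if you start seeing 429s:

```
import time

MIN_DELAY = 15  # seconds between calls; an assumption, be more generous if needed

def polite_call(func, *args, **kwargs):
    """Call a twitter.get/twitter.post method, then wait before returning."""
    result = func(*args, **kwargs)
    time.sleep(MIN_DELAY)
    return result

# example: polite_call(twitter.get, search, params={'q': 'DLabAtBerkeley'})
```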
If you have tried to run this, or some of the earlier code in this notebook, you have probably encountered some of Twitter's error codes. Here are the most common, and why you are triggering them.

1. `400 = bad request` - This means the API (middleman) doesn't like how you formatted your request. Check the API documentation to make sure you are doing things correctly.

2. `401 = unauthorized` - This either means you entered your auth codes incorrectly, or those auth codes don't have permission to do what you're trying to do. It takes Twitter a while to assign posting rights to your auth tokens after you've given them your phone number. If you have just done this, wait five minutes, then try again.

3. `403 = forbidden` - Twitter won't let you post what you are trying to post, most likely because you are trying to post the same tweet twice in a row within a few minutes of each other. Try changing your status update. If that doesn't fix it, then you are either:

    A. Hitting Twitter's daily posting limit. They don't say what this is.

    B. Trying to follow too many people, rapidly following and unfollowing the same person, or are otherwise making Twitter think you are a spambot

4. `429 = too many requests` - This means that you have exceeded Twitter's rate limit for whatever it is you are trying to do. Increase your `time.sleep()` value.

## Considerate robots and legality

__Typically, in starting a new web scraping project, you'll want to follow these steps:__

1) Find the website's robots.txt and do not access those pages through your bot

2) Make sure your bot does not make too many requests in a specific period (e.g. by using Python's `time.sleep` function)

3) Look up the website's terms of use or terms of service.

We'll discuss each of these briefly.

### What data owners care about

__Data owners are concerned with:__

1) Keeping their website up

2) Protecting the commercial value of their data

Their policies and responses differ with respect to these two areas. You'll need to do some research to determine what is appropriate for your project.

#### 1) Keeping their website up

Most commercial websites have strategies to throttle or block IPs that make too many requests within a fixed amount of time. Because a bot can make a large number of requests in a small amount of time (e.g. entering 100 different terms into Google in one second), servers are able to determine if traffic is coming from a bot or a person (among many other methods). For companies that rely on advertising, like Google or Twitter, these requests do not represent "human eyeballs" and need to be filtered out from their bill to advertisers.

In order to keep their site up and running, companies may block your IP temporarily or permanently if they detect too many requests coming from your IP, or other signs that requests are being made by a bot instead of a person. If you systematically take down a site (such as by sending millions of requests to an official government site), there is a small chance your actions may be interpreted maliciously (and regarded as hacking), with risk of prosecution.

#### 2) Protecting the commercial value of their data

Companies are also typically very protective of their data, especially data that ties directly into how they make money. A listings site (like Craigslist), for instance, would lose traffic if listings on its site were poached and transferred to a competitor, or if a rival company used scraping tools to derive lists of users to contact.
For this reason, companies' term of use agreements are typically very restrictive of what you can do with their data. Different companies may have a range of responses to your scraping, depending on what you do with the data. Typically, repurposing the data for a rival application or business will trigger a strong response from the company (i.e. legal attention). Publishing any analysis or results, either in a formal academic journal or on a blog or webpage, may be of less concern, though legal attention is still possible. ### robots.txt: internet convention The robots.txt file is typically located in the root folder of the site, with instructions to various services (User-agents) on what they are not allowed to scrape. Typically, the robots.txt file is more geared towards search engines (and their crawlers) more than anything else. However, companies and agencies typically will not want you to scrape any pages that they disallow search engines from accessing. Scraping these pages makes it more likely for your IP to be detected and blocked (along with other possible actions.) Below is an example of reddit's robots.txt file: https://www.reddit.com/robots.txt # 80legs User-agent: 008 Disallow: / User-Agent: bender Disallow: /my_shiny_metal_ass User-Agent: Gort Disallow: /earth User-Agent: * Disallow: /*.json Disallow: /*.json-compact Disallow: /*.json-html Disallow: /*.xml Disallow: /*.rss Disallow: /*.i Disallow: /*.embed Disallow: /*/comments/*?*sort= Disallow: /r/*/comments/*/*/c* Disallow: /comments/*/*/c* Disallow: /r/*/submit Disallow: /message/compose* Disallow: /api Disallow: /post Disallow: /submit Disallow: /goto Disallow: /*after= Disallow: /*before= Disallow: /domain/*t= Disallow: /login Disallow: /reddits/search Disallow: /search Disallow: /r/*/search Allow: / User blahblahblah provides a concise description of how to read the robots.txt file: https://www.reddit.com/r/learnprogramming/comments/3l1lcq/how_do_you_find_out_if_a_website_is_scrapable/ - The bot that calls itself 008 (apparently from 80legs) isn't allowed to access anything - bender is not allowed to visit my_shiny_metal_ass (it's a Futurama joke, the page doesn't actually exist) - Gort isn't allowed to visit Earth (another joke, from The Day the Earth Stood Still) - Other scrapers should avoid checking the API methods or "compose message" or 'search" or the "over 18?" page (because those aren't something you really want showing up in Google), but they're allowed to visit anything else. In general, your bot will fall into the * wildcard category of what the site generally do not want bots to access. You should make sure your scraper does not access any of those pages, etc. www.reddit.com/login etc.
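Python's standard library can do this check for you, so your scraper doesn't have to parse robots.txt by hand. A minimal sketch using `urllib.robotparser` (the reddit URLs are just examples):

```
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://www.reddit.com/robots.txt")
rp.read()

# "*" is the generic user-agent bucket that a homemade scraper falls into
print(rp.can_fetch("*", "https://www.reddit.com/r/learnprogramming/"))  # should be allowed
print(rp.can_fetch("*", "https://www.reddit.com/login"))                # disallowed above
```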
``` from nustar_gen.radial_profile import find_source, make_radial_profile, optimize_radius_snr from nustar_gen.wrappers import make_image from astropy.wcs import WCS from astropy.io import fits from astropy.coordinates import SkyCoord import numpy as np from nustar_gen import info obs = info.Observation(path='../data/', seqid='30001143002') obs.exposure_report() for mod in ['A', 'B']: for file in obs.science_files[mod]: hdr = fits.getheader(file) print(f"{file}, exposure: {1e-3*hdr['EXPOSURE']:20.4} ks") print() mod='A' infile = obs.science_files[mod][0] full_range = make_image(infile, elow = 3, ehigh = 80, clobber=True) coordinates = find_source(full_range, show_image = True, filt_range=3) # Get the WCS header and convert the pixel coordinates into an RA/Dec object hdu = fits.open(full_range, uint=True)[0] wcs = WCS(hdu.header) # The "flip" is necessary to go to [X, Y] ordering from native [Y, X] ordering, which wcs seems to require world = wcs.all_pix2world(np.flip(coordinates), 0) ra = world[0][0] dec = world[0][1] target = SkyCoord(ra, dec, unit='deg', frame='fk5') print(target) obj_j2000 = SkyCoord(hdu.header['RA_OBJ'], hdu.header['DEC_OBJ'], unit = 'deg', frame ='fk5') # How far are we from the J2000 coordinates? If <15 arcsec, all is okay sep = target.separation(obj_j2000) print(sep) # Now the radial image parts. # Make the radial image for the full energy range (or whatever is the best SNR) full_range = make_image(infile, elow = 3, ehigh = 80, clobber=True) rind, rad_profile, radial_err, psf_profile = make_radial_profile(full_range, show_image=False, coordinates = coordinates) # Pick energy ranges that you want to check. # Note that this formalism breaks down when the source isn't detected, so use your best judgement here. # Below should be used as a "best guess" when choosing a radius for spectral extraction. # For the 3-20 keV case, the source dominates out the edge of the FoV (and the assumptoons about the PSF # start to break down in the fit). # This a soft source (LMC X-1), so for 20-30 keV we already see that we need to restrict the radius that we # use so that we're not just adding noise to the spectrum. pairs = [[3, 20], [20, 30], [30, 40], [40, 50], [50, 80]] coordinates = find_source(full_range, show_image = False) for pair in pairs: test_file = make_image(infile, elow = pair[0], ehigh = pair[1], clobber=True) rind, rad_profile, radial_err, psf_profile = make_radial_profile(test_file, show_image=False, coordinates = coordinates) rlimit = optimize_radius_snr(rind, rad_profile, radial_err, psf_profile, show=True) print('Radius of peak SNR for {} to {} keV: {}'.format( pair[0], pair[1], rlimit)) import regions import astropy.units as u source_reg = [regions.CircleSkyRegion(center=target, radius=60*u.arcsec)] outfile = obs._evdir+f'/src{mod}01.reg' regions.write_ds9(source_reg, outfile, radunit='arcsec') ```
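The cells above only process module A; the same steps can be looped over both focal plane modules to compare extraction radii. A minimal sketch that reuses the functions already imported, assuming the first science file for each module is the one you want (as above):

```
for mod in ['A', 'B']:
    infile = obs.science_files[mod][0]

    # Full-band image and source position for this module
    full_range = make_image(infile, elow=3, ehigh=80, clobber=True)
    coordinates = find_source(full_range, show_image=False, filt_range=3)

    # Radial profile and SNR-optimized extraction radius
    rind, rad_profile, radial_err, psf_profile = make_radial_profile(
        full_range, show_image=False, coordinates=coordinates)
    rlimit = optimize_radius_snr(rind, rad_profile, radial_err, psf_profile, show=False)
    print(f'Module {mod}: radius of peak SNR = {rlimit}')
```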
# Attention Basics

In this notebook, we look at how attention is implemented. We will focus on implementing attention in isolation from a larger model. That's because when implementing attention in a real-world model, a lot of the focus goes into piping the data and juggling the various vectors rather than the concepts of attention themselves.

We will implement attention scoring as well as calculating an attention context vector.

## Attention Scoring

### Inputs to the scoring function

Let's start by looking at the inputs we'll give to the scoring function. We will assume we're in the first step in the decoding phase. The first input to the scoring function is the hidden state of the decoder (assuming a toy RNN with three hidden nodes -- not usable in real life, but easier to illustrate):

```
dec_hidden_state = [5,1,20]
```

Let's visualize this vector:

```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

# Let's visualize our decoder hidden state
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(dec_hidden_state)), annot=True, cmap=sns.light_palette("purple", as_cmap=True), linewidths=1)
```

Our first scoring function will score a single annotation (encoder hidden state), which looks like this:

```
annotation = [3,12,45] #e.g. Encoder hidden state

# Let's visualize the single annotation
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(annotation)), annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
```

### IMPLEMENT: Scoring a Single Annotation

Let's calculate the dot product of a single annotation. Numpy's [dot()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) is a good candidate for this operation.

```
def single_dot_attention_score(dec_hidden_state, enc_hidden_state):
    # TODO: return the dot product of the two vectors
    return np.dot(dec_hidden_state, enc_hidden_state)

single_dot_attention_score(dec_hidden_state, annotation)
```

### Annotations Matrix

Let's now look at scoring all the annotations at once. To do that, here's our annotation matrix:

```
annotations = np.transpose([[3,12,45], [59,2,5], [1,43,5], [4,3,45.3]])
```

And it can be visualized like this (each column is a hidden state of an encoder time step):

```
# Let's visualize our annotation (each column is an annotation)
ax = sns.heatmap(annotations, annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
```

### IMPLEMENT: Scoring All Annotations at Once

Let's calculate the scores of all the annotations in one step using matrix multiplication. Let's continue to use the dot scoring method.

<img src="images/scoring_functions.png" />

To do that, we'll have to transpose `dec_hidden_state` and [matrix multiply](https://docs.scipy.org/doc/numpy/reference/generated/numpy.matmul.html) it with `annotations`.

```
def dot_attention_score(dec_hidden_state, annotations):
    # TODO: return the product of dec_hidden_state transpose and enc_hidden_states
    return np.matmul(np.transpose(dec_hidden_state), annotations)

attention_weights_raw = dot_attention_score(dec_hidden_state, annotations)
attention_weights_raw
```

Looking at these scores, can you guess which of the four vectors will get the most attention from the decoder at this time step?
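A common extension of the dot score (the "general" score from Luong et al.) inserts a learned weight matrix between the decoder state and the annotations. A minimal sketch, with a hand-picked `W_a` standing in for learned weights -- with the identity matrix it reduces to the plain dot score above:

```
def general_attention_score(dec_hidden_state, annotations, W_a):
    # score(h_t, h_s) = h_t^T W_a h_s, applied to every annotation column at once
    return np.matmul(np.matmul(np.transpose(dec_hidden_state), W_a), annotations)

W_a = np.eye(3)  # identity: same result as dot_attention_score
general_attention_score(dec_hidden_state, annotations, W_a)
```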
## Softmax

Now that we have our scores, let's apply softmax:

<img src="images/softmax.png" />

```
def softmax(x):
    x = np.array(x, dtype=np.float128)
    e_x = np.exp(x)
    return e_x / e_x.sum(axis=0)

attention_weights = softmax(attention_weights_raw)
attention_weights
```

Even knowing which annotation will get the most focus, it's interesting to see how drastic a difference softmax makes in the end scores. The first and last annotation had the respective scores of 927 and 929. But after softmax, the attention they'll get is 0.12 and 0.88 respectively.

# Applying the scores back on the annotations

Now that we have our scores, let's multiply each annotation by its score to proceed closer to the attention context vector. This is the multiplication part of this formula (we'll tackle the summation part in the later cells)

<img src="images/Context_vector.png" />

```
def apply_attention_scores(attention_weights, annotations):
    # TODO: Multiply the annotations by their weights
    return attention_weights * annotations

applied_attention = apply_attention_scores(attention_weights, annotations)
applied_attention
```

Let's visualize how the context vector looks now that we've applied the attention scores back on it:

```
# Let's visualize our annotations after applying attention to them
ax = sns.heatmap(applied_attention, annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
```

Contrast this with the raw annotations visualized earlier in the notebook, and we can see that the second and third annotations (columns) have been nearly wiped out. The first annotation maintains some of its value, and the fourth annotation is the most pronounced.

# Calculating the Attention Context Vector

All that remains to produce our attention context vector now is to sum up the four columns to produce a single attention context vector

```
def calculate_attention_vector(applied_attention):
    return np.sum(applied_attention, axis=1)

attention_vector = calculate_attention_vector(applied_attention)
attention_vector

# Let's visualize the attention context vector
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(attention_vector)), annot=True, cmap=sns.light_palette("Blue", as_cmap=True), linewidths=1)
```

Now that we have the context vector, we can concatenate it with the hidden state and pass it through a hidden layer to produce the result of this decoding time step.
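Putting the pieces together, the whole computation can be wrapped in one small helper that reuses the functions defined above -- handy when you want to call attention as a single step:

```
def attention(dec_hidden_state, annotations):
    # raw alignment scores -> softmax weights -> weighted annotations -> context vector
    scores = dot_attention_score(dec_hidden_state, annotations)
    weights = softmax(scores)
    applied = apply_attention_scores(weights, annotations)
    return calculate_attention_vector(applied)

attention(dec_hidden_state, annotations)  # same context vector as computed above
```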
# Data Cleaner for Falls Data from CDC's NHIS Author: Vikas Enti, venti@mit.edu This script cleans the csv files from CDC's NHIS Dataset to create a single, easy to analyze and visualize dataset. ``` import pandas as pd import sqlite3 import glob # This is a quick and dirty approach. Rewrite if you need to ingest a lot more CSV files # Create injury episode dataframes from csv files #inj_df_2017 = pd.read_csv('NHIS/2017_injpoiep.csv') #inj_df_2016 = pd.read_csv('NHIS/2016_injpoiep.csv') #inj_df_2015 = pd.read_csv('NHIS/2015_injpoiep.csv') # Create sample adult dataframes from csv files #sam_df_2017 = pd.read_csv('NHIS/2017_samadult.csv') #sam_df_2016 = pd.read_csv('NHIS/2016_samadult.csv') #sam_df_2015 = pd.read_csv('NHIS/2015_samadult.csv') # Elegant approach # Injury Episodes inj_epi_df = pd.concat([pd.read_csv(f, encoding='latin1') for f in glob.glob('NHIS/*inj*.csv')], ignore_index=True, sort=True) # Sameple Adult sam_adu_df = pd.concat([pd.read_csv(f, encoding='latin1') for f in glob.glob('NHIS/*sam*.csv')], ignore_index=True, sort=True) inj_epi_df sam_adu_df # Dictionaries for different variable values # Source: Injury Episode Frequency file. # ftp://ftp.cdc.gov/pub/Health_Statistics/NCHS/Dataset_Documentation/NHIS/2016/Injpoiep_freq.pdf #ICAUS injury_cause = { 1:'In a motor vehicle', 2:'On a bike, scooter, skateboard, skates, skis, horse, etc', 3:'Pedestrian who was struck by a vehicle such as a car or bicycle', 4:'In a boat, train, or plane', 5:'Fall', 6:'Burned or scalded by substances such as hot objects or liquids, fire, or chemicals', 7:'Other', 97:'Refused', 98:'Not ascertained', 99:"Don't know" } #ijbody1, ijbody2, ijbody4, ijbody4 body_part = { 1:'Ankle', 2:'Back', 3:'Buttocks', 4:'Chest', 5:'Ear', 6:'Elbow', 7:'Eye', 8:'Face', 9:'Finger/thumb', 10:'Foot', 11:'Forearm', 12:'Groin', 13:'Hand', 14:'Head (not face)', 15:'Hip', 16:'Jaw', 17:'Knee', 18:'Lower leg', 19:'Mouth', 20:'Neck', 22:'Shoulder', 23:'Stomach', 24:'Teeth', 25:'Thigh', 26:'Toe', 27:'Upper arm', 28:'Wrist', 29:'Other', 97:'Refused', 98:'Not ascertained', 99:"Don't know" } #ifall1, ifall2 fall_loc = { 1:"Stairs, steps, or escalator", 2:"Floor or level ground", 3:"Curb (including sidewalk)", 4:"Ladder or scaffolding", 5:"Playground equipment", 6:"Sports field, court, or rink", 7:"Building or other structure", 8:"Chair, bed, sofa, or other furniture", 9:"Bathtub, shower, toilet, or commode", 10:"Hole or other opening", 11:"Other", 97:"Refused", 98:"Not ascertained", 99:"Don't know", } #ifallwhy fall_reason = { 1:"Slipping or tripping", 2:"Jumping or diving", 3:"Bumping into an object or another person", 4:"Being shoved or pushed by another person", 5:"Losing balance or having dizziness (becoming faint or having a seizure)", 6:"Other", 7:"Refused", 8:"Not ascertained", 9:"Don't know", } #SEX gender = { 1:"Male", 2:"Female" } # Merge both dataframes for easier analysis nhis_falls = pd.merge(sam_adu_df, inj_epi_df, on = ['SRVY_YR','HHX','FMX','FPX'], how = 'inner') nhis_falls = nhis_falls.fillna(999) nhis_falls = nhis_falls.astype('int32') # Embed dictionary values as new columns nhis_falls['injury_cause'] = nhis_falls['ICAUS'].map(injury_cause) nhis_falls['body_part1'] = nhis_falls['IJBODY1'].map(body_part) nhis_falls['body_part2'] = nhis_falls['IJBODY2'].map(body_part) nhis_falls['body_part3'] = nhis_falls['IJBODY3'].map(body_part) nhis_falls['body_part4'] = nhis_falls['IJBODY4'].map(body_part) nhis_falls['fall_loc1'] = nhis_falls['IFALL1'].map(fall_loc) nhis_falls['fall_loc2'] = nhis_falls['IFALL2'].map(fall_loc) 
nhis_falls['fall_reason'] = nhis_falls['IFALLWHY'].map(fall_reason) nhis_falls['gender'] = nhis_falls['SEX'].map(gender) nhis_falls['ICAUS'] # Output select variables from dataframe to csv file header = ['SRVY_YR','HHX','FMX','FPX','AGE_P','gender','ICAUS','IJBODY1','IJBODY2','IJBODY3','IJBODY4', 'IFALL1','IFALL2','IFALLWHY','injury_cause','body_part1','body_part2','body_part3','body_part4', 'fall_loc1','fall_loc2','fall_reason'] nhis_falls.to_csv('NHIS/nhis_falls.csv', columns=header) nhis_falls[header] ```
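With the merged table written out, a quick tabulation shows the kind of question the cleaned file is meant to answer. A minimal sketch using the columns created above; it restricts to episodes coded as falls (ICAUS == 5):

```
falls_only = nhis_falls[nhis_falls['ICAUS'] == 5]

# Most common reasons and locations for fall injuries
print(falls_only['fall_reason'].value_counts())
print(falls_only['fall_loc1'].value_counts())

# Fall reason broken out by gender
print(pd.crosstab(falls_only['fall_reason'], falls_only['gender']))
```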
# DNA Panda

This is a small script that imports your genome and queries specified genes against NCBI, returning a DataFrame and a `.csv` with positive matches.

```
# Imports
import os
from os import listdir

import numpy as np
import pandas as pd
import re
import requests

import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import pylev

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait

%matplotlib inline
sns.set_style('darkgrid')
sns.color_palette('Spectral')
```

## Import User Data

```
user_frame = []
user_frame.append(pd.read_csv('data/23andme_MG_v4.txt', sep='\t',
                              dtype={'rsid': 'str', 'chromosome': 'object',
                                     'position': 'int', 'genotype': 'str'},
                              comment='#'))
data_frame = pd.concat(user_frame, axis=0, ignore_index=True)

#import_frame = pd.read_csv("rccx.csv")
#merged_frame = pd.concat([data_frame, import_frame], axis=0, sort=True)
#print(merged_frame)
#df = pd.DataFrame(merged_frame)

# Read the data into a pandas DataFrame and do some EDA
df = pd.DataFrame(data_frame)
#df = pd.DataFrame(merged_frame)
df.info
#df = df.fillna("0")
#df.isna().any()

# How many SNPs are on the Y chromosome?
df['chromosome'].unique()
Y_chromosome = df[df.chromosome == 'Y']
len(Y_chromosome)

# Show unique counts
df.nunique()

# Display how many missing SNPs are in your genome
genotype_na = df[df.genotype == '--']
len(genotype_na)

# Print the length of any chromosome
df6 = df[df.chromosome == "6"]
len(df6)
df6.info()
df6.head()

# See the frequency of genotypes
#df6['genotype'].value_counts()
df6.count()

notch4 = df6[(df6['position'] >= 32194843) & (df6['position'] <= 32224067)]
# NOTCH3 lies on chromosome 19, so filter df (not df6) and restrict to that chromosome
notch3 = df[(df.chromosome == "19") & (df['position'] >= 15159038) & (df['position'] <= 15200995)]
notch4.count()
notch3.count()

notch = pd.concat([notch4, notch3], axis=0, sort=True)
notch.info()
```

## Isolate the RCCX module

```
# CYP21A2 :: 32,038,306 to 32,041,670 on chromosome 6
# TNXB    :: 32,041,153 to 32,109,338
# C4      :: 31,982,057 to 32,002,681
# STK19   :: 31,971,175 to 31,981,446
# NOTCH3  :: 15,159,038 to 15,200,995 (chromosome 19)
# NOTCH4  :: 32,194,843 to 32,224,067
rccx = df6[(df6['position'] >= 31971175) & (df6['position'] <= 32109338)]
rccx = rccx[rccx.genotype != "--"]
rccx.count()

toScan = pd.concat([notch, rccx], axis=0, sort=True)
toScan['genotype'].value_counts()
pd.options.display.max_rows = 999
toScan.count()
```

## Crawling NCBI

```
import urllib.request
from bs4 import BeautifulSoup

count = 0
toScan['Parsed'] = "0"

for i, row in toScan.iterrows():
    count = count + 1
    if row.Parsed != "1":
        try:
            print("trying...", row.rsid, "(", count, "out of", len(toScan), ")")
            url = "https://www.ncbi.nlm.nih.gov/snp/" + row.rsid + "#clinical_significance"
            response = urllib.request.urlopen(url)
            html = response.read()
            bs = BeautifulSoup(html, "html.parser")

            # ClinVar clinical significance table
            classification = bs.find(id="clinical_significance")
            if classification:
                ClinVar = []
                for table_row in classification.find_all("tr"):
                    cols = [ele.text.strip() for ele in table_row.find_all("td")]
                    ClinVar.append([ele for ele in cols if ele])
                toScan.at[i, 'ClinVar'] = ' '.join([str(elem) for elem in ClinVar])

            # dbSNP summary box: risk allele, frequency, gene and citation count
            ncbi = bs.find(class_="summary-box usa-grid-full")
            if ncbi:
                dbSNP = []
                for div in ncbi.find_all("div"):
                    dbSNP.append([ele.text.strip() for ele in div.find_all("div")])
                try:
                    print("Risk", dbSNP[2][0][0])
                    print("Frequency", dbSNP[2][0][3:7])
                    toScan.at[i, 'Risk'] = dbSNP[2][0][0]
                    toScan.at[i, 'Frequency'] = dbSNP[2][0][3:7]
                except IndexError:
                    print("index error")

                dbSNPTwo = []
                for dl in ncbi.find_all("dl"):
                    dbSNPTwo.append([ele.text.strip() for ele in dl.find_all("dd")])
                try:
                    print("Gene", dbSNPTwo[1][1].split(' ')[0])
                    toScan.at[i, 'Gene'] = dbSNPTwo[1][1].split(' ')[0]
                    print("Publications", dbSNPTwo[1][2][0])
                    toScan.at[i, 'Citations'] = dbSNPTwo[1][2][0]
                    toScan.at[i, 'Parsed'] = "1"
                except IndexError:
                    print("index error")
        except urllib.error.HTTPError:
            print(url + " was not found on dbSNP or contained no valid information")

rccx
#rccx.to_csv('rccx.csv', index=False)

# The NCBI annotations (Risk, Gene, ...) were written onto toScan,
# so fill the missing values and filter for positive matches there.
scanned = toScan.fillna("0")
present = scanned[scanned.apply(lambda x: x.Risk in x.genotype, axis=1)]

rccx_present = present[present.index.isin(rccx.index)]
rccx_present

notch
notch_present = present[present.index.isin(notch.index)]
notch_present

# Append the positive matches to the CSV of results
with open('rccx.csv', 'a') as f:
    present.to_csv(f, header=False)
```
# Build some pie and donut charts

### Importing libraries

```
import pandas as pd    #(version 1.0.0)
import plotly          #(version 4.5.4)  pip install plotly==4.5.4
import plotly.express as px
import plotly.io as pio
```

### Importing the dataset

1. Data from https://covidtracking.com/api/

```
df = pd.read_csv("covid-19-states-daily.csv")
df2 = pd.read_csv("all-states-history.csv")
# df.head()
# df2
```

### Preprocessing the data

```
df['dateChecked'] = pd.to_datetime(df['dateChecked'])
# df.head()
```

### Filter according to need

```
df = df[df['dateChecked'].dt.date.astype(str) == '2020-03-17']
df

df = df[df['death'] >= 5]
df
```

# Let's start building some charts

### Default colors

```
pie_chart = px.pie(
    data_frame=df,
    values='death',
    names='state',
    color='state',                     # differentiate markers (discrete) by color
    # color_discrete_sequence=["red", "green", "blue", "orange"],   # set marker colors
    # color_discrete_map={"WA": "yellow", "CA": "red", "NY": "black", "FL": "brown"},
    hover_name='negative',             # values appear in bold in the hover tooltip
    # hover_data=['positive'],         # values appear as extra data in the hover tooltip
    # custom_data=['total'],           # values are extra data to be used in Dash callbacks
    labels={"state": "the State"},     # map the labels
    title='Coronavirus in the USA',    # figure title
    template='presentation',           # 'ggplot2', 'seaborn', 'simple_white', 'plotly',
                                       # 'plotly_white', 'plotly_dark', 'presentation',
                                       # 'xgridoff', 'ygridoff', 'gridon', 'none'
    width=800,                         # figure width in pixels
    height=600,                        # figure height in pixels
    hole=0.5,                          # represents the hole in the middle of the pie
)

pio.show(pie_chart)
```

### Black background and a custom color palette

```
pie_chart = px.pie(
    data_frame=df,
    values='death',
    names='state',
    color='state',
    # color_discrete_sequence=["red", "green", "blue", "orange"],
    color_discrete_map={"WA": "blue", "CA": "red", "NY": "black", "FL": "brown"},
    hover_name='negative',
    # hover_data=['positive'],
    # custom_data=['total'],
    labels={"state": "the State"},
    title='Coronavirus in the USA',
    template='plotly_dark',
    width=800,
    height=600,
    hole=0.7,
)

pio.show(pie_chart)

pie_chart = px.pie(
    data_frame=df,
    values='death',
    names='state',
    color='state',
    color_discrete_sequence=["red", "green", "blue", "orange"],
    # color_discrete_map={"WA": "yellow", "CA": "red", "NY": "black", "FL": "brown"},
    hover_name='negative',
    # hover_data=['positive'],
    # custom_data=['total'],
    labels={"state": "the State"},
    title='Coronavirus in the USA',
    template='plotly_dark',
    width=800,
    height=600,
    hole=0.4,
)

pie_chart.update_traces(textposition='outside',
                        textinfo='percent+label',
                        marker=dict(line=dict(color='#000000', width=10)),
                        pull=[0, 0, 0.2, 0],
                        opacity=0.7,
                        rotation=180)

pio.show(pie_chart)
```
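As a possible follow-up (a sketch reusing the `pie_chart` and `df` objects from above), a centred annotation can put the total of the plotted values inside the donut hole:

```
# Put the total number of deaths in the middle of the donut
total_deaths = int(df['death'].sum())
pie_chart.update_layout(
    annotations=[dict(text=f"{total_deaths}<br>deaths", x=0.5, y=0.5,
                      font_size=22, showarrow=False)]
)
pio.show(pie_chart)
```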
# Lecture 1

_Contents_: Python introduction, Python 2 vs Python 3, IDE, simple math, variables, syntax and logic, loops I, strings I.

Python is a popular, general-purpose, high-level scripting language, published in 1991 by the Dutch programmer Guido van Rossum. Python is a cross-platform language: its runtime is available on several operating systems (Windows, Linux, OSX, Android). A large part of its popularity comes from its many add-on packages, such as numpy, matplotlib, TensorFlow, pyQT, openCV, iPython, etc., which will be discussed later.

For learning, and for simpler cases, Python can also be used interactively: the interpreter executes our statements line by line. Of course we can also create and run files with the `.py` extension, and even generate an executable or an installer. For now, let's stay with the basics and get familiar with the syntax and simple math.

_Note_: during the lectures we will use Jupyter notebook, which creates interactive Python notebook (`.ipynb`) files in which both the code and the outputs can be edited; on GitHub the most recent run is displayed. For the practical `.py` files Jupyter notebook will not be needed; it is only used here to make the demonstration easier.

#### Python 2 vs Python 3

Unfortunately, the still relatively widely used 2.x versions of Python and the 3.x versions available since 2008 are not fully compatible. So if we find code like this on the internet:

``` python
print "Hello, World!"
```

we can suspect that we are dealing with Python 2.x. The same code in Python 3.x looks like this:

``` python
print("Hello, World!")
```

We will come back to this later; for now it is enough to know that this material was tested with version 3.x.

```
# since the syntax of Python 2.x and 3.x differs slightly, check that we are using version 3
# more explanation on these commands will follow later
import platform
print(platform.python_version())
```

So let's turn to the mathematical basics.

#### Simple math

```
6 * 8     # multiplication
2 ** 10   # exponentiation
100 / 3   # division
100 // 3  # integer division
52 % 10   # modulo division
```

The usual *arithmetic operators* are available in Python, plus some further useful ones.

| Operator | Description |
| ------------- |-------------|
|`+`| addition |
|`-`| subtraction |
|`*`| multiplication |
|`**`| exponentiation |
|`//`| integer division |
|`/`| division |
|`%`| modulo division |

Python treats the text after a `#` character as a comment up to the end of the line; multi-line comments can be opened and closed with three quotation marks or apostrophes.

```python
"""
This is a
multi-line comment
"""
'''
So is this.
'''
```

#### Variables

When a value is assigned to a variable, the result of the operation is not displayed in interactive mode. E.g.:

```
x = (3 ** 3) + 0.2
```

But we can print it, for example, with the `print` function:

```
print(x)
```

In Python there is no automatic type conversion for variables, but the `type` function shows the type of a variable.

```
s = "1024"
type(s)
```

There is no automatic type conversion in the following operation either, so we get an error. `x + s`

``` python
TypeError: unsupported operand type(s) for +: 'float' and 'str'
```

```
type(s)
type(s) is str
type(x)
```

Python has three basic numeric types: `int`, `float` and `complex`.

```
a1 = 120
print(type(a1))
a2 = 12.0
print(type(a2))
a3 = 5 + 1j
print(type(a3))
a4 = float(7)
print(type(a4))
```

`sys.float_info.dig` gives the maximum precision of floats in decimal digits.

```
import sys
sys.float_info.dig
```

We can convert our variables to integers, for example like this:

```
x + int(s)
s * 20  # this does not multiply the string, it repeats it
```

In Python, classes are conventionally named in `StudlyCaps`, constants in `ALLCAPS`, and variables and methods in `snake_case`.

#### Strings I

Strings can be defined in several ways: for example with apostrophes or with quotation marks.

```
s1 = "This is one way."
s2 = 'But this works too.'
print(s1, s2)
```

A reminder about character encodings:

- Single-byte character encodings:
    - *ASCII* » 128 characters, 7 bits (2^7 = 128)
    - *Latin-1* » also known as ISO8859-1 » the first 128 characters are the same as ASCII, but it uses a further 96 letters (in the 160-255 range), so it still fits into one byte (2^8 = 256) » it has no Hungarian ő and ű, only û and õ
    - *Latin-2* » similar to Latin-1, but it does contain the Hungarian ő and ű
    - *Windows-1250* » similar to Latin-2; it is used, among others, by Windows text files and shows up in console applications.
    - *OEM-852* » can also appear in the console, but it does not resemble Latin-2
- Multi-byte character encodings:
    - *Unicode* » Unlike the previous ones, Unicode is not a single standard but rather a family of standards. Its first 128 characters are the same as ASCII's, but it contains more than 120,000 characters from almost every language in the world. It is multi-byte, but single-byte encodings are relatively easy to convert to it; the reverse is harder, since a Unicode character may not exist in, say, Latin-2. A further problem is that not every architecture stores the 2 bytes in the same order; this is what the BOM (byte order mark) signals, which is either 0xFEFF or 0xFFFE. Unicode text can be stored with different character encodings. The Unicode standard defines the UTF-8, UTF-16 and UTF-32 encodings, but many other encodings are in use as well.
    - *UTF-8* » A variable-width character encoding that uses 1-byte units, so it is compatible with ASCII not only in ordering but also in the actual binary code, since values that fit into one byte are stored in one byte. Its name, 8-bit Unicode Transformation Format, refers to the fact that it can represent any Unicode character; it is frequently used as an internet character encoding.

`sys.getdefaultencoding()` tells us the default encoding.

```
import sys
print(sys.getdefaultencoding())
```

The difference between strings created with apostrophes and with quotation marks is that

``` python
"a ' character can appear in this one as it is"
'in this one, however, we need \' with a backslash'
```

#### Formatted output

The `print` function offers a lot of options. We will come back to it later; for now let's look at how the `sep` separator and the `end` argument work, illustrated with examples.

```
print('a', 'b', 'c', 'da\'sda"as"sd--e')
print("a", "b", "c", "deas'aaa'asd")
print("a", "b", "c", "de", sep="")
print("a", "b", "c", "de", sep="***")
print("x\ny")
print("This should go", end=" ")
print("on one line.")
```

#### Loops I, basic syntax

Before writing more complex code, we must highlight a peculiarity of Python: in Python programs, code blocks are marked by the spaces or tabs at the beginning of the line. Other languages mark the beginning and end of a block, for example with `'{'` and `'}'` braces in C, C++, etc. This property of the language enforces easily readable code; on the other hand, it requires more attention when writing each line. Space and tab characters cannot be mixed; an indentation of four spaces is the most common.

```
print("range(6): \t", end="")    # 0 1 2 3 4 5
for x in range(6):
    print("%4d" % x, end="")

print("\nrange(3, 9): \t", end="")    # 3 4 5 6 7 8
for x in range(3, 9):
    print("%4d" % x, end="")

print("\nrange(3,14,2):\t", end="")   # 3 5 7 9 11 13
for x in range(3, 14, 2):
    print("%4d" % x, end="")

print("\narray (list):\t", end="")
primes = [2, 3, 5, 7]  # this is a list; more on these later
for prime in primes:
    print("%4d" % prime, end="")

print("\nwhile: ", end="")    # 0 1 2 3 4
i = 0
while i < 5:
    print("%4d" % i, end="")
    i += 1
```

#### Comprehension

List comprehensions are a shorter way of creating lists. _Note_: lists will be discussed in more detail later.

```
print([i for i in range(6)])
print([i for i in range(3, 14, 2)])
print([2**i for i in range(3, 14, 2)])
```

### _Used sources_ / Felhasznált források

- [Shannon Turner: Python lessons repository](https://github.com/shannonturner/python-lessons) MIT license (c) Shannon Turner 2013-2014
- [Siki Zoltán: Python mogyoróhéjban](http://www.agt.bme.hu/gis/python/python_oktato.pdf) GNU FDL license (c) Siki Zoltán
# 3.6 Implementing softmax regression from scratch

In this section we implement softmax regression by hand. First, import the packages and modules required by this section.

```
import tensorflow as tf
import numpy as np
import sys
sys.path.append("..")  # so that d2lzh_tensorflow2 in the parent directory can be imported
import d2lzh_tensorflow2 as d2l
print(tf.__version__)
```

## 3.6.1 Obtaining and reading the data

We will use the Fashion-MNIST dataset and set the batch size to 256.

```
from tensorflow.keras.datasets import fashion_mnist

batch_size = 256
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train = tf.cast(x_train, tf.float32) / 255  # matrix multiplication needs floats, so cast explicitly
x_test = tf.cast(x_test, tf.float32) / 255    # matrix multiplication needs floats, so cast explicitly
train_iter = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size)
test_iter = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(batch_size)
```

## 3.6.2 Initializing the model parameters

As in the linear regression example, we represent each sample as a vector. Each input sample is an image of height and width 28 pixels, so the model's input vector has length 28×28=784: each element of the vector corresponds to one pixel of the image. Since the images belong to 10 classes, the output layer of the single-layer network has 10 outputs, so the weight and bias parameters of softmax regression are 784×10 and 1×10 matrices respectively.

```
num_inputs = 784
num_outputs = 10
W = tf.Variable(tf.random.normal(shape=(num_inputs, num_outputs), mean=0, stddev=0.01, dtype=tf.float32))
b = tf.Variable(tf.zeros(num_outputs, dtype=tf.float32))
```

## 3.6.3 Implementing the softmax operation

Before defining softmax regression itself, let's first look at how to operate on a multi-dimensional Tensor along a given dimension. In the example below, given a Tensor matrix X, we can sum only the elements of the same column (axis=0) or of the same row (axis=1), and keep both the row and column dimensions in the result (keepdims=True).

```
X = tf.constant([[1, 2, 3], [4, 5, 6]])
tf.reduce_sum(X, axis=0, keepdims=True), tf.reduce_sum(X, axis=1, keepdims=True)
```

Now we can define the softmax operation introduced in the previous section. In the function below, the number of rows of the matrix X is the number of samples and the number of columns is the number of outputs. To express the predicted probability of each output for a sample, the softmax operation first exponentiates every element with exp, then sums the elements of each row of the exponentiated matrix, and finally divides every element of a row by that row's sum. As a result, every row of the final matrix is non-negative and sums to 1, so every row is a valid probability distribution. Each row of the softmax output represents a sample's predicted probabilities over the output classes.

```
def softmax(logits, axis=-1):
    return tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis, keepdims=True)
```

As we can see, for a random input every element becomes non-negative and every row sums to 1.

```
X = tf.random.normal(shape=(2, 5))
X_prob = softmax(X)
X_prob, tf.reduce_sum(X_prob, axis=1)
```

## 3.6.4 Defining the model

With the softmax operation in place we can define the softmax regression model described in the previous section. Here the reshape call turns each original image into a vector of length num_inputs.

```
def net(X):
    logits = tf.matmul(tf.reshape(X, shape=(-1, W.shape[0])), W) + b
    return softmax(logits)
```

## 3.6.5 Defining the loss function

In the previous section we introduced the cross-entropy loss function used by softmax regression. To pick out the predicted probability of the true label we use a boolean mask (the "pick" operation). In the example below, the variable y_hat holds the predicted probabilities of 2 samples over 3 classes, and the variable y holds the label classes of these 2 samples. Masking gives us the predicted probabilities of the 2 samples' labels. Unlike the mathematical notation in the "softmax regression" section, where label classes start from 1, in the code the discrete label values start from 0.

Below is the cross-entropy loss function introduced in the "softmax regression" section.

```
y_hat = np.array([[0.1, 0.3, 0.6], [0.3, 0.2, 0.5]])
y = np.array([0, 2], dtype='int32')
tf.boolean_mask(y_hat, tf.one_hot(y, depth=3))

def cross_entropy(y_hat, y):
    y = tf.cast(tf.reshape(y, shape=[-1, 1]), dtype=tf.int32)
    y = tf.one_hot(y, depth=y_hat.shape[-1])
    y = tf.cast(tf.reshape(y, shape=[-1, y_hat.shape[-1]]), dtype=tf.int32)
    return -tf.math.log(tf.boolean_mask(y_hat, y) + 1e-8)
```

## 3.6.6 Computing classification accuracy

Given a predicted probability distribution y_hat, we take the class with the largest predicted probability as the output class. If it agrees with the true class y, the prediction is correct. Classification accuracy is the ratio of correct predictions to the total number of predictions.

To demonstrate the computation, the accuracy function is defined below. tf.argmax(axis=1) returns the index of the largest element in each row of y_hat, with the same shape as y. The equality comparison yields a boolean tensor, whose mean (after conversion to numbers) is the fraction of correct predictions.

```
def accuracy(y_hat, y):
    return np.mean(tf.argmax(y_hat, axis=1) == y)
```

Let's keep using the variables y_hat and y defined above as the predicted probability distributions and labels. The predicted class of the first sample is 2 (the largest element of that row, 0.6, has index 2), which does not match the true label 0; the predicted class of the second sample is 2 (the largest element 0.5 has index 2), which matches the true label 2. The classification accuracy on these two samples is therefore 0.5.

```
accuracy(y_hat, y)
```

Similarly, we can evaluate the accuracy of the model net on the dataset data_iter.

```
# Note: in TensorFlow 2 both sides of the comparison must be of integer type,
# so cast both the predictions and the labels to int64
def evaluate_accuracy(data_iter, net):
    acc_sum, n = 0.0, 0
    for _, (X, y) in enumerate(data_iter):
        y = tf.cast(y, dtype=tf.int64)
        acc_sum += np.sum(tf.cast(tf.argmax(net(X), axis=1), dtype=tf.int64) == y)
        n += y.shape[0]
    return acc_sum / n

print(evaluate_accuracy(test_iter, net))
```

## 3.6.7 Training the model

Training softmax regression is very similar to the linear regression implementation in the "Linear regression from scratch" section. We again use mini-batch stochastic gradient descent to optimize the model's loss function. When training, the number of epochs num_epochs and the learning rate lr are tunable hyperparameters; changing them may produce a more accurate classifier.

```
num_epochs, lr = 5, 0.1

# This function is saved in the d2lzh package for later use
def train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size,
              params=None, lr=None, trainer=None):
    for epoch in range(num_epochs):
        train_l_sum, train_acc_sum, n = 0.0, 0.0, 0
        for X, y in train_iter:
            with tf.GradientTape() as tape:
                y_hat = net(X)
                l = tf.reduce_sum(loss(y_hat, y))
            grads = tape.gradient(l, params)
            if trainer is None:
                # If no optimizer is given, use the hand-written mini-batch SGD
                d2l.sgd(params, lr, batch_size, grads)
            else:
                # tf.keras.optimizers.SGD used directly is plain SGD:
                #   theta(t+1) = theta(t) - learning_rate * gradient
                # For mini-batch gradient descent we divide the gradients by batch_size,
                # matching trainer.step(batch_size) in the original book.
                trainer.apply_gradients(zip([grad / batch_size for grad in grads], params))
                # Used again in the "Concise implementation of softmax regression" section

            y = tf.cast(y, dtype=tf.float32)
            train_l_sum += l.numpy()
            train_acc_sum += tf.reduce_sum(
                tf.cast(tf.argmax(y_hat, axis=1) == tf.cast(y, dtype=tf.int64), dtype=tf.int64)).numpy()
            n += y.shape[0]
        test_acc = evaluate_accuracy(test_iter, net)
        print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f'
              % (epoch + 1, train_l_sum / n, train_acc_sum / n, test_acc))

trainer = tf.keras.optimizers.SGD(lr)
train_ch3(net, train_iter, test_iter, cross_entropy, num_epochs, batch_size, [W, b], lr)
```

## 3.6.8 Prediction

```
import matplotlib.pyplot as plt

X, y = next(iter(test_iter))  # in Python 3 use next(...); iterators have no .next() method

def get_fashion_mnist_labels(labels):
    text_labels = ['t-shirt', 'trouser', 'pullover', 'dress', 'coat',
                   'sandal', 'shirt', 'sneaker', 'bag', 'ankle boot']
    return [text_labels[int(i)] for i in labels]

def show_fashion_mnist(images, labels):
    # The _ here denotes a variable we ignore (do not use)
    _, figs = plt.subplots(1, len(images), figsize=(12, 12))  # note the difference between subplot and subplots
    for f, img, lbl in zip(figs, images, labels):
        f.imshow(tf.reshape(img, shape=(28, 28)).numpy())
        f.set_title(lbl)
        f.axes.get_xaxis().set_visible(False)
        f.axes.get_yaxis().set_visible(False)
    plt.show()

true_labels = get_fashion_mnist_labels(y.numpy())
pred_labels = get_fashion_mnist_labels(tf.argmax(net(X), axis=1).numpy())
titles = [true + '\n' + pred for true, pred in zip(true_labels, pred_labels)]

show_fashion_mnist(X[0:9], titles[0:9])
```
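As a side note, the `softmax` above exponentiates the raw logits directly, which can overflow for large values. A common numerically safer variant (shown here as a sketch, not part of the book's code) subtracts the row-wise maximum before exponentiating; mathematically the result is unchanged.

```
# Numerically more stable softmax: shift by the row-wise max before exponentiating
def stable_softmax(logits, axis=-1):
    shifted = logits - tf.reduce_max(logits, axis=axis, keepdims=True)
    exp = tf.exp(shifted)
    return exp / tf.reduce_sum(exp, axis=axis, keepdims=True)

X_big = tf.constant([[1000.0, 1001.0, 1002.0]])
print(softmax(X_big))         # overflows: inf/nan entries
print(stable_softmax(X_big))  # finite, well-defined probabilities
```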
```
import pandas as pd

ames_housing = pd.read_csv("../datasets/ames_housing_no_missing.csv")
target_name = "SalePrice"
data = ames_housing.drop(columns=target_name)
target = ames_housing[target_name]

numerical_features = [
    "LotFrontage", "LotArea", "MasVnrArea", "BsmtFinSF1", "BsmtFinSF2",
    "BsmtUnfSF", "TotalBsmtSF", "1stFlrSF", "2ndFlrSF", "LowQualFinSF",
    "GrLivArea", "BedroomAbvGr", "KitchenAbvGr", "TotRmsAbvGrd", "Fireplaces",
    "GarageCars", "GarageArea", "WoodDeckSF", "OpenPorchSF", "EnclosedPorch",
    "3SsnPorch", "ScreenPorch", "PoolArea", "MiscVal",
]
data_numerical = data[numerical_features]
```

# Question 1

```
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_validate
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

linear_regression = make_pipeline(StandardScaler(), LinearRegression())
tree = DecisionTreeRegressor()

cv_results_linear_regression = cross_validate(
    linear_regression, data_numerical, target, cv=10, return_estimator=True
)
cv_results_tree = cross_validate(
    tree, data_numerical, target, cv=10, return_estimator=True
)

(cv_results_linear_regression['test_score'] > cv_results_tree['test_score']).sum()
```

# Question 2

```
import numpy as np
from sklearn.model_selection import GridSearchCV

params = {"max_depth": np.arange(1, 16)}
search = GridSearchCV(tree, params, cv=10)

cv_results_tree_optimal_depth = cross_validate(
    search, data_numerical, target, cv=10, return_estimator=True, n_jobs=2,
)
for search_cv in cv_results_tree_optimal_depth["estimator"]:
    print(search_cv.best_params_)
```

# Question 3

```
search = GridSearchCV(tree, params, cv=10)
cv_results_tree_optimal_depth = cross_validate(
    search, data_numerical, target, cv=10, return_estimator=True, n_jobs=-1,
)
cv_results_tree_optimal_depth["test_score"].mean()

(cv_results_tree_optimal_depth['test_score'] > cv_results_linear_regression['test_score']).sum()
```

# Question 4

```
from sklearn.compose import make_column_selector as selector
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OrdinalEncoder

categorical_columns = selector(dtype_include=object)(data)
numerical_columns = selector(dtype_exclude=object)(data)

preprocessor = make_column_transformer(
    (OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1),
     categorical_columns),
    (StandardScaler(), numerical_columns),
)

tree_without_numerical_data = DecisionTreeRegressor(max_depth=7)
tree_with_numerical_data = make_pipeline(preprocessor, DecisionTreeRegressor(max_depth=7))

cv_results_tree_without_numerical_data = cross_validate(
    tree_without_numerical_data, data_numerical, target, cv=10, return_estimator=True
)
cv_results_tree_with_numerical_data = cross_validate(
    tree_with_numerical_data, data, target, cv=10, return_estimator=True
)

(cv_results_tree_without_numerical_data['test_score'] >
 cv_results_tree_with_numerical_data['test_score']).sum()
```
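As a hedged follow-up (reusing the fitted objects from the cells above), the inner `cv_results_` of one of the grid searches shows how the mean validation score varies with `max_depth`, which helps judge how flat the optimum is:

```
# Inspect one inner grid search: mean cross-validated score per candidate depth
inner_search = cv_results_tree_optimal_depth["estimator"][0]
depth_scores = pd.DataFrame(inner_search.cv_results_)[["param_max_depth", "mean_test_score"]]
print(depth_scores.sort_values("mean_test_score", ascending=False).head())
```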
```
%matplotlib inline
import numpy as np
import mplcyberpunk
import torch
from torch import nn
from torch.nn import functional as F
from matplotlib import pyplot as plt
from matplotlib import rcParams

plt.style.use("cyberpunk")
rcParams["font.sans-serif"] = "Roboto"
rcParams["xtick.labelsize"] = 14.
rcParams["ytick.labelsize"] = 14.
rcParams["axes.labelsize"] = 14.
rcParams["legend.fontsize"] = 14
rcParams["axes.titlesize"] = 16.

np.random.seed(42)
_ = torch.manual_seed(42)
```

# Auxiliary Functions

This notebook is a compilation of a few aspects that are "secondary" to deep learning models. It provides a quick overview using PyTorch.

## Exponential Activation Functions

These activation functions all contain exponentials, and so vary relatively smoothly with $x$. Their general purpose is to compress output values so that they lie within a certain range, eventually saturating at some point. The sigmoid and tanh functions in particular are classic activation functions, but as you will see below, their derivatives go to zero quite quickly for larger values of $x$.

### Sigmoid

$$ \sigma(x) = \frac{1}{1 + \exp(-x)} $$

Collapses values to the range [0,1]; traditionally used to mimic neurons firing. It saturates very easily for values of $x$ not near zero, so it is less commonly used between layers nowadays, and more for binary classification (0 or 1), i.e. a [Bernoulli trial.](https://en.wikipedia.org/wiki/Bernoulli_distribution)

### Softmax

$$ \mathrm{softmax}(x)_i = \frac{\exp(x_i)}{\sum_j \exp(x_j)} $$

Forces the vector to sum to 1, like the probabilities of mutually exclusive outcomes. Commonly used for multiclass classification, where it outputs the likelihood of each class.

### Softplus

$$ \mathrm{softplus}(x) = \frac{1}{\beta}\log(1 + \exp(\beta x))$$

### tanh

$$ \tanh(x) = \frac{\exp(x) - \exp(-x)}{\exp(x) + \exp(-x)} $$

```
funcs = [torch.sigmoid, F.softmax, F.softmin, F.softplus, torch.tanh, torch.cos]

fig, axarray = plt.subplots(1, len(funcs), figsize=(16, 5.))
with torch.no_grad():
    for ax, func in zip(axarray, funcs):
        X = torch.arange(-5, 5., step=0.1)
        Y = func(X)
        ax.plot(X, Y, lw=1.5, alpha=1., label="$f(x)$")
        ax.set_title(func.__name__)
        ax.set_xlabel("X")

# Compute gradients this time (outside the no_grad block so autograd can track them)
for ax, func in zip(axarray, funcs):
    X = torch.arange(-5, 5., step=0.1)
    X.requires_grad_(True)
    Y = func(X)
    Y.backward(torch.ones(Y.size()))
    grads = X.grad
    ax.plot(X.detach(), grads.numpy(), lw=1.5, alpha=1., label="$\\nabla f(x)$", ls="--")

axarray[0].legend(loc=2)
fig.tight_layout()
```

## Activation Functions

### Rectified Linear Unit (ReLU)

$$ \mathrm{ReLU}(x) = \mathrm{max}(0, x) $$

Despite having little statistical interpretation (compared to sigmoid, for example), it works well for most tasks.

### Exponential Linear Unit (ELU)

$$ \mathrm{ELU}(x) = \mathrm{max}(0,x) + \mathrm{min}(0, \alpha (\exp(x) - 1)) $$

where $\alpha$ is a hyperparameter that tunes the rate of change. It keeps the gradient non-zero for moderately negative values of $x$, only saturating at very negative values.

### Leaky ReLU

$$ \mathrm{LeakyReLU}(x) = \mathrm{max}(0,x) + \alpha \mathrm{min}(0,x) $$

where $\alpha$ is a hyperparameter tuned to let some gradient backpropagate for negative inputs; otherwise the gradient would go to zero as in ReLU.
### ReLU6

$$ \mathrm{ReLU6}(x) = \mathrm{min}(\mathrm{max}(0, x), 6) $$

### Scaled Exponential Linear Unit (SELU)

$$ \mathrm{SELU}(x) = \lambda \left( \mathrm{max}(0, x) + \mathrm{min}(0, \alpha (\exp(x) - 1)) \right) $$

where $\lambda$ and $\alpha$ are hyperparameters, although they were numerically tuned to $\alpha=1.6733$ and $\lambda=1.0507$ for [self-normalizing neural networks](https://arxiv.org/pdf/1706.02515.pdf).

### Gaussian Error Linear Unit (GELU)

$$ \mathrm{GELU}(x) = x \Phi(x) $$

where $\Phi(x)$ is the cumulative distribution function for a standard Gaussian distribution. Allows for [self-regularization to a certain extent](https://arxiv.org/pdf/1606.08415.pdf) similar to dropouts.

```
funcs = [F.relu, F.elu, F.leaky_relu, F.relu6, torch.selu, F.gelu]

fig, axarray = plt.subplots(1, len(funcs), figsize=(18, 5.))
with torch.no_grad():
    for ax, func in zip(axarray, funcs):
        X = torch.arange(-7, 7., step=0.1)
        Y = func(X)
        ax.plot(X, Y, lw=1.5, alpha=1., label="$f(x)$")
        ax.set_title(func.__name__)
        ax.set_xlabel("X")

# Gradients again, outside the no_grad block
for ax, func in zip(axarray, funcs):
    X = torch.arange(-7, 7., step=0.1).requires_grad_(True)  # torch.autograd.Variable is deprecated
    Y = func(X)
    Y.backward(torch.ones(Y.size()))
    grads = X.grad
    ax.plot(X.detach(), grads.numpy(), lw=1.5, alpha=1., label="$\\nabla f(x)$", ls="--")
    ax.set_ylim([-2., 7.])

axarray[0].legend(loc=2)
fig.tight_layout()
```

## Loss functions

Designing an appropriate loss function is probably the most important aspect of deep learning: you can be creative about how to encourage your model to learn about the problem. You can find all the loss functions implemented in PyTorch [here](https://pytorch.org/docs/stable/nn.html#loss-functions), and mix and match them to make your model learn what you want it to learn.

### Mean squared error

Probably the most commonly used loss in machine learning: individual predictions scale quadratically away from the ground truth, but the actual objective of this function is to learn to reproduce the _mean_ of your data.

$$ \mathcal{L} = \frac{1}{N}\sum^{N}_{i=1}(\hat{y}_i - y_i)^2 $$

Something worth noting here is that minimizing the mean squared error is equivalent to maximizing the $\log$ likelihood in _most_ circumstances, assuming normally distributed errors. In other words, predictions with models trained on the MSE loss can be thought of as the maximum likelihood estimate. [Read here](https://www.jessicayung.com/mse-as-maximum-likelihood/) for more details.

### Kullback-Leibler divergence

Less commonly encountered in the physical sciences unless you work with some information theory. This is an asymmetric loss function, contrasting the mean squared error, and is useful for estimating ``distances'' between probability distributions. Effectively, this loss function measures the extra amount of information you need for a distribution $q$ to encode another distribution $p$; when the two distributions are exactly equivalent, then $D_\mathrm{KL}$ is zero.

$$ D_\mathrm{KL}(p \vert \vert q) = \sum_i p_i \log \frac{p_i}{q_i} $$

This loss is asymmetric, because $D_\mathrm{KL}(p \vert \vert q) \neq D_\mathrm{KL}(q \vert \vert p)$.
To illustrate, we plot two Gaussians below:

```
from scipy.stats import entropy, norm

x = np.linspace(-5., 5., 1000)
p = norm(loc=-1.5, scale=0.7).pdf(x)
q = norm(loc=2., scale=0.3).pdf(x) + norm(loc=-2.5, scale=1.3).pdf(x)

fig, ax = plt.subplots()
ax.fill_between(x, p, 0., label="p(x)", alpha=0.6)
ax.fill_between(x, q, 0., label="q(x)", alpha=0.6)
fig.legend(loc="upper center");
```

The $D_\mathrm{KL}$ for each direction:

```
dpq = entropy(p, q)
dqp = entropy(q, p)
print(f"D(p||q) = {dpq:.3f}, D(q||p) = {dqp:.3f}")
```

The way to interpret this is coverage: $D_\mathrm{KL}(p \vert \vert q)$ is smaller than $D_\mathrm{KL}(q \vert \vert p)$, because if you wanted to express $p$ with $q$ you would do an okay job (at least with respect to the left Gaussian). Conversely, if you wanted to use $p$ to represent $q$ ($D_\mathrm{KL}(q \vert \vert p)$) it would do a poor job of representing the right Gaussian. Another way of looking at it is through their cumulative distribution functions:

```
# use the same parameters as for p above so the CDF matches the PDF plotted earlier
cdf_p = norm(loc=-1.5, scale=0.7).cdf(x)
cdf_q = norm(loc=2., scale=0.3).cdf(x) + norm(loc=-2.5, scale=1.3).cdf(x)

fig, ax = plt.subplots()
ax.fill_between(x, cdf_p, 0., label="p(x)", alpha=0.6)
ax.fill_between(x, cdf_q, 0., label="q(x)", alpha=0.6)
fig.legend(loc="upper center");
```

The (unnormalized) CDFs show how $p$ does not contain any knowledge at higher values of $x$, leading to a higher $D_\mathrm{KL}$.

### Binary cross entropy

This represents a special, yet common enough loss, for binomial targets $[0,1]$. It is usually used for classification tasks, but also for predicting pixel intensities (if they fall in the $[0,1]$ range).

$$ \mathcal{L} = -[y \cdot \log \hat{y} + (1 - y) \cdot \log(1 - \hat{y})] $$

### A note on implementations

You will more often than not see, in many deep learning libraries, an "X with logits" implementation of some X loss function. For example, the binary cross entropy in PyTorch has a `BCEWithLogitsLoss` and a `BCELoss` implementation. When possible, use the former, as it ensures numerical stability by working with the $\log$ of very small numbers. For example, when multiplying likelihoods you can end up with rounding errors due to a loss in number precision: for $p_a = 10^{-5}$ and $p_b = 10^{-7}$, you preserve the precision by doing $p_a \times p_b$ in $\log$ space, which would be $(-5) + (-7)$ in $\log_{10}$, as opposed to $10^{-12}$. If you use `BCEWithLogitsLoss`, the loss function includes the sigmoid activation, and the output layer of the network produces raw logits (log-odds) rather than probabilities.
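As a small illustrative check (a sketch using standard PyTorch APIs; the tensors are made up for the example), `nn.BCEWithLogitsLoss` applied to raw logits matches `nn.BCELoss` applied after an explicit sigmoid, up to floating-point error:

```
# Sanity check: BCEWithLogitsLoss == sigmoid + BCELoss, but computed more stably
logits = torch.randn(4, 1)
targets = torch.randint(0, 2, (4, 1)).float()

with_logits = nn.BCEWithLogitsLoss()(logits, targets)
two_step = nn.BCELoss()(torch.sigmoid(logits), targets)
print(torch.allclose(with_logits, two_step))  # expected: True
```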
# Project: Time Travel Debugger

In this project you will build your own **Time Travel Debugger** for Python. Please read the [Chapter on Debuggers](https://www.debuggingbook.org/beta/html/Debugger.html) beforehand, notably Exercise 1.

**Update\[26.11.2020\]**: the project description was amended to include answers to most of the students' questions. Clarifications are highlighted with <span style="color:blue">blue</span> color. Changes/additions are highlighted with <span style="color:red">red</span> color.

A time travel debugger records a log of the process execution (including the call stack and the values of all variables at each step), so that it is possible to run it later with both forward and backward commands. The interactive session does not execute the code statements, but just replays all actions taken from the recorded execution. A time travel debugger can restore a full snapshot of each program state at any point in time and either continue the execution from then on, or run the program backward. As normal execution changes values of variables along the run, the backward execution reverts variables to their previous values and "un-executes" functions.

The project can be approached in two ways: either as a single-person project or a pair project. The single-person project comprises the implementation of a command line interface, whereas the pair project requires the implementation of a graphical user interface. To be successful, you must obtain at least 15 points by implementing features listed in "Must-Have Requirements"; otherwise, you will not be awarded any points for the project. To fully enjoy coding (and get maximum points) feel free to additionally implement some (or all) features from the "May-Have Requirements".

## Submission details

The deadline for this project is on **the 18th of December, 2020 at 11:59pm CET**.

All files packaged in a zip archive must be uploaded via the CMS system. The project should be a self-contained bundle with the `TimeTravelDebugger.ipynb` Jupyter notebook and supplementary files.

## General Requirements

The project should be implemented in a Jupyter notebook with a step-by-step explanation of the implemented features (like the notebooks from the lecture). The notebook should also include a "Presentation" section containing **demo interactions** which show how to use each feature.

Your project should come with a working environment via either the `virtualenv` (*requirements.txt* file) or `pipenv` (*Pipfile.lock* file) tool. Your code should follow PEP 8 style conventions. You can use the `%%pycodestyle` command from the [pycodestyle](https://pycodestyle.readthedocs.io) package to check files for PEP 8 compliance.

The time travel debugger should be implemented as a class that can be executed as follows:

```
with TimeTravelDebugger():
    foo(args)
```

where `foo(args)` can be an arbitrary function under debugging, implemented either in the same notebook or imported from another file. **Do not let the debugger escape the context and also debug commands outside the `with` block (e.g., methods of the Jupyter framework).**

## Part 1: Command-Line Debugger

If you work as a **single person**, this is the part you will have to build. The implementation should include an interactive command line interface like the one presented in the [Chapter on Debuggers](https://www.debuggingbook.org/beta/html/Debugger.html). Make use of the `Debugger` class, notably its `execute()` infrastructure, to easily tie commands to methods.
### Must-Have Requirements (20 Points)

Your time travel debugger should support the following features:

#### Basics

* /R1/ `quit` Exit the interactive session (or resume execution until function return)
* /R2/ `help` Prints all available commands and their description and arguments
* /R3/ Missing or bad arguments should result in specific error messages.

#### Navigation Commands

After each navigation command, the current line should be printed.

* /R4/ `step` Step to the next executed line. Execute a program until it reaches the next executable statement. If the current line has a list comprehension statement, it should step into it, but remain at the current line (do not show `<listcomp>` source). If the current source line includes a function call, step into the called function and stop at the beginning of this function.
* /R5/ `backstep` Step to the previous executed line. Execute a program until it reaches the previous executable statement. If the current line has a list comprehension statement, it should step into it, but remain at the current line. If the current source line includes a function call, step into the called function and stop at the last statement invoked in this function (usually a return statement)
* /R6/ `next` Step over function calls going to the next line. Execute a program until it reaches the next source line. Any function calls (and list comprehension) in the current line should be executed without stopping. Starting from the last line of a function, this command should take you to its call site.
* /R7/ `previous` Step over function calls going to the previous line. Execute a program until it reaches the previous source line. If the line contains a function call, it should be "un-executed" restoring previous values of global variables. Starting from the first line of a function, `previous` should take you back to the caller of that function, before the function was called.

  Hint: The difference between `step` and `next` is that `step` will go inside a called function, while `next` stops at the next line of the **current** function.

* /R8/ `finish` Execute until return. Takes you forward to the point where the current function returns. <span style="color:blue">If finish is executed at the last line of a function, it should stay at that line and print it again.</span>
* /R9/ `start` Execute backwards until a function start. Takes you backwards to the first instruction of the function. <span style="color:blue">If start is executed at the first line of a function, it should stay at that line and print it again.</span>
* /R10/ Execute forwards until a certain point:
    * /R100/ `until <line_number>` Resume execution until a line greater than `<line_number>` is reached. If `<line_number>` is not given, resume execution until a line greater than the current is reached. This is useful to avoid stepping through multiple loop iterations. <span style="color:blue">If `line_number` is not given and</span> the execution jumps to another function, act as `next`
    * /R101/ `until <filename>:<line_number>` Execute a program forward until it reaches the line with the number `<line_number>` in the file `<filename>`.
    * /R102/ `until <function_name>` Execute a program forward until it reaches the line with a call to the function named `<function_name>` <span style="color:blue">declared</span> in the current file.
* /R103/ `until <filename>:<function_name>` Execute a program forward until it reaches the line with a call to the function named `<function_name>` <span style="color:blue">declared</span> in the file `<filename>`. * <span style="color:blue">Expl: If the execution is already at the specified line/function, it should look for the next occurrence or run till the end.</span> * /R11/ Execute backwards until a certain point: * /R110/ `backuntil <line_number>` Resume execution backwards until a line lower than `<line_number>` in the current file is reached. If `<line_number>` is not given, resume execution backwards until a line lower than the current is reached. <span style="color:blue">If 'line_number' is not given and</span> the execution jumps to another function, act as `previous` * /R111/ `backuntil <filename>:<line_number>` Execute a program backward until it reaches the line with the number `<line_number>` in the `<filename>`. * /R112/ `backuntil <function_name>` Execute a program backward until it reaches the line with a call to the function named `<function_name>` <span style="color:blue">declared</span> in the current file. * /R113/ `backuntil <filename>:<function_name>` Execute a program backward until it reaches the line with a call to the function named `<function_name>` <span style="color:blue">declared</span> in the `<filename>`. * <span style="color:blue">Expl: If the execution is already at the specified line/function, it should look for the next occurrence or run till the start.</span> * /R12/ `continue` Continue execution _forward_ until a breakpoint is hit, or the program finishes. * /R13/ `reverse` Continue execution _backward_ until a breakpoint is hit, or the program starts. * <span style="color:red">Hint: Ignore the command if the debugger reaches start/end of the execution and cannot go further. (Optionally: print an appropriate message.)</span> #### Call Stack * /R14/ Print call stack: * /R141/ `where` Print the whole call stack * /R142/ `where <number>` Print the `<number>` of leading and trailing lines from the call stack surrounding the current frame if any. * /R15/ Navigate the call stack * `up` and `down` Move up (and down) the call stack towards callers (and callees): print the code of the previous (next) frame and mark the currently executed line. #### Inspecting Code and Variables * /R16/ Print the source code around the current line (with the current line marked) * /R161/ `list` Print 2 lines before and 2 lines after the current line * /R162/ `list <number>` Print `<number>` lines before and `<number>` lines after the current line * /R163/ `list <above> <below>` Print `<above>` lines before and `<below>` lines after the current line * <span style="color:blue">Expl: Lines are limited to the current function body</span> * /R17/ Inspect the value of a variable * /R171/ `print` Print values of _all_ local variables <span style="color:blue">(including values of member variables)</span> * /R172/ `print <var_name>` Print the value of a variable with name `<var_name>`. If the variable `<var_name>` is not defined, print an error message. * /R18/ `print <expr>` * Like `print <var_name>`, but allow for arbitrary Python expressions * The code expression should be evaluated in the current environment of the code being debugged. * Keep in mind that this requires to evaluate the expression during the interactive session, which may produce exceptions. 
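
As a hint for /R172/ and /R18/: during replay, the variable or expression has to be evaluated against the recorded environment of the current step, and any exception raised by the evaluation must be caught. A minimal sketch of such a wrapper is shown below; the function name and the way the snapshot dictionaries are passed in are illustrative assumptions, not part of the required interface.

```
def print_expression(expression, frame_globals, frame_locals):
    """Sketch only: evaluate <expr> in a recorded environment and report errors."""
    try:
        value = eval(expression, frame_globals, frame_locals)
        print(repr(value))
    except Exception as err:
        # evaluation happens during the interactive session,
        # so exceptions must be caught rather than crash the debugger
        print(f"Could not evaluate {expression!r}: {err}")
```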
#### Watchpoints * /R19/ Watchpoints: * /R190/ `watch <var_name>` Creates a numbered watchpoint for the given variable: If its value changes after a navigation command, its value should be printed. * /R191/ `watch` Show all watchpoints and associated variables. * /R192/ `unwatch <watch_id>` Remove a watchpoint. #### Breakpoints * /R20/ Breakpoints: * /R201/ `break <line_number>` Create a numbered breakpoint at line with number `<line_number>` * /R202/ `break <function_name>` Set a breakpoint which hits when a function with the name `<function_name>` is called (or returned in case of backward execution). The execution should stop at the beginning (or the end) of the function. * /R203/ `break <file_name>:<function_name>` Set a breakpoint which hits when a function with the name `<function_name>` in file `<file_name>` is called (or returned in case of backward execution). The execution should stop at the beginning (or the end) of the function. * /R204/ `breakpoints` Display all available breakpoints <span style="color:red">Depending on the type of the breakpoint the output can be the following:</span> `breakpoint_id line file_name:line_number is_active` `breakpoint_id func file_name:func_name is_active` `breakpoint_id cond file_name:line_number is_active cond_expression` * /R205/ `delete <breakpoint_id>` Delete a breakpoint with the index `<breakpoint_id>` from the list of breakpoints * /R206/ `disable <breakpoint_id>` Suspend a breakpoint with the index `<breakpoint_id>` from the list of breakpoints * /R207/ `enable <breakpoint_id>` Re-enable a breakpoint with the index `<breakpoint_id>` from the list of breakpoints * /R208/ `cond <line> <condition>`. Conditional breakpoints. Set a breakpoint at which the execution is stopped at line `<line>` if a condition `<condition>` is true. A condition can include local variables (e.g., `tag == "b"` or `tag.startswith(b)`), but not function calls from a debugged program. * **Hint: keep in mind that breakpoints may be set in different modules (files) and sometimes cannot be set (e.g., in for comment lines).** * <span style="color:blue">Expl: The execution should stop at each active breakpoint despite the command (until \<line\>, continue, etc.)</span> ### May-Have Requirements (10 Points) Fulfilling these additional requirements gains extra points. #### Extended Watchpoints * `watch <expression>` Like `watch <variable>`, but allow for arbitrary expressions. 
#### Extended Breakpoints * `cond <line> <expression_code>` Conditional-breakpoints with complex expressions (see _conditional-breakpoints_ from **Must-haves** and _expression_) * `bpafter <breakpoint_id> <line_number>` Disable a breakpoint after hitting another specified breakpoint * `bpuntil <breakpoint_id> <line_number>` Disable a breakpoint until hitting another specified breakpoint * Step into my code: inspect function calls only if they are defined in modules located in the current folder * Inspect members of complex objects * `bpwrite <variable_name>` Write access breakpoints A breakpoint hits each time a certain variable `<variable_name>` is changed * `alias <breakpoint_id> <breakpoint_name>` Create aliases for breakpoints (refer to a breakpoint by name instead of an index) <span style="color:blue">The 'breakpoints' command should then output breakpoint names instead of breakpoint ids when available</span> * Support for inline breakpoints (e.g., for lambda functions, list comprehension) if a line contains multiple statements such as `filter(lambda x: x % 2 == 0, [x**2 for x in range(10)])` #### More Features * Watch I/O interaction * Stand-alone command line debugger, which can be used outside Jupyter notebooks * Some other cool feature of your own design ### Assessing Requirements Your command line debugger will be evaluated by _well-documented functionality_ as listed above. A "well-documented" functionality has readable code whose effect is illustrated by at least one example in the notebook. The functionality itself will be validated by a set of _tests_ consisting of a series of commands with expected results. Here is an example of how to run your debugger with predefined commands; you can also use this in your notebook to demonstrate features. ``` import bookutils def remove_html_markup(s): # type: ignore tag = False quote = False out = "" for c in s: if c == '<' and not quote: tag = True elif c == '>' and not quote: tag = False elif c == '"' or c == "'" and tag: quote = not quote elif not tag: out = out + c return out from Debugger import Debugger from bookutils import next_inputs class TimeTravelDebugger(Debugger): pass next_inputs(["step", "step", "step", "print s", "continue"]) with TimeTravelDebugger(): remove_html_markup("foo") ``` ### Example Interaction In this section, we give you some sample interactions. ``` next_inputs(["break 8", "break 16", "step", "continue", "print c", "continue", "quit"]) with TimeTravelDebugger(): remove_html_markup("foo") next_inputs(["break 6", "watch out", "continue", "continue", "continue", "continue", "continue"]) with TimeTravelDebugger(): remove_html_markup("foo") next_inputs(["break 16", "watch out", "continue", "start", "continue", "continue"]) with TimeTravelDebugger(): remove_html_markup("foo") next_inputs(["until 16", "print out", "continue", "quit"]) with TimeTravelDebugger(): remove_html_markup("foo") next_inputs(["break 6", "watch out", "continue", "where", "up", "list", "delete 0", "continue", "quit"]) with TimeTravelDebugger(): remove_html_markup("foo") ``` ## Part 2: GUI-Based Debugger If you work as a **team of two**, this is the part you will _also_ have to build. To create a GUI in Jupyter notebooks, one can follow two paths: 1. Use embeddings of plain HTML/JS into the notebook (see Exercise 3 in the [chapter on interactive debuggers](Debugger.ipynb). This has the advantage of not requiring Python in the final result; your time travel debugger can execute in any browser. 
You may follow this path if you already have experience with Web design and programming. 2. Use [Jupyter widgets](https://ipywidgets.readthedocs.io/en/stable/index.html) to create a user interface. This has the advantage that you can use Python all along the way. Your debugger, however, can only be run in the notebook; not in, say, a Web page. Your GUI-based time travel debugger should implement similar features as the command line debugger, but its functionality should be accessible via a graphical user interface (instead of typing in the commands). For instance, a user may be able to step backward by clicking a ◀ button, or set a breakpoint by clicking on a line in the code view. The "Presentation" section should include a video/YouTube (up to 1 min each) embedded in Jupyter Notebook, which shows a demo of each implemented feature. ### Must-Have Requirements (20 Points): Note that your GUI need not implement _all_ features of your command-line debugger; _ease of use_ and _discoverability_ have priority. Choose wisely! * The GUI-based debugger should allow to inspect and navigate through * /R31/ The source code currently being executed, where the current line and breakpoints are highlighted. * /R32/ Users should be able to scroll through the code of the current module. * /R33/ Variables. * /R34/ The list of watchpoints (if any). * /R35/ The call stack. * /R36/ The list of breakpoints (if any). * The debugger should provide the following controls: * /R37/ An interactive timeline (e.g., a slider) which allows moving along the execution back and forth. * /R38/ Automatic execution replay (forward and backward) at various speed * /R39/ Search for specific events in the timeline (a breakpoint hit, a variable changed etc.) * /R40/ Search results should be selectable by the user, moving the timeline to the associated event Here is a demo of how a basic GUI may look like: ![Demo](PICS/timetravel_debugger_gui_demo.gif) ### May-Have Requirements (10 Points): Fulfilling these additional requirements gains extra points. * Syntax-highlight the source code * Show values of the variables in the source code. * Show events (e.g., breakpoints) on the timeline. * Visualize and explore data structures. * Produce an interactive session which can be run uniquely with HTML + JavaScript, such that Python is not required (excludes the evaluation of expressions) (worth up to 10 points). * Implement the debugger as a [custom Jupyter widget](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Custom.html) (worth up to 10 points). * Some other cool feature of your design. ### Assessing Requirements Your interactive debugger will be evaluated for _well-documented_ and _discoverable_ functionality as listed above. * A "well-documented" functionality has readable code whose effect is described in the notebook (possibly with examples). * A "discoverable" functionality can be found quickly by ordinary users by exploring the GUI. Extra functionality (keyboard shortcuts, etc.) should be made available as part of a help screen or as a tutorial in your notebook. The functionality itself will be validated manually by test users. 
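
To make the timeline requirement /R37/ more concrete for path 2 (Jupyter widgets), here is a tiny, hypothetical sketch of a slider that scrubs through a recorded trace. The `recorded_trace` data below is made up purely for illustration; in a real implementation it would come from your debugger's recording, and the callback would render the marked source line, the variables and the call stack instead of a bare `print`.

```
import ipywidgets as widgets

# made-up recording: one (line number, local variables) snapshot per execution step
recorded_trace = [
    (3, {'tag': False}),
    (4, {'tag': False, 'quote': False}),
    (5, {'tag': False, 'quote': False, 'out': ''}),
]


def show_step(step):
    line, local_vars = recorded_trace[step]
    print(f"line {line}: {local_vars}")


# an /R37/-style timeline: dragging the slider moves back and forth in the recording
widgets.interact(show_step,
                 step=widgets.IntSlider(min=0, max=len(recorded_trace) - 1, value=0));
```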
## Example Notebook Structure ### Personal Information Start with these fields: ``` # ignore from typing import Union, List, Set PROJECT_TYPE: int NAME: Union[str, List[str]] ID: Union[str, List[str]] IMPLEMENTED: Set[str] ``` As an example: ``` PROJECT_TYPE = 1 NAME = "Riad Sattouf" ID = "1234567" ``` or ``` PROJECT_TYPE = 2 NAME = ["Riad Sattouf", "Stan Cispas"] ID = ["1234567", "1536589"] IMPLEMENTED = set() ``` ### Implementation ``` Contains the code with comments pointing to requirements ``` ``` import bookutils from Debugger import Debugger ``` Feature /R17/: /R171/: A `print` command that prints all variables of the current frame. ``` class TimeTravelDebugger(Debugger): # type: ignore def print_command(self, arg: str = "") -> None: vars = self.frame.f_locals self.log("\n".join([f"{var} = {repr(vars[var])}" for var in vars])) IMPLEMENTED.add("R171") ``` ### Presentation ``` Examples (pointing to requirements, e.g. /R1/ /R2/, etc.) ``` ``` from bookutils import next_inputs def remove_html_markup(s): # type: ignore tag = False quote = False out = "" for c in s: if c == '<' and not quote: tag = True elif c == '>' and not quote: tag = False elif c == '"' or c == "'" and tag: quote = not quote elif not tag: out = out + c return out ``` Feature /R17/: /R171/: A `print` command allows to print all variables of the current frame. The command sequence "step", "step", "step", "print" should print three variables: ``` next_inputs(["step", "step", "step", "print", "quit"]) with TimeTravelDebugger(): remove_html_markup("foo") ``` The command sequence "step", "step", "print" should print just two variables: ``` next_inputs(["step", "step", "print", "quit"]) with TimeTravelDebugger(): remove_html_markup("foo") ``` ### Summary ``` print(f"Implemented features: {IMPLEMENTED}") ```
<img style="float: left;padding: 1.3em" src="https://indico.in2p3.fr/event/18313/logo-786578160.png">

# Gravitational Wave Open Data Workshop #3

## Tutorial 2.1 PyCBC Tutorial, An introduction to matched-filtering

We will be using the [PyCBC](http://github.com/ligo-cbc/pycbc) library, which is used to study gravitational-wave data, find astrophysical sources due to compact binary mergers, and study their parameters. These are some of the same tools that the LIGO and Virgo collaborations use to find gravitational waves in LIGO/Virgo data. In this tutorial we will walk through how to find a specific signal in LIGO data. We present matched filtering as a cross-correlation, in both the time domain and the frequency domain. In the next tutorial (2.2), we use the method as encoded in PyCBC, which is optimal in the case of Gaussian noise and a known signal model. In reality our noise is not entirely Gaussian, and in practice we use a variety of techniques to separate signals from noise in addition to the use of the matched filter.

[Click this link to view this tutorial in Google Colaboratory](https://colab.research.google.com/github/gw-odw/odw-2020/blob/master/Day_2/Tuto_2.1_Matched_filtering_introduction.ipynb)

Additional [examples](http://pycbc.org/pycbc/latest/html/#library-examples-and-interactive-tutorials) and module-level documentation are [here](http://pycbc.org/pycbc/latest/html/py-modindex.html).

## Installation (un-comment and execute only if running on a cloud platform!)

```
# -- Use the following for Google Colab
#! pip install -q 'lalsuite==6.66' 'PyCBC==1.15.3'
```

**Important:** With Google Colab, you may need to restart the runtime after running the cell above.

### Matched-filtering: Finding well modelled signals in Gaussian noise

Matched filtering can be shown to be the optimal method for "detecting" signals---when the signal waveform is known---in Gaussian noise. We'll explore those assumptions a little later, but for now let's demonstrate how this works.

Let's assume you have a stretch of noise, white noise to start:

```
%matplotlib inline
import numpy
import pylab

# specify the sample rate.
# LIGO raw data is sampled at 16384 Hz (=2^14 samples/second).
# It captures signal frequency content up to f_Nyquist = 8192 Hz.
# Here, we will make the computation faster by sampling at a lower rate.
sample_rate = 1024 # samples per second
data_length = 1024 # seconds

# Generate a long stretch of white noise: the data series and the time series.
data = numpy.random.normal(size=[sample_rate * data_length])
times = numpy.arange(len(data)) / float(sample_rate)
```

And then let's add a gravitational wave signal to some random part of this data.

```
from pycbc.waveform import get_td_waveform

# the "approximant" (jargon for parameterized waveform family).
# IMRPhenomD is defined in the frequency domain, but we'll get it in the time domain (td).
# It runs fast, but it doesn't include effects such as non-aligned component spin, or higher order modes.
apx = 'IMRPhenomD'

# You can specify many parameters,
# https://pycbc.org/pycbc/latest/html/pycbc.waveform.html?highlight=get_td_waveform#pycbc.waveform.waveform.get_td_waveform
# but here, we'll use defaults for everything except the masses.
# It returns both hplus and hcross, but we'll only use hplus for now.
hp1, _ = get_td_waveform(approximant=apx, mass1=10, mass2=10, delta_t=1.0/sample_rate, f_lower=25)

# The amplitude of gravitational-wave signals is normally of order 1E-20.
To demonstrate our method # on white noise with amplitude O(1) we normalize our signal so the cross-correlation of the signal with # itself will give a value of 1. In this case we can interpret the cross-correlation of the signal with white # noise as a signal-to-noise ratio. hp1 = hp1 / max(numpy.correlate(hp1,hp1, mode='full'))**0.5 # note that in this figure, the waveform amplitude is of order 1. # The duration (for frequency above f_lower=25 Hz) is only 3 or 4 seconds long. # The waveform is "tapered": slowly ramped up from zero to full strength, over the first second or so. # It is zero-padded at earlier times. pylab.figure() pylab.title("The waveform hp1") pylab.plot(hp1.sample_times, hp1) pylab.xlabel('Time (s)') pylab.ylabel('Normalized amplitude') # Shift the waveform to start at a random time in the Gaussian noise data. waveform_start = numpy.random.randint(0, len(data) - len(hp1)) data[waveform_start:waveform_start+len(hp1)] += 10 * hp1.numpy() pylab.figure() pylab.title("Looks like random noise, right?") pylab.plot(hp1.sample_times, data[waveform_start:waveform_start+len(hp1)]) pylab.xlabel('Time (s)') pylab.ylabel('Normalized amplitude') pylab.figure() pylab.title("Signal in the data") pylab.plot(hp1.sample_times, data[waveform_start:waveform_start+len(hp1)]) pylab.plot(hp1.sample_times, 10 * hp1) pylab.xlabel('Time (s)') pylab.ylabel('Normalized amplitude') ``` To search for this signal we can cross-correlate the signal with the entire dataset -> Not in any way optimized at this point, just showing the method. We will do the cross-correlation in the time domain, once for each time step. It runs slowly... ``` cross_correlation = numpy.zeros([len(data)-len(hp1)]) hp1_numpy = hp1.numpy() for i in range(len(data) - len(hp1_numpy)): cross_correlation[i] = (hp1_numpy * data[i:i+len(hp1_numpy)]).sum() # plot the cross-correlated data vs time. Superimpose the location of the end of the signal; # this is where we should find a peak in the cross-correlation. pylab.figure() times = numpy.arange(len(data) - len(hp1_numpy)) / float(sample_rate) pylab.plot(times, cross_correlation) pylab.plot([waveform_start/float(sample_rate), waveform_start/float(sample_rate)], [-10,10],'r:') pylab.xlabel('Time (s)') pylab.ylabel('Cross-correlation') ``` Here you can see that the largest spike from the cross-correlation comes at the time of the signal. We only really need one more ingredient to describe matched-filtering: "Colored" noise (Gaussian noise but with a frequency-dependent variance; white noise has frequency-independent variance). Let's repeat the process, but generate a stretch of data colored with LIGO's zero-detuned--high-power noise curve. We'll use a PyCBC library to do this. ``` # http://pycbc.org/pycbc/latest/html/noise.html import pycbc.noise import pycbc.psd # The color of the noise matches a PSD which you provide: # Generate a PSD matching Advanced LIGO's zero-detuned--high-power noise curve flow = 10.0 delta_f = 1.0 / 128 flen = int(sample_rate / (2 * delta_f)) + 1 psd = pycbc.psd.aLIGOZeroDetHighPower(flen, delta_f, flow) # Generate colored noise delta_t = 1.0 / sample_rate ts = pycbc.noise.noise_from_psd(data_length*sample_rate, delta_t, psd, seed=127) # Estimate the amplitude spectral density (ASD = sqrt(PSD)) for the noisy data # using the "welch" method. 
We'll choose 4 seconds PSD samples that are overlapped 50% seg_len = int(4 / delta_t) seg_stride = int(seg_len / 2) estimated_psd = pycbc.psd.welch(ts,seg_len=seg_len,seg_stride=seg_stride) # plot it: pylab.loglog(estimated_psd.sample_frequencies, estimated_psd, label='estimate') pylab.loglog(psd.sample_frequencies, psd, linewidth=3, label='known psd') pylab.xlim(xmin=flow, xmax=512) pylab.ylim(1e-47, 1e-45) pylab.legend() pylab.grid() pylab.show() # add the signal, this time, with a "typical" amplitude. ts[waveform_start:waveform_start+len(hp1)] += hp1.numpy() * 1E-20 ``` Then all we need to do is to "whiten" both the data, and the template waveform. This can be done, in the frequency domain, by dividing by the PSD. This *can* be done in the time domain as well, but it's more intuitive in the frequency domain ``` # Generate a PSD for whitening the data from pycbc.types import TimeSeries # The PSD, sampled properly for the noisy data flow = 10.0 delta_f = 1.0 / data_length flen = int(sample_rate / (2 * delta_f)) + 1 psd_td = pycbc.psd.aLIGOZeroDetHighPower(flen, delta_f, 0) # The PSD, sampled properly for the signal delta_f = sample_rate / float(len(hp1)) flen = int(sample_rate / (2 * delta_f)) + 1 psd_hp1 = pycbc.psd.aLIGOZeroDetHighPower(flen, delta_f, 0) # The 0th and Nth values are zero. Set them to a nearby value to avoid dividing by zero. psd_td[0] = psd_td[1] psd_td[len(psd_td) - 1] = psd_td[len(psd_td) - 2] # Same, for the PSD sampled for the signal psd_hp1[0] = psd_hp1[1] psd_hp1[len(psd_hp1) - 1] = psd_hp1[len(psd_hp1) - 2] # convert both noisy data and the signal to frequency domain, # and divide each by ASD=PSD**0.5, then convert back to time domain. # This "whitens" the data and the signal template. # Multiplying the signal template by 1E-21 puts it into realistic units of strain. data_whitened = (ts.to_frequencyseries() / psd_td**0.5).to_timeseries() hp1_whitened = (hp1.to_frequencyseries() / psd_hp1**0.5).to_timeseries() * 1E-21 # Now let's re-do the correlation, in the time domain, but with whitened data and template. cross_correlation = numpy.zeros([len(data)-len(hp1)]) hp1n = hp1_whitened.numpy() datan = data_whitened.numpy() for i in range(len(datan) - len(hp1n)): cross_correlation[i] = (hp1n * datan[i:i+len(hp1n)]).sum() # plot the cross-correlation in the time domain. Superimpose the location of the end of the signal. # Note how much bigger the cross-correlation peak is, relative to the noise level, # compared with the unwhitened version of the same quantity. SNR is much higher! pylab.figure() times = numpy.arange(len(datan) - len(hp1n)) / float(sample_rate) pylab.plot(times, cross_correlation) pylab.plot([waveform_start/float(sample_rate), waveform_start/float(sample_rate)], [(min(cross_correlation))*1.1,(max(cross_correlation))*1.1],'r:') pylab.xlabel('Time (s)') pylab.ylabel('Cross-correlation') ``` # Challenge! * Histogram the whitened time series. Ignoring the outliers associated with the signal, is it a Gaussian? What is the mean and standard deviation? (We have not been careful in normalizing the whitened data properly). * Histogram the above cross-correlation time series. Ignoring the outliers associated with the signal, is it a Gaussian? What is the mean and standard deviation? * Find the location of the peak. (Note that here, it can be positive or negative), and the value of the SNR of the signal (which is the absolute value of the peak value, divided by the standard deviation of the cross-correlation time series). ## Optional challenge question. 
much harder:

* Repeat this process, but instead of using a waveform with mass1=mass2=10, try 15, 20, or 25. Plot the SNR vs mass. Careful! Using lower masses (e.g., mass1=mass2=1.4 Msun) will not work here. Why?

### Optimizing a matched-filter

That's all a matched filter is: a cross-correlation of the data with a template waveform, performed as a function of time. Sliding this cross-correlation through the data is a convolution operation. Convolutions are performed much more efficiently in the frequency domain, where they become an `O(N ln N)` operation instead of the `O(N^2)` operation shown here. You can also conveniently vary the phase of the signal in the frequency domain, as we will illustrate in the next tutorial.

PyCBC implements a frequency-domain matched-filtering engine, which is much faster than the code we've shown here; a minimal NumPy illustration of the frequency-domain trick is sketched in the cell below. Let's move to the next tutorial now, where we will demonstrate its use on real data.
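
Below is the minimal NumPy sketch referred to above. It reuses the whitened arrays `hp1n` and `datan` and the `cross_correlation` result from the previous cells, and it only illustrates the `O(N ln N)` idea; it is not PyCBC's actual matched-filtering engine.

```
# Sketch only: the same whitened cross-correlation, computed with FFTs in O(N ln N)
# instead of the O(N^2) Python loop above. Zero-pad the template to the data length,
# multiply the conjugated template spectrum by the data spectrum, and transform back.
template_padded = numpy.zeros(len(datan))
template_padded[:len(hp1n)] = hp1n

fast_correlation = numpy.fft.ifft(
    numpy.conj(numpy.fft.fft(template_padded)) * numpy.fft.fft(datan))

# Away from the circular-wrapping region at the very end, the real part should agree
# with the loop-based cross_correlation (up to numerical round-off).
print(numpy.real(fast_correlation[:3]))
print(cross_correlation[:3])
```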
**This notebook is an exercise in the [Introduction to Machine Learning](https://www.kaggle.com/learn/intro-to-machine-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/machine-learning-competitions).** --- # Introduction In this exercise, you will create and submit predictions for a Kaggle competition. You can then improve your model (e.g. by adding features) to apply what you've learned and move up the leaderboard. Begin by running the code cell below to set up code checking and the filepaths for the dataset. ``` # Set up code checking from learntools.core import binder binder.bind(globals()) from learntools.machine_learning.ex7 import * # Set up filepaths import os if not os.path.exists("../input/train.csv"): os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv") os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv") ``` Here's some of the code you've written so far. Start by running it again. ``` # Import helpful libraries import pandas as pd from sklearn.ensemble import RandomForestRegressor from sklearn.metrics import mean_absolute_error from sklearn.model_selection import train_test_split # Load the data, and separate the target iowa_file_path = '../input/train.csv' home_data = pd.read_csv(iowa_file_path) y = home_data.SalePrice # Create X (After completing the exercise, you can return to modify this line!) features = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd'] # Select columns corresponding to features, and preview the data X = home_data[features] X.head() # Split into validation and training data train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1) # Define a random forest model rf_model = RandomForestRegressor(random_state=1) rf_model.fit(train_X, train_y) rf_val_predictions = rf_model.predict(val_X) rf_val_mae = mean_absolute_error(rf_val_predictions, val_y) print("Validation MAE for Random Forest Model: {:,.0f}".format(rf_val_mae)) ``` # Train a model for the competition The code cell above trains a Random Forest model on **`train_X`** and **`train_y`**. Use the code cell below to build a Random Forest model and train it on all of **`X`** and **`y`**. ``` # To improve accuracy, create a new Random Forest model which you will train on all training data rf_model_on_full_data = RandomForestRegressor(n_estimators=72, random_state=1) # fit rf_model_on_full_data on all data from the training data rf_model_on_full_data.fit(X, y) ``` Now, read the file of "test" data, and apply your model to make predictions. ``` # path to file you will use for predictions test_data_path = '../input/test.csv' # read test data file using pandas test_data = pd.read_csv(test_data_path) # create test_X which comes from test_data but includes only the columns you used for prediction. # The list of columns is stored in a variable called features test_X = test_data[features] # make predictions which we will submit. test_preds = rf_model_on_full_data.predict(test_X) ``` Before submitting, run a check to make sure your `test_preds` have the right format. ``` # Check your answer (To get credit for completing the exercise, you must get a "Correct" result!) step_1.check() step_1.solution() ``` # Generate a submission Run the code cell below to generate a CSV file with your predictions that you can use to submit to the competition. 
``` # Run the code to save predictions in the format used for competition scoring output = pd.DataFrame({'Id': test_data.Id, 'SalePrice': test_preds}) output.to_csv('submission.csv', index=False) ``` # Submit to the competition To test your results, you'll need to join the competition (if you haven't already). So open a new window by clicking on **[this link](https://www.kaggle.com/c/home-data-for-ml-course)**. Then click on the **Join Competition** button. ![join competition image](https://i.imgur.com/axBzctl.png) Next, follow the instructions below: 1. Begin by clicking on the **Save Version** button in the top right corner of the window. This will generate a pop-up window. 2. Ensure that the **Save and Run All** option is selected, and then click on the **Save** button. 3. This generates a window in the bottom left corner of the notebook. After it has finished running, click on the number to the right of the **Save Version** button. This pulls up a list of versions on the right of the screen. Click on the ellipsis **(...)** to the right of the most recent version, and select **Open in Viewer**. This brings you into view mode of the same page. You will need to scroll down to get back to these instructions. 4. Click on the **Output** tab on the right of the screen. Then, click on the file you would like to submit, and click on the blue **Submit** button to submit your results to the leaderboard. You have now successfully submitted to the competition! If you want to keep working to improve your performance, select the **Edit** button in the top right of the screen. Then you can change your code and repeat the process. There's a lot of room to improve, and you will climb up the leaderboard as you work. # Continue Your Progress There are many ways to improve your model, and **experimenting is a great way to learn at this point.** The best way to improve your model is to add features. To add more features to the data, revisit the first code cell, and change this line of code to include more column names: ```python features = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd'] ``` Some features will cause errors because of issues like missing values or non-numeric data types. Here is a complete list of potential columns that you might like to use, and that won't throw errors: - 'MSSubClass' - 'LotArea' - 'OverallQual' - 'OverallCond' - 'YearBuilt' - 'YearRemodAdd' - 'BsmtFinSF1' - 'BsmtFinSF2' - 'BsmtUnfSF' - 'TotalBsmtSF' - '1stFlrSF' - '2ndFlrSF' - 'LowQualFinSF' - 'GrLivArea' - 'BsmtFullBath' - 'BsmtHalfBath' - 'FullBath' - 'HalfBath' - 'BedroomAbvGr' - 'KitchenAbvGr' - 'TotRmsAbvGrd' - 'Fireplaces' - 'GarageCars' - 'GarageArea' - 'WoodDeckSF' - 'OpenPorchSF' - 'EnclosedPorch' - '3SsnPorch' - 'ScreenPorch' - 'PoolArea' - 'MiscVal' - 'MoSold' - 'YrSold' Look at the list of columns and think about what might affect home prices. To learn more about each of these features, take a look at the data description on the **[competition page](https://www.kaggle.com/c/home-data-for-ml-course/data)**. After updating the code cell above that defines the features, re-run all of the code cells to evaluate the model and generate a new submission file. # What's next? As mentioned above, some of the features will throw an error if you try to use them to train your model. The **[Intermediate Machine Learning](https://www.kaggle.com/learn/intermediate-machine-learning)** course will teach you how to handle these types of features. 
You will also learn to use **xgboost**, a technique giving even better accuracy than Random Forest. The **[Pandas](https://kaggle.com/Learn/Pandas)** course will give you the data manipulation skills to quickly go from conceptual idea to implementation in your data science projects. You are also ready for the **[Deep Learning](https://kaggle.com/Learn/intro-to-Deep-Learning)** course, where you will build models with better-than-human level performance at computer vision tasks. --- *Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161285) to chat with other Learners.*
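
As a purely illustrative follow-up to the feature-expansion step described in "Continue Your Progress" above, here is one possible sketch. The particular column selection is an arbitrary assumption, not a recommendation; the cell reuses `home_data`, `y` and the imports from the earlier cells.

```
# An illustrative expanded feature list, using only columns from the error-free list above.
features = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr',
            'TotRmsAbvGrd', 'OverallQual', 'OverallCond', 'GrLivArea', 'TotalBsmtSF',
            'GarageCars', 'GarageArea', 'Fireplaces']
X = home_data[features]

# Re-check the validation MAE with the larger feature set before retraining on all data.
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
rf_model = RandomForestRegressor(random_state=1)
rf_model.fit(train_X, train_y)
val_mae = mean_absolute_error(rf_model.predict(val_X), val_y)
print("Validation MAE with expanded features: {:,.0f}".format(val_mae))
```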
## Comparison of various implementations of Householder QR factorization We implement the Householder algorithm for orthogonal triangularization in three different ways: - Python and Numpy - Python and Numpy and Numba (-> full jit compilation, no Python) - C++ and xtensor and Python/Numpy bindings (pybind11, xtensor-python) The actual algorithm will be the same in all three cases, only the technology to implement it will be different. ``` import numpy as np import numba # run "pip install --upgrade --force-reinstall --no-deps ." inside the house_cpp directory # to compile the C++ code and create the Python module, which wraps this code. import house_cpp ``` ### Complex-valued test matrices of random shape We generate random matrices. These matrices have more rows than columns and their entries are complex-valued. ``` def generate_test_matrices(num=50): matrices = [] def generate_matrix(m, n): re = np.random.rand(m, n) im = np.random.rand(m, n) return (re + 1j * im) for i in range(num): two_random_ints = np.random.randint(low=2, high=100, size=2) m, n = np.max(two_random_ints), np.min(two_random_ints) matrices.append(generate_matrix(m, n)) return matrices matrices = generate_test_matrices(num=100) ``` ### Testing the implementations Each implementation consists of two methods. The first method computes the triangular matrix R by applying Householder reflections to the matrix. The vectors that describe the reflections are additionly returned in a matrix W. The second method computes an orthogonal matrix Q, given W. ``` def test_implementation(house, formQ, A): m, n = A.shape assert m >= n W, R = house(A) Q = formQ(W) assert np.allclose(A, Q[:, :n].dot(R)) def run_comparison(house, formQ): for matrix in matrices: test_implementation(house, formQ, matrix) ``` ### Python and Numpy implementation ``` def house_numpy(A): """ Computes an implicit representation of a full QR factorization A = QR of an m x n matrix A with m ≥ n using Householder reflections. Returns ------- - lower-triangular matrix W ∈ C m×n whose columns are the vectors v_k defining the successive Householder reflections - triangular matrix R ∈ C n x n """ m, n = A.shape assert m >= n R = np.copy(A).astype(complex) W = np.zeros_like(R, dtype=complex) for k in range(n): v_k = np.copy(R[k:, k]) sgn = np.sign(v_k[0]) if sgn == 0: sgn = 1 v_k[0] += np.exp(1j*np.angle(v_k[0])) * sgn * np.linalg.norm(v_k) v_k /= np.linalg.norm(v_k) W[k:, k] = v_k R[k:, k:] -= 2 * np.outer(v_k, np.dot(v_k.conj().T, R[k:, k:])) # 124 ms #R[k:, k:] -= 2 * np.dot(np.outer(v_k, v_k.conj().T), R[k:, k:]) # slower # 155 ms if m > n: R = np.copy(R[:n,:]) return W, R def formQ_numpy(W): """ generates a corresponding m × m orthogonal matrix Q """ m, n = W.shape Q = np.eye(m, dtype=complex) for i in range(n): for k in range(n-1, -1, -1): v_k = W[k:, k] Q[k:, i] -= 2 * v_k * np.dot(v_k.conjugate(), Q[k:, i]) return Q %timeit run_comparison(house_numpy, formQ_numpy) ``` ### Python and Numpy and Numba implementation ``` @numba.jit(numba.types.UniTuple(numba.complex128[:,:], 2)(numba.complex128[:,:]), nopython=True) def house_numba(A): """ Computes an implicit representation of a full QR factorization A = QR of an m x n matrix A with m ≥ n using Householder reflections. 
    Returns
    -------
    - lower-triangular matrix W ∈ C m×n whose columns are the vectors v_k
      defining the successive Householder reflections
    - upper-triangular matrix R ∈ C n x n
    """
    m, n = A.shape
    assert m >= n

    R = np.copy(A)
    W = np.zeros_like(R, dtype=numba.complex128)

    for k in range(n):
        v_k = np.copy(R[k:, k])
        sgn = np.sign(v_k[0])
        if sgn == 0:
            sgn = 1
        v_k[0] += np.exp(1j*np.angle(v_k[0])) * sgn * np.linalg.norm(v_k)
        v_k /= np.linalg.norm(v_k)
        W[k:, k] = v_k
        R[k:, k:] -= 2 * np.outer(v_k, np.dot(np.conjugate(v_k).T, R[k:, k:]))  # 28 ms
        #R[k:, k:] -= 2 * np.dot(np.outer(v_k, np.conjugate(v_k).T), R[k:, k:])  # slower 31.5 ms

    if m > n:
        R = np.copy(R[:n,:])

    return W, R

@numba.jit(numba.complex128[:,:](numba.complex128[:,:]), nopython=True)
def formQ_numba(W):
    """
    Generates a corresponding m × m orthogonal matrix Q.
    """
    m, n = W.shape

    # np.eye(m, dtype=complex128) does not work
    Q = np.zeros((m, m), dtype=numba.complex128)
    for i in range(m):
        Q[i, i] = 1

    for i in range(n):
        for k in range(n-1, -1, -1):
            v_k = W[k:, k]
            Q[k:, i] -= 2 * v_k * np.dot(np.conjugate(v_k), Q[k:, i])
    return Q

%timeit run_comparison(house_numba, formQ_numba)
```

### C++ and xtensor implementation

The C++ source code is in ./house_cpp/src/main.cpp

```
# as of now only the formQ method of the C++ module works,
# so we compute the W matrices with the Numba implementation
def compute_Ws():
    Ws = []
    for i in range(len(matrices)):
        W, R = house_numba(matrices[i])
        Ws.append(W)
    return Ws

Ws = compute_Ws()

def compute_Qs(formQ):
    for W in Ws:
        _ = formQ(W)

%timeit compute_Qs(formQ_numba)

# yeah my first dive into C++ got awarded. 23% of runtime saved
%timeit compute_Qs(house_cpp.formQ)

W1, R1 = house_cpp.house(matrices[1])

house_cpp.test(np.array([[+1,4,3, 8]]))
```
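
As an extra sanity check (not part of the original timing comparison), we can verify a single factorization against NumPy's built-in QR routine. This is only a sketch; it reuses `matrices`, `house_numba` and `formQ_numba` from the cells above, and compares reconstructions of A rather than Q and R directly, since Q and R are only unique up to phase factors.

```
# Extra check: compare our Householder factorization of one test matrix with numpy.linalg.qr.
A = matrices[1]
m, n = A.shape

W, R = house_numba(A)
Q = formQ_numba(W)
Q_np, R_np = np.linalg.qr(A)

print(np.allclose(A, Q[:, :n].dot(R)))                          # our QR reproduces A
print(np.allclose(A, Q_np.dot(R_np)))                           # NumPy's QR reproduces A
print(np.allclose(Q[:, :n].conj().T.dot(Q[:, :n]), np.eye(n)))  # first n columns of Q are orthonormal
```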
# Estimators, Bias and Variance

**Learning Objectives:** Learn about estimators, bias and variance.

## Imports

```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```

## 1 Estimators

Remember the core diagram for inference in modelling:

**Model** $+$ **Observed Data** + **Training** $\rightarrow$ **Best Parameters**

Let's develop a more concrete mathematical language for these terms. This follows the notation in Deep Learning (Goodfellow et al., 2016).

For the **observed data**, let $\{ \mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \ldots, \mathbf{x}^{(m)} \}$ be a set of $m$ independent observations. Each observation $\mathbf{x}^{(i)}$ can be a vector of variables. You can think of these as the $m$ rows of a tidy `DataFrame`.

Next assume that the **model** has a vector of parameters $\mathbf{\theta}$. Our goal is to use the data to determine the "best" estimate for this vector of parameters. Denote the best estimate as $\hat{\mathbf{\theta}}$.

A **point estimator** or **statistic** is any function of the data:

$$ \hat{\mathbf{\theta}}_m = g \left( \mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \ldots, \mathbf{x}^{(m)} \right)$$

To see how this works, let's use the example of the normal distribution. Let's generate data points with a known mean $\mu$ and variance $\sigma^2$:

```
mu = 5.0
var = 0.5
theta = [mu, var]
data = np.random.normal(theta[0], np.sqrt(theta[1]), 50)
```

Notice how we are grouping the mean $\mu$ and variance $\sigma^2$ into a vector of parameters $\mathbf{\theta}$. First, let's look at the data with a histogram:

```
plt.hist(data, bins=20);
```

Notice that, with so few observations, our estimated $\hat{\mathbf{\theta}}$ is not going to match the true values; we just don't have that much information about the true distribution. We are now going to treat this data as our observed data and use an estimator to find an estimate for these parameters *from the data alone*.

One possible estimator for $\mu$ is the sample mean,

$$ \hat{\mu} = \frac{1}{m} \sum_{i=1}^m x_i, $$

which we can compute as follows:

```
mu_hat = data.mean()
mu_hat
```

One possible estimator for $\sigma^2$ is the population variance,

$$ \hat{\sigma}^2 = \frac{1}{m} \sum_{i=1}^m \left( x_i - \hat{\mu} \right)^2, $$

which we can compute as follows:

```
var_hat = data.var()
var_hat
```

Then our estimate for $\hat{\mathbf{\theta}}$ is:

```
theta_hat = [mu_hat, var_hat]
theta_hat
```

Compare this to the true value:

```
theta
```

To emphasize the choice and flexibility that we have in picking estimators, consider the alternative estimator for $\mu$: let $\hat{\mu}$ be the smallest value in the data set:

```
data.min()
```

In this case, this estimator isn't that much worse than using the mean, but we would expect it to be much worse much of the time.

There are a number of questions that need to be answered about this process:

* **Are the estimators that we used the best ones?** A closely related question is "how difficult is it to find a good estimator?" In some cases it is very easy to find a good estimator. In other cases, it is very difficult.
* **How does the quality of our estimates improve with more data?** In general, we expect more data to give better estimates, but that is not necessarily the case.
* **Have we used the right model in the first place?** In our example above, we knew precisely which probability distribution to use as our model (the normal distribution), as we ourselves generated the data using that very model. However, in most cases, we have no idea what the actual model is.
We may find excellent estimators for our model, but if we have chosen the wrong model, it won't matter much. ## 2 Bias One measure of how good our estimators are is the idea of bias. Bias can be formalized using the language of expectation values. In this case, we will use data to illustrate how this works. Take the above estimator for the variance of the normal distribution: $$ \hat{\sigma}^2 = \frac{1}{m} \sum_{i=1}^m \left( x_i - \hat{\mu} \right)^2 $$ I claim that this estimator is **biased** and that a better one exists. We will need to define what we mean by **bias**. Furthermore, I claim that the following estimator is **unbiased**: $$ \hat{\sigma}^2 = \frac{1}{m-1} \sum_{i=1}^m \left( x_i - \hat{\mu} \right)^2 $$ Notice the simple replacement $m \rightarrow m-1$. The number $1$ here is called the **degrees of freedom** and the correction is known as [Bessel's correction](https://en.wikipedia.org/wiki/Bessel%27s_correction). To see the difference between these two estimators, let's compute their bias. The **bias** of an estimator is defined as: $$ Bias[\hat{\theta}] = E[\hat{\theta}] - \theta $$ In general, computing this requires performing integrals over the true probability distribution, treating the estimator as a random variable. In our case, we are going to estimate the bias of the estimators using simulation. We will use the following process: 1. Treat the estimator $\hat{\theta}_m$ as a random variable by drawing $m$ observations from the model with the true parameters $\theta$ and compute the estimator for those observations. 2. Repeat this process to get a distribution of values for $\hat{\theta}_m$. 3. Then take the mean of those values ($E[\hat{\theta}]$) and subtract the true value of the parameter: $E[\hat{\theta}] - \theta$. Here are two functions that return a random variable for the biased and unbiased estimators above (step 1): ``` def biased_var(theta, observations): data = np.random.normal(theta[0], np.sqrt(theta[1]), size=observations) return data.var() def unbiased_var(theta, observations): data = np.random.normal(theta[0], np.sqrt(theta[1]), size=observations) return data.var(ddof=1) ``` Here is the number of observations we are going to use: ``` observations = 5 ``` To convince ourselves that the estimators can be treated as a random variable, let's call one multiple times: ``` for i in range(10): print(biased_var(theta, observations)) ``` Now let's create a vector of those random variables, for both the biased and unbiased estimator (step 2): ``` var_dist1 = np.array([biased_var(theta, observations) for i in range(1000)]) var_dist2 = np.array([unbiased_var(theta, observations) for i in range(1000)]) ``` Finally, compute the mean and subtract the true value (step 3): ``` bias1 = var_dist1.mean() - theta[1] bias2 = var_dist2.mean() - theta[1] print("Bias of biased estimator: {}".format(bias1)) print("Bias of unbiased estimator: {}".format(bias2)) ``` Now we can see why the unbiased estimator is called unbiased; it has a smaller bias. To see that the bias of the unbiased estimator is exactly zero, you need to compute the bias analytically by performing integrals to compute the expectation value of the estimator with respect to the model's probability distribution. The other thing to explore is how the bias of an estimator changes with the number of observations. This gives us a sense of how an estimator will work on a small or large dataset. 
``` np.random.seed(0) m = np.arange(5, 101, 1, dtype=int) biased_data = np.empty_like(m, dtype=float) unbiased_data = np.empty_like(m, dtype=float) for i in range(len(m)): biased_data[i] = np.array([biased_var(theta, m[i]) for p in range(500)]).mean() unbiased_data[i] = np.array([unbiased_var(theta, m[i]) for p in range(500)]).mean() ``` Let's visualize the results by plotting the biased and unbiased variance estimates versus the number of observations $m$. We also show the true value with a horizontal grey line: ``` plt.plot(m, biased_data, label='Biased') plt.plot(m, unbiased_data, label='Unbiased') plt.hlines(theta[1], 0, 100, color='grey', alpha=0.8, label="True value") plt.ylabel('Variance estimate') plt.xlabel('Number of observations (m)') plt.title('Biased/Unbiased Estimators: Variance of the Normal Dist.') plt.legend(); ``` This is quite striking: * The unbiased estimate is reasonably close to the true value, even for very small numbers of observations. Thus, with the unbiased estimator, it is possible to get a good estimate of the true value, even with few observations. * At small numbers of observations, the biased estimator is *biased* away from the true value towards much smaller values. Thus, it will not give us a very good estimate of the true value. We may have a good model (the normal distribution), but our estimate of its variance will be far from the true value. * As the number of observations goes up, the biased estimate converges on the unbiased one and the true value. An estimator that converges to the true value as the number of observations increases is called **consistent**. This example again emphasizes the following fundamental truths about modelling: 1. First pick a good model. 2. Then pick good estimators for its parameters. Both of these steps can be *extremely* difficult, or even impossible! State of the art models have as many as a billion unknown parameters, and the best estimators take hundreds of years of CPU time to run. ## 3 Variance One might hope that bias is the only thing you need to be concerned about when picking an estimator. That is not the case. There is another property of an estimator, called the variance, that also needs to be considered. Note that there are two variances here: * The estimator for the variance * The variance of that estimator To see where the variance of the estimator comes in, let's think back to when we generated an entire distribution of estimators (remember, we treated them as random variables above). 
Let's look at the distribution of estimator values for the biased and unbiased estimators: ``` fig, ax = plt.subplots(2, 1, sharex=True) ax[0].hist(var_dist1, bins=20, density=True) ax[0].set_title('Biased') ax[1].hist(var_dist2, bins=20, density=True) ax[1].set_title('Unbiased') ax[1].set_xlabel('Estimated Variance') plt.tight_layout(); ``` In computing the bias of these estimators, we took the mean of these distributions: ``` var_dist1.mean(), var_dist2.mean() ``` The **variances** of the estimators are just the variances of these distributions: ``` var_dist1.var(), var_dist2.var() ``` Here is the surprise: **The estimator with the smaller bias has a larger variance.** Using basic properties of expectation values, it is possible to show that the [mean-squared-error](https://en.wikipedia.org/wiki/Mean_squared_error) (MSE) of an estimator is equal to the sum of the following two terms: $$ MSE[\hat{\theta}] = Bias[\hat{\theta}]^2 + Var[\hat{\theta}] $$ Thus, if you want to minimize the error in an estimator, you have to deal with both bias and variance. Note that this decomposition applies to each estimator separately; neither the bias nor the variance alone tells you which of two estimators is better. Here is a summary of the take-home: > The overall error of an estimator has contributions from both the bias and variance. Thus, using an unbiased estimator is not necessarily optimal. Amazingly, the estimator for the $\sigma^2$ parameter of the normal distribution with the smallest MSE can be shown to be: $$ \hat{\sigma}^2 = \frac{1}{m+1} \sum_{i=1}^m \left( x_i - \hat{\mu} \right)^2 $$ This estimator has even more bias than the original one we considered. Its smaller MSE is possible because of its lower variance. This pattern is known as the "bias/variance tradeoff" and is one of the most important ideas in modelling and machine learning.
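As a quick check of this claim, here is a minimal simulation sketch that estimates the bias, variance, and MSE of the three variance estimators, dividing by $m$, $m-1$, and $m+1$, for the same true parameters as above. The sample size `m = 5`, the trial count, and the helper name `ss` are arbitrary choices introduced here for illustration.

```
import numpy as np

rng = np.random.default_rng(0)
mu, var = 5.0, 0.5          # true parameters, as above
m = 5                       # small sample size, where the differences are largest
n_trials = 100_000

# Draw many samples of size m; ss is the sum of squared deviations from the sample mean.
samples = rng.normal(mu, np.sqrt(var), size=(n_trials, m))
ss = ((samples - samples.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

for label, denom in [('1/m', m), ('1/(m-1)', m - 1), ('1/(m+1)', m + 1)]:
    est = ss / denom
    bias = est.mean() - var
    variance = est.var()
    mse = ((est - var) ** 2).mean()   # equals bias**2 + variance
    print(f'{label:8s}  bias={bias:+.4f}  var={variance:.4f}  MSE={mse:.4f}')
```

With these settings, the $m+1$ estimator should come out with the largest bias but the smallest MSE, which is exactly the point of the bias/variance tradeoff.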
``` ls %cd !git clone https://github.com/tensorflow/models.git ``` # XCEPTION INITIAL MODEL - OPTION 1 ``` %cd models/research/deeplab/ !sh ./local_test.sh model_dir = '/content/models/research/deeplab/datasets/pascal_voc_seg/exp/train_on_trainval_set/export/' ``` # MOBILE INITIAL MODEL - OPTION 2 ``` %cd models/research/deeplab/ !sh ./local_test_mobilenetv2.sh model_dir = '/content/models/research/deeplab/datasets/pascal_voc_seg/exp/train_on_trainval_set_mobilenetv2/export/' ``` # RUN INFERENCE ``` import numpy as np import tensorflow as tf from matplotlib import pyplot as plt from matplotlib import gridspec class DeepLabModel(object): """Class to load deeplab model and run inference.""" INPUT_TENSOR_NAME = 'ImageTensor:0' OUTPUT_TENSOR_NAME = 'SemanticPredictions:0' INPUT_SIZE = 513 FROZEN_GRAPH_NAME = 'frozen_inference_graph' def __init__(self, tarball_path): """Creates and loads pretrained deeplab model.""" self.graph = tf.Graph() graph_def = None # Extract frozen graph from tar archive. tar_file = tarfile.open(tarball_path) for tar_info in tar_file.getmembers(): if self.FROZEN_GRAPH_NAME in os.path.basename(tar_info.name): file_handle = tar_file.extractfile(tar_info) graph_def = tf.GraphDef.FromString(file_handle.read()) break tar_file.close() if graph_def is None: raise RuntimeError('Cannot find inference graph in tar archive.') with self.graph.as_default(): tf.import_graph_def(graph_def, name='') self.sess = tf.Session(graph=self.graph) def run(self, image): """Runs inference on a single image. Args: image: A PIL.Image object, raw input image. Returns: resized_image: RGB image resized from original input image. seg_map: Segmentation map of `resized_image`. """ width, height = image.size resize_ratio = 1.0 * self.INPUT_SIZE / max(width, height) target_size = (int(resize_ratio * width), int(resize_ratio * height)) resized_image = image.convert('RGB').resize(target_size, Image.ANTIALIAS) batch_seg_map = self.sess.run( self.OUTPUT_TENSOR_NAME, feed_dict={self.INPUT_TENSOR_NAME: [np.asarray(resized_image)]}) seg_map = batch_seg_map[0] return resized_image, seg_map def create_pascal_label_colormap(): """Creates a label colormap used in PASCAL VOC segmentation benchmark. Returns: A Colormap for visualizing segmentation results. """ colormap = np.zeros((256, 3), dtype=int) ind = np.arange(256, dtype=int) for shift in reversed(range(8)): for channel in range(3): colormap[:, channel] |= ((ind >> channel) & 1) << shift ind >>= 3 return colormap def label_to_color_image(label): """Adds color defined by the dataset colormap to the label. Args: label: A 2D array with integer type, storing the segmentation label. Returns: result: A 2D array with floating type. The element of the array is the color indexed by the corresponding element in the input label to the PASCAL color map. Raises: ValueError: If label is not of rank 2 or its value is larger than color map maximum entry. 
""" if label.ndim != 2: raise ValueError('Expect 2-D input label') colormap = create_pascal_label_colormap() if np.max(label) >= len(colormap): raise ValueError('label value too large.') return colormap[label] def vis_segmentation(image, seg_map): """Visualizes input image, segmentation map and overlay view.""" plt.figure(figsize=(15, 5)) grid_spec = gridspec.GridSpec(1, 4, width_ratios=[6, 6, 6, 1]) plt.subplot(grid_spec[0]) plt.imshow(image) plt.axis('off') plt.title('input image') plt.subplot(grid_spec[1]) seg_image = label_to_color_image(seg_map).astype(np.uint8) plt.imshow(seg_image) plt.axis('off') plt.title('segmentation map') plt.subplot(grid_spec[2]) plt.imshow(image) plt.imshow(seg_image, alpha=0.7) plt.axis('off') plt.title('segmentation overlay') unique_labels = np.unique(seg_map) ax = plt.subplot(grid_spec[3]) plt.imshow( FULL_COLOR_MAP[unique_labels].astype(np.uint8), interpolation='nearest') ax.yaxis.tick_right() plt.yticks(range(len(unique_labels)), LABEL_NAMES[unique_labels]) plt.xticks([], []) ax.tick_params(width=0.0) plt.grid('off') plt.show() LABEL_NAMES = np.asarray([ 'background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tv' ]) FULL_LABEL_MAP = np.arange(len(LABEL_NAMES)).reshape(len(LABEL_NAMES), 1) FULL_COLOR_MAP = label_to_color_image(FULL_LABEL_MAP) import os import tarfile _MODEL_NAME = 'frozen_inference_graph.pb' _TARBALL_NAME = 'deeplab_model.tar.gz' model_path = os.path.join(model_dir, _MODEL_NAME) download_path = os.path.join(model_dir, _TARBALL_NAME) with tarfile.open(download_path, "w:gz") as tar: tar.add(model_path) MODEL = DeepLabModel(download_path) print('model loaded successfully!') from google.colab import files from os import path from PIL import Image uploaded = files.upload() for name, data in uploaded.items(): with open('img.jpg', 'wb') as f: f.write(data) f.close() print('saved file ' + name) im = Image.open(name) resized_im, seg_map = MODEL.run(im) vis_segmentation(resized_im, seg_map) ```
## Coin change ``` def rendu_monnaie_dyna(montant, systeme): pieces_min = [ [float('inf')] * (montant + 1) for j in range(len(systeme))] for i in range(len(systeme)): pieces_min[i][0] = 0 for j in range(1, montant + 1): if j % systeme[0] == 0: pieces_min[0][j] = j // systeme[0] for i in range(1, len(systeme)): if systeme[i] <= j: pieces_min[i][j] = min(pieces_min[i-1][j], 1 + pieces_min[i][j-systeme[i]]) else: pieces_min[i][j] = pieces_min[i-1][j] return (pieces_min, pieces_min[-1][montant]) rendu_monnaie_dyna(263, [1,2,5,10,20,50,100,200]) def liste_rendu_monnaie_dyna(montant, systeme): pieces_min, _ = rendu_monnaie_dyna(montant, systeme) liste_rendu = [] reste = montant (i, j) = (len(systeme) - 1, montant) while reste > 0: # take the largest coin if j-systeme[i] >= 0 and 1 + pieces_min[i][j-systeme[i]] < pieces_min[i-1][j]: (i, j) = (i, j-systeme[i]) reste = reste - systeme[i] liste_rendu.append(systeme[i]) # do not take the largest coin elif i > 0: (i, j) = (i - 1, j) # no smaller coin is left, so we have to take this one else: reste = reste - pieces_min[0][j] liste_rendu.append(pieces_min[0][j]) return liste_rendu liste_rendu_monnaie_dyna(263, [1,2,5,10,20,50,100,200]) def rendu_monnaie_dyna_bfs(montant, systeme): file = [montant] nbpieces = 0 while len(file) > 0: montant = file.pop() nbpieces += 1 deja_calcule = [False] * (montant + 1) for i in range(0, len(systeme)): if systeme[i] <= montant: reste = montant - systeme[i] if not deja_calcule[reste]: file.append(reste) deja_calcule[reste] = True if reste == 0: return nbpieces return float('inf') # no change possible rendu_monnaie_dyna_bfs(263, [1,2,5,10,20,50,100,200]) def liste_rendu_monnaie_dyna_bfs(montant, systeme): file = [montant] nbpieces = 0 memo_liste = {m : [] for m in range(montant + 1)} while len(file) > 0: montant = file.pop() nbpieces += 1 deja_calcule = [False] * (montant + 1) for i in range(0, len(systeme)): if systeme[i] <= montant: reste = montant - systeme[i] if not deja_calcule[reste]: file.append(reste) deja_calcule[reste] = True memo_liste[reste] = memo_liste[montant] + [systeme[i]] if reste == 0: return (nbpieces, memo_liste[0]) return (float('inf'), []) # no change possible liste_rendu_monnaie_dyna_bfs(263, [1,2,5,10,20,50,100,200]) ``` ## The knapsack problem ``` def sac_dos_dyna(capacite_sac, objets): max_val = [ [0] * (capacite_sac + 1) for j in range(len(objets) + 1)] optimal = [ [False] * (capacite_sac + 1) for j in range(len(objets) + 1)] for i in range(len(objets)): max_val[i][0] = 0 for capacite in range(1, capacite_sac + 1): for i in range(len(objets)): if objets[i][2] <= capacite: # take the object at index i (row i + 1 in max_val) if max_val[i][capacite - objets[i][2]] + objets[i][1] > max_val[i][capacite]: max_val[i + 1][capacite] = max_val[i][capacite - objets[i][2]] + objets[i][1] optimal[i + 1][capacite] = True else: max_val[i + 1][capacite] = max_val[i][capacite] return (max_val, optimal) sac_dos_dyna(6, [['A', 6, 5], ['B', 3, 2], ['C', 3, 2], ['D', 3, 2], ['E',1,1]] ) def liste_objets_sac_dos_dyna(capacite_sac, objets): max_val, optimal = sac_dos_dyna(capacite_sac, objets) index_objet = len(objets) capacite_restante = capacite_sac choix = [] while index_objet > 0: if optimal[index_objet][capacite_restante]: if max_val[index_objet - 1][capacite_restante - objets[index_objet - 1][2]] + objets[index_objet - 1][1] > max_val[index_objet - 1][capacite_restante]: choix.append( objets[index_objet - 1][0]) (capacite_restante, index_objet) = (capacite_restante - 
objets[index_objet - 1][2], index_objet - 1) else: index_objet -= 1 return choix liste_objets_sac_dos_dyna(6, [['A', 6, 5], ['B', 3, 2], ['C', 3, 2], ['D', 3, 2], ['E',1,1]] ) ```
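For reference, here is a compact statement of the two recurrences the tables above implement, writing $T$ for `pieces_min` with coin values $c_i$, and $V$ for `max_val` with object values $v_i$ and weights $w_i$ (these symbols are just shorthand introduced here, not part of the code):

$$ T[i][j] = \begin{cases} T[i-1][j] & \text{if } c_i > j \\ \min\left( T[i-1][j],\; 1 + T[i][j - c_i] \right) & \text{if } c_i \le j \end{cases} $$

$$ V[i+1][c] = \begin{cases} V[i][c] & \text{if } w_i > c \\ \max\left( V[i][c],\; v_i + V[i][c - w_i] \right) & \text{if } w_i \le c \end{cases} $$

In both tables an entry depends only on the previous row and on earlier entries of the current row, which is what makes the bottom-up fill valid.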
# Vehicle silhouettes ## Objective To classify a given silhouette as one of four types of vehicle, using a set of features extracted from the silhouette. The vehicle may be viewed from one of many different angles. ## Description The features were extracted from the silhouettes by the HIPS (Hierarchical Image Processing System) extension BINATTS, which extracts a combination of scale-independent features utilising both classical moment-based measures such as scaled variance, skewness and kurtosis about the major/minor axes and heuristic measures such as hollows, circularity, rectangularity and compactness. Four "Corgie" model vehicles were used for the experiment: a double decker bus, Chevrolet van, Saab 9000 and an Opel Manta 400. This particular combination of vehicles was chosen with the expectation that the bus, van and either one of the cars would be readily distinguishable, but it would be more difficult to distinguish between the cars. ## Source: https://www.kaggle.com/rajansharma780/vehicle ## ATTRIBUTES 1. compactness float average perimeter**2/area 2. circularity float average radius**2/area 3. distance_circularity float area/(av.distance from border)**2 4. radius_ratio float (max.rad-min.rad)/av.radius 5. pr_axis_aspect_ratio float (minor axis)/(major axis) 6. max_length_aspect_ratio float (length perp. max length)/(max length) 7. scatter_ratio float (inertia about minor axis)/(inertia about major axis) 8. elongatedness float area/(shrink width)**2 9. pr_axis_rectangularity float area/(pr.axis length*pr.axis width) 10. max_length_rectangularity float area/(max.length*length perp. to this) 11. scaled_variance_major_axis float (2nd order moment about minor axis)/area 12. scaled_variance_minor_axis float (2nd order moment about major axis)/area 13. scaled_radius_gyration float (mavar+mivar)/area 14. skewness_major_axis float (3rd order moment about major axis)/sigma_min**3 15. skewness_minor_axis float (3rd order moment about minor axis)/sigma_maj**3 16. kurtosis_minor_axis float (4th order moment about major axis)/sigma_min**4 17. kurtosis_major_axis float (4th order moment about minor axis)/sigma_maj**4 18. hollows_ratio float (area of hollows)/(area of bounding polygon) ## Target variable 19. vehicle_class string Predictor Class. Values: Opel, Saab, Bus, Van # Tasks: 1. Obtain the multi-class dataset from the given link 2. Load the dataset 3. Apply pre-processing techniques: Encoding, Scaling 4. Divide the dataset into training (70%) and testing (30%) 5. Build your own random forest model from scratch (using the individual decision tree model from sklearn) 6. Train the random forest model 7. Test the random forest model 8. Train and test the random forest model using sklearn. 9. Compare the performance of both models ## Useful links: https://machinelearningmastery.com/implement-random-forest-scratch-python/ https://towardsdatascience.com/random-forests-and-decision-trees-from-scratch-in-python-3e4fa5ae4249 https://www.analyticsvidhya.com/blog/2018/12/building-a-random-forest-from-scratch-understanding-real-world-data-products-ml-for-programmers-part-3/ # Part 1: Random Forest from scratch Random forests are an ensemble learning method for classification and regression that operate by constructing multiple decision trees at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees. 
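Before the step-by-step implementation below, here is a minimal sketch of that idea: bootstrap the rows, subsample the features, fit one sklearn `DecisionTreeClassifier` per tree, and predict by majority vote. The helper names `fit_forest`/`predict_forest` are made up for illustration, and the sketch assumes `X` and `y` are NumPy arrays with non-negative integer class labels; it is not the implementation the tasks ask you to build (the notebook's own version below keeps all rows and varies only the feature subset, with one-hot targets).

```
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_forest(X, y, n_trees=25, max_features=6, random_state=0):
    """Toy random forest: each tree sees a bootstrap sample and a random feature subset."""
    rng = np.random.default_rng(random_state)
    forest = []
    for _ in range(n_trees):
        rows = rng.integers(0, len(X), size=len(X))                       # bootstrap sample (with replacement)
        cols = rng.choice(X.shape[1], size=max_features, replace=False)   # random feature subset
        tree = DecisionTreeClassifier(random_state=0).fit(X[rows][:, cols], y[rows])
        forest.append((tree, cols))
    return forest

def predict_forest(forest, X):
    """Majority vote: the predicted class is the mode of the per-tree predictions."""
    votes = np.stack([tree.predict(X[:, cols]) for tree, cols in forest])  # shape (n_trees, n_samples)
    return np.apply_along_axis(lambda v: np.bincount(v.astype(int)).argmax(), 0, votes)

# Usage (hypothetical names): forest = fit_forest(features, labels); preds = predict_forest(forest, new_features)
```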
``` # Load the libraries import numpy as np import pandas as pd from sklearn.ensemble import RandomForestClassifier from sklearn.preprocessing import MinMaxScaler from sklearn.model_selection import train_test_split from sklearn.tree import DecisionTreeClassifier from sklearn.metrics import accuracy_score,classification_report import matplotlib.pyplot as plt from random import sample # Load the dataset df=pd.read_csv("vehicle.csv") df.info() df.head() from sklearn.impute import SimpleImputer imp=SimpleImputer(missing_values=np.nan,strategy='mean') df[list(df.isnull().any()[df.isnull().any()==True].index)]=imp.fit_transform( df[list(df.isnull().any()[df.isnull().any()==True].index)]) # Preprocessing # Encoding categorical variables (if any) # Feature Scaling # Filling missing values (if any) scaler=MinMaxScaler() df.iloc[:,0:17]=scaler.fit_transform(df.iloc[:,0:17]) dum=pd.get_dummies(df['class']) df.drop('class',axis=1,inplace=True) df=pd.concat([df,dum],axis=1) df.head() X=df.drop(columns=['bus','car','van']) y=df[['bus','car','van']] # Divide the dataset into training and testing sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) # Randomly choose the features from training set and build decision tree # Randomness in the features will help us to achieve different DTrees every time # You can keep minimum number of random features every time so that trees will have sufficient features # Note: You can use builtin function for DT training using Sklearn # (X_train.iloc[:,columns] selects the sampled feature columns; 'columns' is defined in the loop below) # Train N number of decision trees using random feature selection strategy # Number of trees N can be user input trees=[] cols=[] for i in range(100): N_columns = list(np.random.choice(range(X.shape[1]),1)+1) columns = list(np.random.choice(range(X.shape[1]), N_columns, replace=False)) cols.append(columns) model=DecisionTreeClassifier(random_state=42) print("Model "+str(i+1)+" fitting ....") print("List of Columns - ",columns) model.fit(X_train.iloc[:,columns],y_train) trees.append(model) print() for i,j in zip(trees,cols): print(i.score(X_test.iloc[:,j],y_test)) scores=[] for i,j in zip(trees,cols): scores.append(accuracy_score(i.predict(X_test.iloc[:,j]),y_test)) scores=np.array(scores) scores[0:4] # Apply different voting mechanisms such as # max voting/average voting/weighted average voting (using accuracy as weightage) # Perform the ensembling for the training set. 
``` ### Max Voting ``` maxvotmod=trees[np.argmax(scores)] print("Training Score -",maxvotmod.score(X_train.iloc[:,cols[np.argmax(scores)]],y_train)) print("Testing Score -",maxvotmod.score(X_test.iloc[:,cols[np.argmax(scores)]],y_test)) maxvottrain=maxvotmod.predict(X_train.iloc[:,cols[np.argmax(scores)]]) maxvottest=maxvotmod.predict(X_test.iloc[:,cols[np.argmax(scores)]]) ``` ### Average Voting ``` avgvottrain=np.mean([trees[i].predict(X_train.iloc[:,cols[i]]) for i in range(len(trees))],axis=0).round() avgvottest=np.mean([trees[i].predict(X_test.iloc[:,cols[i]]) for i in range(len(trees))],axis=0).round() print("Training Accuracy -",accuracy_score(y_train,avgvottrain)) print("Testing Accuracy -",accuracy_score(y_test,avgvottest)) ``` ### Weighted Average Voting ``` avgwevottrain=np.mean([accuracy_score(y_train,trees[i].predict(X_train.iloc[:,cols[i]]))*trees[i].predict(X_train.iloc[:,cols[i]]) for i in range(len(trees))],axis=0).round() avgwevottest=np.mean([accuracy_score(y_test,trees[i].predict(X_test.iloc[:,cols[i]]))*trees[i].predict(X_test.iloc[:,cols[i]]) for i in range(len(trees))],axis=0).round() print("Training Accuracy -",accuracy_score(y_train,avgwevottrain)) print("Testing Accuracy -",accuracy_score(y_test,avgwevottest)) # Apply individual trees trained on the testing set # Note: You should've saved the feature sets used for training individual trees, # so that the same features can be chosen in the testing set # Get predictions on testing set # Evaluate the results using accuracy, precision, recall and f-measure # Compare different voting mechanisms and their accuracies print("------------------------Max Voting------------------------") print(classification_report(y_test,maxvottest)) print() print("------------------------Average Voting------------------------") print(classification_report(y_test,avgvottest)) print() print("------------------------Weighted Average Voting------------------------") print(classification_report(y_test,avgwevottest)) print() # Compare the Random forest models with different number of trees N trainaccs=[] testaccs=[] n=300 def rfc(n=100,min_fea=None): trees=[] cols=[] for i in range(1,n+1): if min_fea is not None: columns = list(np.random.choice(range(X.shape[1]), min_fea, replace=False)) else: N_columns = list(np.random.choice(range(X.shape[1]),1)+1) columns = list(np.random.choice(range(X.shape[1]), N_columns, replace=False)) cols.append(columns) model=DecisionTreeClassifier(random_state=42) model.fit(X_train.iloc[:,columns],y_train) trees.append(model) scores=[] for i,j in zip(trees,cols): scores.append(accuracy_score(i.predict(X_test.iloc[:,j]),y_test)) scores=np.array(scores) maxvotmod=trees[np.argmax(scores)] maxvottrain=maxvotmod.predict(X_train.iloc[:,cols[np.argmax(scores)]]) maxvottest=maxvotmod.predict(X_test.iloc[:,cols[np.argmax(scores)]]) return({'test':maxvottest,'train':maxvottrain}) for i in range(1,n+1): print("Training "+str(i)+" trees") accs=rfc(i) trainaccs.append(accs['train']) testaccs.append(accs['test']) testacc=[] trainacc=[] for i in range(n): testacc.append(accuracy_score(y_test,testaccs[i])) trainacc.append(accuracy_score(y_train,trainaccs[i])) np.array(trainacc).shape plt.style.use('seaborn') plt.plot([i + 1 for i in range(n)],testacc,label="Testing Accuracy") plt.plot([i + 1 for i in range(n)],trainacc,label="Training Accuracy") plt.ylabel('Accuracies') plt.xlabel('Number of Estimators') plt.legend() accs=rfc(5) accs['train'].shape testacc=[] trainacc=[] for i in range(1,19): accs=rfc(100,min_fea=i) 
testacc.append(accuracy_score(y_test,accs['test'])) trainacc.append(accuracy_score(y_train,accs['train'])) print("Trained model with minimum "+str(i)+" features") plt.style.use('seaborn') plt.plot([i for i in range(1,19)],testacc,label="Testing Accuracy") plt.plot([i for i in range(1,19)],trainacc,label="Training Accuracy") plt.ylabel('Accuracies') plt.xlabel('Minimum number of Features') plt.legend() ``` ## Part 2: Random Forest using Sklearn ``` # Use the preprocessed dataset here # Train the Random Forest model using the built-in sklearn implementation model=RandomForestClassifier(verbose=3) model.fit(X_train,y_train) # Test the model with the testing set and print the accuracy, precision, recall and f-measure y_pred=model.predict(X_test) print(classification_report(y_test,y_pred)) # Play with parameters such as # number of decision trees # Criterion for splitting # Max depth # Minimum samples per split and leaf model=RandomForestClassifier(criterion='entropy') model.fit(X_train,y_train) print(model.score(X_train,y_train)) print(model.score(X_test,y_test)) model=RandomForestClassifier(criterion='gini') model.fit(X_train,y_train) print(model.score(X_train,y_train)) print(model.score(X_test,y_test)) model=RandomForestClassifier(n_estimators=200,max_depth=18) model.fit(X_train,y_train) print(model.score(X_train,y_train)) print(model.score(X_test,y_test)) model=RandomForestClassifier(n_estimators=200,min_samples_split=6,min_samples_leaf=3) model.fit(X_train,y_train) print(model.score(X_train,y_train)) print(model.score(X_test,y_test)) ```
# Pulse gates Most quantum algorithms can be described with circuit operations alone. When we need more control over the low-level implementation of our program, we can use _pulse gates_. Pulse gates remove the constraint of executing circuits with basis gates only, and also allow you to override the default implementation of any basis gate. Pulse gates allow you to map a logical circuit gate (e.g., `X`) to a Qiskit Pulse program, called a `Schedule`. This mapping is referred to as a _calibration_. A high fidelity calibration is one which faithfully implements the logical operation it is mapped from (e.g., whether the `X` gate calibration drives $|0\rangle$ to $|1\rangle$, etc.). A schedule specifies the exact time dynamics of the input signals across all input _channels_ to the device. There are usually multiple channels per qubit, such as drive and measure. This interface is more powerful, and requires a deeper understanding of the underlying device physics. It's important to note that Pulse programs operate on physical qubits. A drive pulse on qubit $a$ will not enact the same logical operation on the state of qubit $b$ -- in other words, gate calibrations are not interchangeable across qubits. This is in contrast to the circuit level, where an `X` gate is defined independent of its qubit operand. This page shows you how to add a calibration to your circuit. **Note:** To execute a program with pulse gates, the backend has to be enabled with OpenPulse. You can check via ``backend.configuration().open_pulse``, which is ``True`` when OpenPulse is enabled. If it is enabled and the pulse gates feature is not enabled, you can [schedule](07_pulse_scheduler.ipynb) your input circuit. ## Build your circuit Let's start with a very simple example, a Bell state circuit. ``` from qiskit import QuantumCircuit circ = QuantumCircuit(2, 2) circ.h(0) circ.cx(0, 1) circ.measure(0, 0) circ.measure(1, 1) circ.draw('mpl') ``` ## Build your calibrations Now that we have our circuit, let's define a calibration for the Hadamard gate on qubit 0. In practice, the pulse shape and its parameters would be optimized through a series of Rabi experiments (see the [Qiskit Textbook](https://qiskit.org/textbook/ch-quantum-hardware/calibrating-qubits-openpulse.html) for a walkthrough). For this demonstration, our Hadamard will be a Gaussian pulse. We will _play_ our pulse on the _drive_ channel of qubit 0. Don't worry too much about the details of building the calibration itself; you can learn all about this on the following page: [building pulse schedules](06_building_pulse_schedules.ipynb). ``` from qiskit import pulse with pulse.build(name='hadamard') as h_q0: pulse.play(pulse.library.Gaussian(duration=128, amp=0.1, sigma=16), pulse.DriveChannel(0)) ``` Let's draw the new schedule to see what we've built. ``` h_q0.draw() ``` ## Link your calibration to your circuit All that remains is to complete the registration. The circuit method `add_calibration` needs information about the gate and a reference to the schedule to complete the mapping: QuantumCircuit.add_calibration(gate, qubits, schedule, parameters) The `gate` can either be a `circuit.Gate` object or the name of the gate. Usually, you'll need a different schedule for each unique set of `qubits` and `parameters`. Since the Hadamard gate doesn't have any parameters, we don't have to supply any. ``` circ.add_calibration('h', [0], h_q0) ``` Lastly, note that the transpiler will respect your calibrations. 
Use it as you normally would (our example is too simple for the transpiler to optimize, so the output is the same). ``` from qiskit import transpile from qiskit.test.mock import FakeAlmaden backend = FakeAlmaden() circ = transpile(circ, backend) print(backend.configuration().basis_gates) circ.draw('mpl', idle_wires=False) ``` Notice that `h` is not a basis gate for the mock backend `FakeAlmaden`. Since we have added a calibration for it, the transpiler will treat our gate as a basis gate; _but only on the qubits for which it was defined_. A Hadamard applied to a different qubit would be unrolled to the basis gates. That's it! ## Custom gates We'll briefly show the same process for nonstandard, completely custom gates. This demonstration includes a gate with parameters. ``` from qiskit import QuantumCircuit from qiskit.circuit import Gate circ = QuantumCircuit(1, 1) custom_gate = Gate('my_custom_gate', 1, [3.14, 1]) # 3.14 is an arbitrary parameter for demonstration circ.append(custom_gate, [0]) circ.measure(0, 0) circ.draw('mpl') with pulse.build(name='custom') as my_schedule: pulse.play(pulse.library.Gaussian(duration=64, amp=0.2, sigma=8), pulse.DriveChannel(0)) circ.add_calibration('my_custom_gate', [0], my_schedule, [3.14, 1]) # Alternatively: circ.add_calibration(custom_gate, [0], my_schedule) ``` If we use the `Gate` instance variable `custom_gate` to add the calibration, the parameters are derived from that instance. Remember that the order of parameters is meaningful. ``` circ = transpile(circ, backend) circ.draw('mpl', idle_wires=False) ``` Normally, if we tried to transpile our `circ`, we would get an error. There was no functional definition provided for `"my_custom_gate"`, so the transpiler can't unroll it to the basis gate set of the target device. We can show this by trying to add `"my_custom_gate"` to another qubit which hasn't been calibrated. ``` circ = QuantumCircuit(2, 2) circ.append(custom_gate, [1]) from qiskit import QiskitError try: circ = transpile(circ, backend) except QiskitError as e: print(e) import qiskit.tools.jupyter %qiskit_version_table %qiskit_copyright ```
``` %matplotlib inline ``` How to compute wavenumbers in rectangular ducts ================================================= In this example we compute the wavenumbers in rectangular ducts without flow. We compare the widely used Kirchoff dissipation with the model proposed by [Stinson](https://asa.scitation.org/doi/10.1121/1.400379). The Kirchoff model was derived for circular ducts and is adapted to rectangular ducts by computing an equivalent wetted perimeter with the hydraulic radius. The Stinson model is derived for arbitrary cross sections. ![](../../image/channel.JPG) 1. Initialization ----------------- First, we import the packages needed for this example. ``` import matplotlib.pyplot as plt import numpy import acdecom ``` We create a test duct with a rectangular cross section of the dimensions *a* = 0.01 m and *b* = 0.1 m without flow. ``` section = "rectangular" a, b = 0.01, 0.1 # [m] Mach_number = 0 ``` We create two *WaveGuides* with the predefined dissipation models *stinson* and *kirchoff*. ``` stinson_duct = acdecom.WaveGuide(cross_section=section, dimensions=(a, b), M=Mach_number, damping="stinson") kirchoff_duct = acdecom.WaveGuide(cross_section=section, dimensions=(a, b), M=Mach_number, damping="kirchoff") ``` 2. Extract the Wavenumbers ----------------------- We can now loop through the frequencies of interest and compute the wavenumbers for the two WaveGuides. ``` wavenumber_stinson=[] wavenumber_kirchoff=[] frequencies = range(100,2000,50) m, n = 0, 0 for f in frequencies: wavenumber_stinson.append(stinson_duct.get_wavenumber(m, n, f)) wavenumber_kirchoff.append(kirchoff_duct.get_wavenumber(m, n, f)) ``` 3. Plot ---- We can plot the imaginary part of the wavenumber, which shows the dissipation of the sound into the surrounding fluid. ``` plt.plot(frequencies,numpy.imag(wavenumber_stinson), color="#67A3C1", linestyle="-", label="Stinson") plt.plot(frequencies,numpy.imag(wavenumber_kirchoff), color="#D38D7B", linestyle="--", label="Kirchoff") plt.legend() plt.xlabel("Frequency [Hz]") plt.ylabel("$Im(k_{00})$") plt.title("Comparing the dispersion of Stinson's and Kirchoff's Model \n for a rectangular duct without flow") plt.show() ``` Additionally, we can compute how strongly a wave propagating along a duct of length *L* is attenuated with the different dissipation models. ``` L = 10 * b plt.figure(2) plt.plot(frequencies,(1-numpy.exp(numpy.imag(wavenumber_stinson)*L))*100, color="#67A3C1", ls="-", label="Stinson") plt.plot(frequencies,(1-numpy.exp(numpy.imag(wavenumber_kirchoff)*L))*100, color="#D38D7B", ls="--", label="Kirchoff") plt.xlabel("Frequency [Hz]") plt.ylabel("Dissipation [%]") plt.title("Damping of a wave along a rectangular duct \n of length "+str(L)+" m.") plt.legend() plt.show() ```
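The quantity plotted in the last cell restates what the code computes: with the sign convention assumed here (a decaying wave has $\operatorname{Im}(k_{mn}) \le 0$), the remaining amplitude fraction after a propagation distance $L$ and the plotted dissipation are

$$ \frac{|p(L)|}{|p(0)|} = e^{\operatorname{Im}(k_{mn})\,L}, \qquad \text{Dissipation} = \left( 1 - e^{\operatorname{Im}(k_{mn})\,L} \right) \times 100\,\% $$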
# Bank ``` import sage import pickle import numpy as np import matplotlib.pyplot as plt from sklearn.metrics import log_loss from catboost import CatBoostClassifier from sklearn.model_selection import train_test_split # Load data df = sage.datasets.bank() # Feature names and categorical columns (for CatBoost model) feature_names = df.columns.tolist()[:-1] categorical_cols = ['Job', 'Marital', 'Education', 'Default', 'Housing', 'Loan', 'Contact', 'Month', 'Prev Outcome'] categorical_inds = [feature_names.index(col) for col in categorical_cols] # Split data train, test = train_test_split( df.values, test_size=int(0.1 * len(df.values)), random_state=123) train, val = train_test_split( train, test_size=int(0.1 * len(df.values)), random_state=123) Y_train = train[:, -1].copy().astype(int) Y_val = val[:, -1].copy().astype(int) Y_test = test[:, -1].copy().astype(int) train = train[:, :-1].copy() val = val[:, :-1].copy() test = test[:, :-1].copy() with open('trained_models/bank model.pkl', 'rb') as f: model = pickle.load(f) base_loss = log_loss(Y_test, model.predict_proba(test)) scores = np.zeros(len(feature_names)) for i in range(len(feature_names)): # Subsample data inds = np.ones(len(feature_names), dtype=bool) inds[i] = False train_small = train[:, inds] val_small = val[:, inds] test_small = test[:, inds] feature_names_small = np.array(feature_names)[inds] categorical_inds_small = [i for i in range(len(feature_names_small)) if feature_names_small[i] in categorical_cols] # Train model model = CatBoostClassifier(iterations=100, learning_rate=0.3, depth=10) model = model.fit(train_small, Y_train, categorical_inds_small, eval_set=(val_small, Y_val), verbose=False) # Loss loss = log_loss(Y_test, model.predict_proba(test_small)) scores[i] = loss - base_loss with open('results/bank feature_ablation.pkl', 'wb') as f: pickle.dump(scores, f) ``` # Bike ``` import sage import numpy as np import xgboost as xgb from sklearn.model_selection import train_test_split # Load data df = sage.datasets.bike() feature_names = df.columns.tolist()[:-3] # Split data, with total count serving as regression target train, test = train_test_split( df.values, test_size=int(0.1 * len(df.values)), random_state=123) train, val = train_test_split( train, test_size=int(0.1 * len(df.values)), random_state=123) Y_train = train[:, -1].copy() Y_val = val[:, -1].copy() Y_test = test[:, -1].copy() train = train[:, :-3].copy() val = val[:, :-3].copy() test = test[:, :-3].copy() with open('trained_models/bike model.pkl', 'rb') as f: model = pickle.load(f) dtest = xgb.DMatrix(test) base_loss = np.mean((model.predict(dtest) - Y_test) ** 2) scores = np.zeros(len(feature_names)) for i in range(len(feature_names)): # Subsample data inds = np.ones(len(feature_names), dtype=bool) inds[i] = False train_small = train[:, inds] val_small = val[:, inds] test_small = test[:, inds] dtrain = xgb.DMatrix(train_small, label=Y_train) dval = xgb.DMatrix(val_small, label=Y_val) dtest = xgb.DMatrix(test_small) # Train model param = { 'max_depth' : 10, 'objective': 'reg:squarederror', 'nthread': 4 } evallist = [(dtrain, 'train'), (dval, 'val')] num_round = 50 model = xgb.train(param, dtrain, num_round, evallist, verbose_eval=False) # Loss loss = np.mean((model.predict(dtest) - Y_test) ** 2) scores[i] = loss - base_loss with open('results/bike feature_ablation.pkl', 'wb') as f: pickle.dump(scores, f) ``` # Credit ``` import sage from sklearn.model_selection import train_test_split # Load data df = sage.datasets.credit() # Feature names and categorical columns 
(for CatBoost model) feature_names = df.columns.tolist()[:-1] categorical_columns = [ 'Checking Status', 'Credit History', 'Purpose', 'Credit Amount', 'Savings Account/Bonds', 'Employment Since', 'Personal Status', 'Debtors/Guarantors', 'Property Type', 'Other Installment Plans', 'Housing Ownership', 'Job', 'Telephone', 'Foreign Worker' ] categorical_inds = [feature_names.index(col) for col in categorical_columns] # Split data train, test = train_test_split( df.values, test_size=int(0.1 * len(df.values)), random_state=0) train, val = train_test_split( train, test_size=int(0.1 * len(df.values)), random_state=0) Y_train = train[:, -1].copy().astype(int) Y_val = val[:, -1].copy().astype(int) Y_test = test[:, -1].copy().astype(int) train = train[:, :-1].copy() val = val[:, :-1].copy() test = test[:, :-1].copy() import numpy as np from sklearn.metrics import log_loss from catboost import CatBoostClassifier with open('trained_models/credit model.pkl', 'rb') as f: model = pickle.load(f) base_loss = log_loss(Y_test, model.predict_proba(test)) scores = np.zeros(len(feature_names)) for i in range(len(feature_names)): # Subsample data inds = np.ones(len(feature_names), dtype=bool) inds[i] = False train_small = train[:, inds] val_small = val[:, inds] test_small = test[:, inds] feature_names_small = np.array(feature_names)[inds] categorical_inds_small = [i for i in range(len(feature_names_small)) if feature_names_small[i] in categorical_columns] # Train model model = CatBoostClassifier(iterations=50, learning_rate=0.3, depth=3) model = model.fit(train_small, Y_train, categorical_inds_small, eval_set=(val_small, Y_val), verbose=False) # Loss loss = log_loss(Y_test, model.predict_proba(test_small)) scores[i] = loss - base_loss with open('results/credit feature_ablation.pkl', 'wb') as f: pickle.dump(scores, f) ``` # BRCA ``` import pickle import numpy as np import pandas as pd from sklearn.metrics import log_loss from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split gene_names = [ 'BCL11A', 'IGF1R', 'CCND1', 'CDK6', 'BRCA1', 'BRCA2', 'EZH2', 'SFTPD', 'CDC5L', 'ADMR', 'TSPAN2', 'EIF5B', 'ADRA2C', 'MRCL3', 'CCDC69', 'ADCY4', 'TEX14', 'RRM2B', 'SLC22A5', 'HRH1', 'SLC25A1', 'CEBPE', 'IWS1', 'FLJ10213', 'PSMD10', 'MARCH6', 'PDLIM4', 'SNTB1', 'CHCHD1', 'SCMH1', 'FLJ20489', 'MDP-1', 'FLJ30092', 'YTHDC2', 'LFNG', 'HOXD10', 'RPS6KA5', 'WDR40B', 'CST9L', 'ISLR', 'TMBIM1', 'TRABD', 'ARHGAP29', 'C15orf29', 'SCAMP4', 'TTC31', 'ZNF570', 'RAB42', 'SERPINI2', 'C9orf21' ] # Load data. expression = pd.read_table('data/BRCA_TCGA_microarray.txt', sep='\t', header=0, skiprows=lambda x: x == 1, index_col=0).T expression.index = pd.Index( ['.'.join(sample.split('-')[:3]) for sample in expression.index]) # Filter for reduced gene setif reduced: expression = expression[gene_names] # Impute missing values. expression = expression.fillna(expression.mean()) # Load labels. labels = pd.read_table('data/TCGA_breast_type.tsv', sep='\t', header=None, index_col=0, names=['Sample', 'Label']) # Filter for common samples. expression_index = expression.index.values labels_index = labels.index.values intersection = np.intersect1d(expression_index, labels_index) expression = expression.iloc[[i for i in range(len(expression)) if expression_index[i] in intersection]] labels = labels.iloc[[i for i in range(len(labels)) if labels_index[i] in intersection]] # Join expression data with labels. 
label_data = labels['Label'].values label_index = list(labels.index) expression['Label'] = np.array( [label_data[label_index.index(sample)] for sample in expression.index]) expression['Label'] = pd.Categorical(expression['Label']).codes data = expression.values # Split data train, test = train_test_split( data, test_size=int(0.2 * len(data)), random_state=0) train, val = train_test_split( train, test_size=int(0.2 * len(data)), random_state=0) Y_train = train[:, -1].copy().astype(int) Y_val = val[:, -1].copy().astype(int) Y_test = test[:, -1].copy().astype(int) train = train[:, :-1].copy() val = val[:, :-1].copy() test = test[:, :-1].copy() # Preprocess mean = train.mean(axis=0) std = train.std(axis=0) train = (train - mean) / std val = (val - mean) / std test = (test - mean) / std def fit_logistic_regression(train, Y_train, val, Y_val): # Tune logistic regression model C_list = np.arange(0.1, 1.0, 0.1) best_loss = np.inf best_C = None for C in C_list: # Fit model model = LogisticRegression(C=C, penalty='l1', multi_class='multinomial', solver='saga', max_iter=10000) model.fit(train, Y_train) # Calculate loss train_loss = log_loss(Y_train, model.predict_proba(train)) val_loss = log_loss(Y_val, model.predict_proba(val)) # print('Train loss = {:.4f}, Val loss = {:.4f}'.format(train_loss, val_loss)) # See if best if val_loss < best_loss: best_loss = val_loss best_C = C # Fit model on combined data model = LogisticRegression(C=best_C, penalty='l1', multi_class='multinomial', solver='saga', max_iter=10000) model.fit(np.concatenate((train, val), axis=0), np.concatenate((Y_train, Y_val), axis=0)) return model with open('trained_models/brca model.pkl', 'rb') as f: model = pickle.load(f) base_loss = log_loss(Y_test, model.predict_proba(test)) scores = np.zeros(len(gene_names)) for i in range(len(gene_names)): # Subsample data inds = np.ones(len(gene_names), dtype=bool) inds[i] = False train_small = train[:, inds] val_small = val[:, inds] test_small = test[:, inds] # Train model model = fit_logistic_regression(train_small, Y_train, val_small, Y_val) # Loss loss = log_loss(Y_test, model.predict_proba(test_small)) scores[i] = loss - base_loss with open('results/brca feature_ablation.pkl', 'wb') as f: pickle.dump(scores, f) ``` # MNIST ``` import torch import numpy as np import torch.nn as nn import torch.optim as optim from copy import deepcopy from torch.utils.data import TensorDataset, DataLoader import torchvision.datasets as dsets # Load train set train = dsets.MNIST('../data', train=True, download=True) imgs = train.data.reshape(-1, 784) / 255.0 labels = train.targets # Shuffle and split into train and val inds = torch.randperm(len(train)) imgs = imgs[inds] labels = labels[inds] val, Y_val = imgs[:6000], labels[:6000] train, Y_train = imgs[6000:], labels[6000:] # Load test set test = dsets.MNIST('../data', train=False, download=True) test, Y_test = test.data.reshape(-1, 784) / 255.0, test.targets # Move test data to numpy test_np = test.cpu().data.numpy() Y_test_np = Y_test.cpu().data.numpy() def train_model(train, Y_train, val, Y_val): # Create model device = torch.device('cuda', 3) model = nn.Sequential( nn.Linear(train.shape[1], 256), nn.ELU(), nn.Linear(256, 256), nn.ELU(), nn.Linear(256, 10)).to(device) # Training parameters lr = 1e-3 mbsize = 64 max_nepochs = 250 loss_fn = nn.CrossEntropyLoss() lookback = 5 verbose = False # Move to GPU train = train.to(device) val = val.to(device) # test = test.to(device) Y_train = Y_train.to(device) Y_val = Y_val.to(device) # Y_test = Y_test.to(device) # 
Data loader train_set = TensorDataset(train, Y_train) train_loader = DataLoader(train_set, batch_size=mbsize, shuffle=True) # Setup optimizer = optim.Adam(model.parameters(), lr=lr) min_criterion = np.inf min_epoch = 0 # Train for epoch in range(max_nepochs): for x, y in train_loader: # Move to device. x = x.to(device=device) y = y.to(device=device) # Take gradient step. loss = loss_fn(model(x), y) loss.backward() optimizer.step() model.zero_grad() # Check progress. with torch.no_grad(): # Calculate validation loss. val_loss = loss_fn(model(val), Y_val).item() if verbose: print('{}Epoch = {}{}'.format('-' * 10, epoch + 1, '-' * 10)) print('Val loss = {:.4f}'.format(val_loss)) # Check convergence criterion. if val_loss < min_criterion: min_criterion = val_loss min_epoch = epoch best_model = deepcopy(model) elif (epoch - min_epoch) == lookback: if verbose: print('Stopping early') break # Keep best model model = best_model return model device = torch.device('cuda', 3) model = torch.load('trained_models/mnist mlp.pt').to(device) base_loss = log_loss(Y_test_np, model(test.to(device)).softmax(dim=1).cpu().data.numpy()) base_loss scores = np.zeros(train.shape[1]) for i in range(train.shape[1]): # Subsample data inds = np.ones(train.shape[1], dtype=bool) inds[i] = False train_small = train[:, inds] val_small = val[:, inds] test_small = test[:, inds] # Train model model = train_model(train_small, Y_train, val_small, Y_val) # Loss loss = log_loss( Y_test_np, model(test_small.to(device)).softmax(dim=1).cpu().data.numpy()) scores[i] = loss - base_loss print('Done with {} (score = {:.4f})'.format(i, scores[i])) with open('results/mnist feature_ablation.pkl', 'wb') as f: pickle.dump(scores, f) ```
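All of the dataset sections above follow the same leave-one-out ablation recipe: retrain the model with one feature removed and record how much the held-out loss increases relative to the full model. The sketch below is a minimal, generic version of that loop using only scikit-learn and a built-in toy dataset; the dataset, model, and hyperparameters are illustrative assumptions, not the SAGE datasets, CatBoost/XGBoost models, or pretrained checkpoints used above.

```
# Generic leave-one-out feature ablation sketch (illustrative assumptions only).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

def fit(train_X, train_y):
    # Any probabilistic classifier works here; a scaled logistic regression keeps it fast.
    return make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(train_X, train_y)

# Baseline loss with all features.
base_loss = log_loss(y_test, fit(X_train, y_train).predict_proba(X_test))

scores = np.zeros(X_train.shape[1])
for i in range(X_train.shape[1]):
    keep = np.ones(X_train.shape[1], dtype=bool)
    keep[i] = False                      # drop feature i
    model_i = fit(X_train[:, keep], y_train)
    loss_i = log_loss(y_test, model_i.predict_proba(X_test[:, keep]))
    scores[i] = loss_i - base_loss       # importance = increase in held-out loss

print(np.argsort(scores)[::-1][:5])      # indices of the five most important features
```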
<a href="https://colab.research.google.com/github/LeoVogiatzis/medical_data_analysis/blob/main/iml.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Requirements ``` %%capture !pip install orange3 !pip install eli5 !pip install shap !pip install pdpbox !pip install -U pandas-profiling ``` Import modules ``` from sklearn import datasets,model_selection import matplotlib.pyplot as plt import pandas as pd import seaborn as sns import numpy as np from sklearn.metrics import accuracy_score from sklearn.linear_model import LogisticRegression from ipywidgets import interactive from sklearn.preprocessing import MinMaxScaler from google.colab import files uploaded = files.upload() data = pd.read_csv('/content/cleaned_cardio_train.csv').drop(['Unnamed: 0'], axis = 'columns') print(data.head, data.columns) # split data into X and y x_train, x_test, y_train, y_test = model_selection.train_test_split(data.drop('cardio', axis=1), data['cardio'], test_size=0.25) scaler = MinMaxScaler() scaler.fit(x_train) x_train = scaler.transform(x_train) x_test = scaler.transform(x_test) ``` # White box models # Linear Models for explainable reasons ``` lin_model = LogisticRegression(solver="liblinear",penalty='l2',max_iter=100,C=100,random_state=0) #lin_model = LogisticRegression(solver="liblinear",penalty='l1',max_iter=1000,C=10,random_state=0) lin_model.fit(x_train, y_train) predicted_train = lin_model.predict(x_train) predicted_test = lin_model.predict(x_test) predicted_proba_test = lin_model.predict_proba(x_test) print("Logistic Regression Model Performance:") print("Accuracy in Train Set",accuracy_score(y_train, predicted_train)) print("Accuracy in Test Set",accuracy_score(y_test, predicted_test)) target_names= list(data['cardio'].unique()) print(f'target data:{target_names}') #x_test.to_numpy df = data.copy() df.drop('cardio', axis=True, inplace=True) # data.head() df.columns type(x_test) ``` Global intepretation, using weights/coeficients of a linear model ``` weights = lin_model.coef_ print(f'the coeficients are:{weights}') feature_names = list(df.columns) print(f'the type is{type(feature_names)}') model_weights = pd.DataFrame({ 'features': feature_names,'weights': list(weights[0])}) #model_weights = model_weights.sort_values(by='weights', ascending=False) #Normal sort model_weights = model_weights.reindex(model_weights['weights'].abs().sort_values(ascending=False).index) #Sort by absolute value model_weights = model_weights[(model_weights["weights"] != 0)] print("Number of features:",len(model_weights.values)) plt.figure(num=None, figsize=(8, 6), dpi=100, facecolor='w', edgecolor='k') sns.barplot(x="weights", y="features", data=model_weights) plt.title("Intercept (Bias): "+str(lin_model.intercept_[0]),loc='right') plt.xticks(rotation=90) plt.show() ``` Local interpretation ``` from IPython.display import SVG from IPython.display import display from sklearn.metrics import accuracy_score # x_test = x_test.to_numpy def plot_sensor(instance=0): random_instance = x_test[instance] # print("Original Class:",target_names[y_test[instance]]+", Predicted Class:",target_names[predicted_test[instance]],"with probability of",predicted_proba_test[instance][predicted_test[instance]]) weights = lin_model.coef_ summation = sum(weights[0]*random_instance) bias = lin_model.intercept_[0] res = "" if (summation + bias > 0): res = " > 0 -> 1" else: res = " <= 0 -> 0" print("Sum(weights*instance): "+str(summation)+" + Intercept (Bias): "+str(bias)+" = "+ 
str(summation+bias)+ res) model_weights = pd.DataFrame({ 'features': list(feature_names),'weights*values': list(weights[0]*random_instance)}) #model_weights = model_weights.sort_values(by='weights*values', ascending=False) model_weights = model_weights.reindex(model_weights['weights*values'].abs().sort_values(ascending=False).index) #Sort by absolute value model_weights = model_weights[(model_weights["weights*values"] != 0)] #print("Number of features:",len(model_weights.values)) plt.figure(num=None, figsize=(8, 6), dpi=100, facecolor='w', edgecolor='k') sns.barplot(x="weights*values", y="features", data=model_weights) plt.xticks(rotation=90) plt.show() inter=interactive(plot_sensor , instance=(0,9)) display(inter) ``` Decision tree training ``` from sklearn.tree import DecisionTreeClassifier from sklearn.metrics import classification_report model = DecisionTreeClassifier(criterion='gini',max_depth=2,random_state=0) model.fit(x_train,y_train) y_train_pred = model.predict(x_train) y_pred = model.predict(x_test) print("Decision Tree Performance:") print("Accuracy in Train Set",accuracy_score(y_train, y_train_pred)) print("Accuracy in Test Set",accuracy_score(y_test, y_pred)) model.feature_importances_ ``` Tree visualization ``` from sklearn.tree import export_graphviz from IPython.display import SVG from IPython.display import display from ipywidgets import interactive from graphviz import Source def plot_tree(depth): estimator = DecisionTreeClassifier(random_state = 0 , criterion = 'gini' , max_depth = depth) estimator.fit(x_train, y_train) graph = Source(export_graphviz(estimator , out_file=None , feature_names=feature_names , class_names=[str(i) for i in target_names] , filled = True)) print(accuracy_score(y_test, estimator.predict(x_test))) display(SVG(graph.pipe(format='svg'))) return estimator inter=interactive(plot_tree , depth=(1,5)) display(inter) ``` Feature Importances ``` %matplotlib inline import matplotlib.pyplot as plt import pandas as pd import seaborn as sns import numpy as np def plot_trees_graph(depth): estimator = DecisionTreeClassifier(random_state = 0 , criterion = 'gini' , max_depth = depth) estimator.fit(x_train, y_train) weights = estimator.feature_importances_ model_weights = pd.DataFrame({ 'features': list(feature_names),'weights': list(weights)}) model_weights = model_weights.sort_values(by='weights', ascending=False) plt.figure(num=None, figsize=(8, 6), dpi=200, facecolor='w', edgecolor='k') sns.barplot(x="weights", y="features", data=model_weights) plt.xticks(rotation=90) plt.show() return estimator inter=interactive(plot_trees_graph , depth=(1,5)) display(inter) ``` Train a random forest as black box ``` from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import classification_report classifier = RandomForestClassifier(n_estimators = 1000, criterion = 'entropy', random_state = 0) classifier.fit(x_train, y_train) y_pred = classifier.predict(x_test) print("Random Forests Performance:") print(accuracy_score(y_test,y_pred)) new_x_train = x_train new_y_train = classifier.predict(x_train) ``` Global Surrogate Model ``` from sklearn.tree import DecisionTreeClassifier, export_graphviz from IPython.display import SVG from IPython.display import display from ipywidgets import interactive from graphviz import Source from sklearn.metrics import accuracy_score print("Decision Tree Explanator") def plot_tree(depth=1): estimator = DecisionTreeClassifier(random_state = 0 , criterion = 'gini' , max_depth = depth) estimator.fit(new_x_train, new_y_train) 
graph = Source(export_graphviz(estimator , out_file=None , feature_names=feature_names , class_names=[str(i) for i in target_names] , filled = True)) print("Fidelity",accuracy_score(y_pred, estimator.predict(x_test))) print("Accuracy in new data") print(accuracy_score(y_test, estimator.predict(x_test))) #We could calculate R-square metric too! display(SVG(graph.pipe(format='svg'))) return estimator inter=interactive(plot_tree , depth=(1,5)) display(inter) ``` Linear Model Explanation ``` from sklearn.linear_model import LogisticRegression lin_model = LogisticRegression(solver="newton-cg",penalty='l2',max_iter=1000,C=100,random_state=0) #lin_model = LogisticRegression(penalty='l1',max_iter=1000,C=100,random_state=0) lin_model.fit(new_x_train, new_y_train) print("Simple Linear Model Performance:") print("Fidelity",accuracy_score(y_pred,lin_model.predict(x_test))) print("Accuracy in new data") print(accuracy_score(y_test,lin_model.predict(x_test))) weights = lin_model.coef_ model_weights = pd.DataFrame({ 'features': list(feature_names),'weights': list(weights[0])}) #model_weights = model_weights.sort_values(by='weights', ascending=False) #Normal sort model_weights = model_weights.reindex(model_weights['weights'].abs().sort_values(ascending=False).index) #Sort by absolute value model_weights = model_weights[(model_weights["weights"] != 0)] print("Number of features:",len(model_weights.values)) plt.figure(num=None, figsize=(8, 6), dpi=100, facecolor='w', edgecolor='k') sns.barplot(x="weights", y="features", data=model_weights) plt.title("Intercept (Bias): "+str(lin_model.intercept_[0]),loc='right') plt.xticks(rotation=90) plt.show() from sklearn.svm import SVC import xgboost from sklearn.metrics import classification_report import eli5 from eli5.sklearn import PermutationImportance print(target_names) print("XGBoost Performance Cardio_dataset:") model = xgboost.XGBClassifier().fit(x_train,y_train) y_preds = model.predict(x_test) print(classification_report(y_test,y_preds,data['cardio'])) perm = PermutationImportance(model).fit(x_test, y_test) eli5.show_weights(perm, feature_names = feature_names) import shap shap_values = shap.TreeExplainer(model).shap_values(x_train) shap.summary_plot(shap_values, x_train, plot_type="bar") import matplotlib.pyplot as plt f = plt.figure() print(type(shap_values), len(shap_values), len(x_test)) shap.summary_plot(shap_values, x_train) f.savefig("/summary_plot1.png", bbox_inches='tight', dpi=600) shap.dependence_plot('height', shap_values, x_train) shap.dependence_plot('weight', shap_values, x_train) data.drop('cardio', axis=1, inplace=True) data.columns #no data scaling import pandas as pd from sklearn.ensemble import RandomForestClassifier print("Random Forest Performance on Cardio Dataset:") tree_model = RandomForestClassifier(random_state=0, n_estimators=1000).fit(x_train, y_train) y_preds = tree_model.predict(x_test) print(classification_report(y_test,y_preds,target_names=[str(i) for i in target_names])) # df = pd.DataFrame(breastCancer.data, columns=breastCancer.feature_names) from pdpbox import pdp, get_dataset, info_plots # Create the data that we will plot index = ['Row'+str(i) for i in range(1, len(x_test)+1)] feature_txt = 'gender' #feature_txt = 'radius error' pdp_goals = pdp.pdp_isolate(model=tree_model, dataset=data, model_features=data.columns, feature=feature_txt ) # plot it pdp.pdp_plot(pdp_goals, feature_txt) plt.show() ``` # Local Surrogate Models # ``` from sklearn.neighbors import KNeighborsClassifier from sklearn.tree import 
DecisionTreeClassifier from sklearn.metrics import classification_report knnmodel = KNeighborsClassifier(n_neighbors=5, weights="distance", metric="minkowski", p=2) knnmodel = knnmodel.fit(x_train, y_train) print("Finding Neighbors of Instance...") test_x = [x_test[0]] ys = knnmodel.kneighbors(test_x, n_neighbors=3, return_distance=False) #Try for 100! new_x_train2 = [] new_y_train2 = [] for i in ys[0]: print(i) if i==17273 or i==45549 or i==24613: continue new_x_train2.append(x_train[i]) new_y_train2.append(y_train[i]) # new_x_train2 = np.asarray(new_x_train2) # new_y_train2 = np.asarray(new_y_train2) # new_x_train2 = new_x_train2.reshape(-1, 1) # new_y_train2 = new_y_train2.reshape(-1, 1) ``` Local Decision Tree ``` from sklearn.tree import DecisionTreeClassifier, export_graphviz from IPython.display import SVG from IPython.display import display from ipywidgets import interactive from graphviz import Source print("Decision Tree Explanator") def plot_tree(depth=1): estimator = DecisionTreeClassifier(random_state = 0 , criterion = 'gini' , max_depth = depth) print("Creating Decision Tree for the Instance:") estimator.fit(new_x_train2, new_y_train2) #print("Decision Tree Predicts and explains for Instance:" + str(estimator.predict(test_x)) + " and Random Forests predicted:" + str(classifier.predict(test_x))) fidelityPreds = estimator.predict(new_x_train2) #print("Let's see fidelity",accuracy_score(new_y_train2,fidelityPreds)) graph = Source(export_graphviz(estimator , out_file=None , feature_names=feature_names , class_names=[str(i) for i in target_names] , filled = True)) display(SVG(graph.pipe(format='svg'))) print("Lets find out the path for this specific instance!") for i in estimator.decision_path(test_x): print(i) return estimator inter=interactive(plot_tree , depth=(1,5)) display(inter) ``` Local linear model ``` from IPython.display import SVG from IPython.display import display from sklearn.metrics import accuracy_score def plot_sensor(instance=0): tar = [str(i) for i in target_names] random_instance = x_test[instance] print("Original Class:",tar[y_test[instance]]+", Predicted Class:",tar[predicted_test[instance]],"with probability of",predicted_proba_test[instance][predicted_test[instance]]) weights = lin_model.coef_ summation = sum(weights[0]*random_instance) bias = lin_model.intercept_[0] res = "" if (summation + bias > 0): res = " > 0 -> 1" else: res = " <= 0 -> 0" print("Sum(weights*instance): "+str(summation)+" + Intercept (Bias): "+str(bias)+" = "+ str(summation+bias)+ res) model_weights = pd.DataFrame({ 'features': list(feature_names),'weights*values': list(weights[0]*random_instance)}) #model_weights = model_weights.sort_values(by='weights*values', ascending=False) model_weights = model_weights.reindex(model_weights['weights*values'].abs().sort_values(ascending=False).index) #Sort by absolute value model_weights = model_weights[(model_weights["weights*values"] != 0)] print("Number of features:",len(model_weights.values)) plt.figure(num=None, figsize=(8, 6), dpi=100, facecolor='w', edgecolor='k') sns.barplot(x="weights*values", y="features", data=model_weights) plt.xticks(rotation=90) plt.show() inter=interactive(plot_sensor , instance=(0,9)) display(inter) ``` Rule Based Classifier to find rules that helps our models to predict the class variable ``` import Orange import Orange.evaluation.scoring import Orange.classification.rules import Orange.evaluation learner = Orange.classification.rules.CN2Learner() data = Orange.data.Table(x_train, y_train) print("CN2 Ordered with 
Entropy Performance:") def plot_rules(bw, mce, mrl): tar = [str(i) for i in target_names] learner.rule_finder.quality_evaluator = Orange.classification.rules.EntropyEvaluator() learner.rule_finder.search_algorithm.beam_width = bw learner.rule_finder.general_validator.min_covered_examples = mce learner.rule_finder.general_validator.max_rule_length = mrl mymodel = learner.fit_storage(data) predicted = mymodel.predict(np.asarray(x_test)) mypred = [] for iii in predicted: if (iii[0] >= iii[1]): mypred.append(0) else: mypred.append(1) print(classification_report(y_test,y_pred,target_names=tar)) model = learner(data) for rule in model.rule_list: #rule = str(rule).replace("Class=v1", "malignant").replace("Class=v2", "benign") for i in range(len(feature_names)-1,0,-1): num = "" if i<10: num = "0"+str(i) else: num = str(i) print(feature_names[i]) rule = rule.replace("Feature "+num, "("+ feature_names[i] + ")") print(rule) print() return learner inter=interactive(plot_rules ,bw = [3,5,8,10] ,mce = [7,9,11] ,mrl = [2,3,5,10]) display(inter) ```
## Raster data 101

In this lesson you will learn more about working with two types of raster data: a LiDAR derived digital elevation model (DEM), and high-resolution 4-band orthoimagery collected by the USDA National Agricultural Imagery Program (NAIP) from aircraft.

We'll be using a raster input/output (I/O) package called *rasterio*. Because it reads raster data into numpy arrays, it allows us to use numpy matrix arithmetic and algebraic functions on raster data.

If you want to read more about how lidar data are used to derive raster based surface models, you can check out this chapter on lidar remote sensing data and the various raster data products derived from lidar data: https://www.earthdatascience.org/courses/use-data-open-source-python/data-stories/what-is-lidar-data/

If you are interested in learning more about, or accessing, NAIP data, check here: https://www.fsa.usda.gov/programs-and-services/aerial-photography/imagery-programs/naip-imagery/

### Adapted from:

**Lesson 2. Open, Plot and Explore Raster Data with Python *by Leah Wasser, Chris Holdgraf, Martha Morrissey*** https://www.earthdatascience.org/courses/use-data-open-source-python/intro-raster-data-python/fundamentals-raster-data/open-lidar-raster-python/

**Working with Raster Data *by Zia U Ahmed*** https://zia207.github.io/geospatial-python.io/lesson_06_working-with-raster-data.html

```
# Import necessary packages
import os
import matplotlib.pyplot as plt
import rasterio as rio
from rasterio.plot import plotting_extent
import numpy as np
import earthpy as et
import earthpy.plot as ep

# Get data and set working directory
et.data.get_data("colorado-flood")
os.chdir(os.path.join(et.io.HOME, 'earth-analytics', 'data'))
```

Below, you define the path to a lidar derived digital elevation model (DEM) that was created using NEON (the National Ecological Observatory Network) data.

Data Tip: DEMs are also sometimes referred to as DTMs (Digital Terrain Models).

```
# Define relative path to file
dem_pre_path = os.path.join("colorado-flood", "spatial", "boulder-leehill-rd",
                            "pre-flood", "lidar", "pre_DTM.tif")

# Open the file using a context manager ("with rio.open" statement)
with rio.open(dem_pre_path) as dem_src:
    dtm_pre_arr = dem_src.read(1)
```

You may notice that the code above used to open a raster file is a bit more complex than the code that you used to open vector files (shapefiles) with geopandas or tabular data with pandas. The with rio.open() statement creates what is called a *context manager* for opening files. This allows you to create a connection to the file without modifying the file itself. You can learn more about context managers in the raster data in python chapter in the earth data science intermediate textbook.

### Explore Raster Data Values & Structure

Next, have a look at the data. Notice that the type() of the Python object returned by rasterio is a numpy array. Numpy arrays are an efficient way to store and work with raster data in python. You can learn more about working with numpy arrays in the numpy array chapter of the introduction to earth data science textbook: https://www.earthdatascience.org/courses/intro-to-earth-data-science/scientific-data-structures-python/numpy-arrays/

```
# What object type did we create with that call?
type(dtm_pre_arr)

# What are the dimensions of our array?
dtm_pre_arr.shape
```

When you open raster data using rasterio you are creating a numpy array. Numpy is an efficient way to work with and process raster format data.
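As an optional aside (a sketch, not part of the original lesson), the open dataset object also reports the band count, per-band dtype, and the file's declared no-data value, which is useful context before exploring the array values below.

```
# Optional sketch: inspect basic file metadata and the array created above.
with rio.open(dem_pre_path) as dem_src:
    print("band count:", dem_src.count)
    print("band dtype:", dem_src.dtypes[0])
    print("declared no-data value:", dem_src.nodata)

print("array dtype:", dtm_pre_arr.dtype)
print("rows, cols:", dtm_pre_arr.shape)
```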
You can plot your data using earthpy plot_bands(), which takes a numpy array as an input and generates a plot.

```
# Plot your data using earthpy
ep.plot_bands(dtm_pre_arr,
              title="Lidar Digital Elevation Model (DEM) \n Boulder Flood 2013",
              cmap="Greys")
plt.show()
```

The data above should represent terrain model data. However, the range of values is not what is expected. These data are for Boulder, Colorado where the elevation may range from 1000-3000m. There may be some outlier values in the data that may need to be addressed. Below you check out the min and max values of the data.

```
print("the minimum raster value is: ", dtm_pre_arr.min())
print("the maximum raster value is: ", dtm_pre_arr.max())

# A histogram can also be helpful to look at the range of values in your data
# What do you notice about the histogram below?
ep.hist(dtm_pre_arr, figsize=(10, 6))
plt.show()
```

Histogram for your LiDAR DTM. Notice the number of values that are below 0. This suggests that there may be no data values in the data.

### Raster Data Exploration - Min and Max Values

Looking at the minimum value of the data, there is one of two things going on that needs to be fixed:

1. There may be no data values in the data with a negative value that are skewing your plot colors
2. There also could be outlier data in your raster

You can explore the first option - that there are no data values - by reading in the data and masking no data values using rasterio. To do this, you will use the masked=True parameter for the .read() function - like this:

```
# Read in your data and mask the no data values
with rio.open(dem_pre_path) as dem_src:
    # Masked=True will mask all no data values
    dtm_pre_arr = dem_src.read(1, masked=True)
```

Notice that now the minimum value looks more like an elevation value (which should most often not be negative).

```
# A histogram can also be helpful to look at the range of values in your data
ep.hist(dtm_pre_arr, figsize=(10, 6),
        title="Histogram of the Data with No Data Values Removed")
plt.show()
```

Plot your data again to see how it looks:

```
# Plot data using earthpy
ep.plot_bands(dtm_pre_arr,
              title="Lidar Digital Elevation Model (DEM) \n Boulder Flood 2013",
              cmap="Greys")
plt.show()
```

### TASK 1: Look closely at the plot above. What do you think the colors and numbers represent in the plot? What units do the numbers represent?

Double click on the text in this box to enter your answer.

### Rasterio Reads Files into Python as Numpy Arrays

When you call src.read() above, rasterio is reading in the data as a numpy array. A numpy array is a matrix of values. Numpy arrays are an efficient structure for working with large and potentially multi-dimensional (layered) matrices.

The numpy array below is type numpy.ma.core.MaskedArray. It is a masked array because you chose to mask the no data values in your data. Masking ensures that when you plot and perform other math operations on your data, those no data values are not included in the operations.

Learn more about working with numpy arrays: https://www.earthdatascience.org/courses/intro-to-earth-data-science/scientific-data-structures-python/numpy-arrays/

```
with rio.open(dem_pre_path) as dem_src:
    lidar_dem_im = dem_src.read(1, masked=True)

print("Numpy Array Shape:", lidar_dem_im.shape)
print("Object type:", type(lidar_dem_im))
```

A numpy array does not by default store spatial information. However, your raster data is spatial - it represents a location on the earth's surface.
You can access the spatial metadata within the context manager using dem_src.profile. Notice that the .profile object contains information including the no data values for your data, the shape, the file type and even the coordinate reference system. You will learn more about raster metadata in our next class, but also see this chapter: https://www.earthdatascience.org/courses/use-data-open-source-python/intro-raster-data-python/fundamentals-raster-data/raster-metadata-in-python/

Note: numpy.nan plays a role similar to NA in R, but R does not deal with "unrealistic data" in quite the same way as Python.

```
with rio.open(dem_pre_path) as dem_src:
    lidar_dem_im = dem_src.read(1, masked=True)
    # Create an object called lidar_dem_meta that contains the spatial metadata
    lidar_dem_meta = dem_src.profile

lidar_dem_meta
```

### Context Managers to Open and Close File Connections

The steps above represent the steps you need to open and plot a raster dataset using rasterio in python. The with rio.open() statement creates what is known as a context manager. A context manager allows you to open the data and work with it. Within the context manager, Python makes a temporary connection to the file that you are trying to open.

```
with rio.open("file-path-here") as file_src:
    dtm_pre_arr = file_src.read(1, masked=True)
```

To break this code down, the context manager has a few parts. First, it has a with statement. The with statement creates a connection to the file that you want to open. The default connection type is read only. This means that you can NOT modify that file by default. Not being able to modify the original data is a good thing because it prevents you from making unintended changes to your original data.

#### Notice that the first line of the context manager is not indented. It contains two parts

1) rio.open(): This is the code that will open a connection to your .tif file using a path you provide. file_src: this is a rasterio reader object that you can use to read in the actual data. You can also use this object to access the metadata for the raster file.

2) The second line of your with statement, dtm_pre_arr = file_src.read(1, masked=True), is indented. Any code that is indented directly below the with statement will become a part of the context manager. This code has direct access to the file_src object, which, as you recall from above, is the rasterio reader object.

Opening and closing files using rasterio and context managers is efficient, as it establishes a connection to the raster file rather than directly reading it into memory. Once you are done opening and reading in the data, the context manager closes that connection to the file. This also ensures that the file won't be accidentally modified later in your code.

You can get a better understanding of how the rasterio context manager works by taking a look at what it is doing line by line. Start by looking at the dem_pre_path object. Notice that this object is a path to the file pre_DTM.tif. The context manager needs to know where the file is that you want to open with Rasterio.

```
# Look at the path to your dem_pre file
dem_pre_path
```

Now use the dem_pre_path in the context manager to open and close your connection to the file. Notice that if you print the "src" object within the context manager (notice that the print statement is indented, which is how you know that you are inside the context manager), the result is an open DatasetReader. The name of the reader is the path to your file. This means there is an open and active connection to the file.
```
# Opening the file with the dem_pre_path
# Notice here the src object is printed and returns an "open" DatasetReader object
with rio.open(dem_pre_path) as src:
    print(src)
```

If you print that same src object outside of the context manager, notice that it is now a closed DatasetReader object. It is closed because it is being called outside of the context manager. Once the connection is closed, you can no longer access the data. This is a good thing, as it protects you from inadvertently modifying the file itself!

```
# Note that the src object is now closed because it's not within the indented
# part of the context manager above
print(src)
```

Now look at what .read() does. Below you use the context manager to both open the file and read it. See that the read() method returns a numpy array that contains the raster cell values in your file.

```
# Open the file using a context manager and get the values as a numpy array with .read()
with rio.open(dem_pre_path) as dem_src:
    dtm_pre_arr = dem_src.read(1)

dtm_pre_arr
```

Because you created an object within the context manager that contains those raster values as a numpy array, you can now access the data values without needing to have an open connection to your file. This ensures once again that you are not modifying your original file and that all connections to it are closed. You are now free to play with the numpy array and process your data!

```
# View numpy array of your data
dtm_pre_arr
```

You can use the .profile attribute to create an object with metadata on your raster image. The metadata object below contains information like the coordinate reference system and size of the raster image.

```
with rio.open(dem_pre_path) as dem_src:
    # Create an object called lidar_dem_meta that contains the spatial metadata
    lidar_dem_meta = dem_src.profile

lidar_dem_meta
```

We're breaking our geotiff down into two parts: (a series of) arrays representing the data values for each pixel in our raster dataset, and the spatial metadata required to assign spatial coordinates to this data.

## Raster calculations

Often we are interested in deriving information from our data. Common examples could include scaling data (changing units, applying a log transform, subtracting the mean and dividing by the standard deviation, or setting the min and max value to 0 and 1 respectively while preserving the distribution), setting unrealistic values to NAs (for example, cloud masking), or reclassifying data. The fact that our raster data exists as a numpy array makes these types of operations simple.

```
# read in data and metadata through raster connection
# MODIFY THIS CODE FOR TASK 3:
with rio.open(dem_pre_path) as dem_src:
    dem_data = dem_src.read(1)
    dem_meta = dem_src.profile
```

### TASK 2:
dem_data above is reported in meters. Create a new raster, dem_feet, with units of feet:

```
# Task 2:
dem_feet = dem_data * 3.28084

# View values:
dem_feet.min(), dem_feet.max()
```

Here, -np.inf represents invalid data that should be assigned a no-data value. We can build a no-data mask for these values ourselves using numpy's masked arrays.

```
# Mask out negative infinity
dem_feet_ma = np.ma.masked_where(dem_feet == -np.inf, dem_feet, copy=True)
dem_feet_ma

dem_feet_ma.min(), dem_feet_ma.max()
```

### TASK 3:
How could we read in dem_data with a mask so that we don't have to remove the -np.inf values? See the code above.
```
# Task 3:
# read in raw dem data and metadata through raster connection
# (modify the read below so that the no data values are masked)
with rio.open(dem_pre_path) as dem_src:
    dem_data = dem_src.read(1)
    dem_meta = dem_src.profile

# use dem_data to create dem_feet_ma

# print out min and max values

# plot the masked data to make sure it makes sense:
fig, ax = plt.subplots(figsize=(6, 6))
ep.plot_bands(dem_feet_ma,
              cmap='Greys',
              title="DEM (feet)",
              scale=False,
              ax=ax)
ax.set_axis_off()
plt.show()
```

You can also reclassify your data and convert your continuous raster into a categorical raster. For example, say that the elevation for permanent standing water in this region was 5642 feet. How could you create a map of just water extents based on the DEM?

```
# First, define bins that you want, and then classify the data
class_bins = [dem_feet.min(), 5642, np.inf]

# np.digitize will create numeric categories based on your class bins
dem_waterline = np.digitize(dem_feet, class_bins)

# Note that you have an extra class in the data (0)
print(np.unique(dem_waterline))

# Plot newly classified and masked raster
fig, ax = plt.subplots(figsize=(6, 6))
ep.plot_bands(dem_waterline,
              ax=ax,
              scale=False)
plt.show()
```

### Here, we've ended up with three classified values.

* 0 represents what was previously no data or masked regions of the raster,
* 1 represents water, and
* 2 represents land.

Often in raster manipulation, we want to use certain criteria to mask out raster data. Examples could include multiplying a multispectral satellite image by a cloud mask, or multiplying a SAR image by a terrain shadow mask. In this case, the "masks" that we're using represent separate rasters that have been designed to tell us where we expect there to be limited information content in our raster. These "mask" layers have values of 1 for each pixel that we want to keep, and a no-data value for all other pixels. When you multiply the raster data by the mask, all the pixels that we want to keep are unchanged (value x 1 = value), and the pixels that we want to discard are converted to no-data (value x no-data = no-data).

### TASK 4:
DEMs based on LiDAR will be inaccurate under standing water. Use your dem_waterline raster to create a "water_mask" for your DEM. This will be a raster with values of "1" where there is standing water, and no value everywhere else. Plot your water_mask.

```
# Task 4: create water_mask
# water_mask = dem_feet_ma >= <some number>

# Task 4: plot water_mask
```

### TASK 5:
Use your water mask to "mask" all regions in your DEM that are underwater. Print out the minimum and maximum values and plot your masked DEM.

```
# Task 5: create water_masked_DEM

# Task 5: print min and max

# Task 5: plot water_masked_DEM
```

## Imagery - Another Type of Raster Data

Another type of raster data that you may see is imagery. If you have used Google Maps or another mapping tool that has an imagery layer, you are looking at raster data. You can open and plot imagery data using Python as well.

Below you download and open up some NAIP data that were collected before and after a fire that occurred close to Nederland, Colorado.

Data Tip: NAIP data is imagery collected by the United States Department of Agriculture every 2 years across the United States.
Learn more about NAIP data in this chapter of the earth data science intermediate textbook: https://www.earthdatascience.org/courses/use-data-open-source-python/multispectral-remote-sensing/intro-naip/

```
# Download NAIP data
et.data.get_data(url="https://ndownloader.figshare.com/files/23070791")

# Create a path for the data file - notice it is a .tif file
naip_pre_fire_path = os.path.join("earthpy-downloads",
                                  "naip-before-after",
                                  "pre-fire",
                                  "crop",
                                  "m_3910505_nw_13_1_20150919_crop.tif")

naip_pre_fire_path

# Open the data using rasterio
with rio.open(naip_pre_fire_path) as naip_prefire_src:
    naip_pre_fire = naip_prefire_src.read()

naip_pre_fire
```

### TASK 6: Read in and print the image metadata.

```
# Task 6:
with rio.open(naip_pre_fire_path) as naip_prefire_src:
    # Create an object that contains the spatial metadata
    naip_pre_fire_meta = naip_prefire_src.profile

naip_pre_fire_meta
```

Plotting imagery is a bit different because imagery is composed of multiple bands. While we won't get into the specifics of bands and images in this lesson, you can see below that an image is composed of multiple layers of information.

You can plot each band individually as you see below using plot_bands(). Or you can plot a color image, similar to the image that your camera stores when you take a picture.

```
# Plot each layer or band of the image separately
ep.plot_bands(naip_pre_fire, figsize=(10, 5))
plt.show()

# Plot of all NAIP Data Bands using earthpy plot_bands()
# Plot color image
ep.plot_rgb(naip_pre_fire,
            title="naip data pre-fire")
plt.show()
```

### TASK 6: Calculate NDVI

The normalized difference vegetation index is a metric of vegetation health calculated from the red and near-infrared bands of multispectral imagery.

```
NDVI = (NIR - Red) / (NIR + Red)
```

Create a new raster called NDVI_pre that calculates the NDVI in the pre-fire image. Make a plot of NDVI_pre.

```
red_pre_fire = naip_pre_fire[0]
NIR_pre_fire = naip_pre_fire[3]

# Task 6: create NDVI_pre
NDVI_pre = (NIR_pre_fire - red_pre_fire) / (NIR_pre_fire + red_pre_fire)

# Task 6: plot NDVI_pre
```

### TASK 7: Plot post-fire data

In the code below, you see a path to NAIP imagery of the same region in Colorado that was collected after the fire. Use that path to:

1. Open the post fire data
2. Plot a color version of data using plot_rgb()

```
# Add the code here to open the raster and read the numpy array inside it
# Create a path for the data file - notice it is a .tif file
naip_post_fire_path = os.path.join("earthpy-downloads",
                                   "naip-before-after",
                                   "post-fire",
                                   "crop",
                                   "m_3910505_nw_13_1_20170902_crop.tif")

# Task 7: open naip_post_fire

# Task 7: plot naip_post_fire
```

### TASK 8:
Fire kills vegetation. Make a map showing fire damage by comparing NDVI_pre fire with an NDVI_post fire:

```
# Task 8: create NDVI_post

# Task 8: map change in NDVI
```
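One practical caveat for the NDVI tasks above, added here as a hedged sketch rather than as part of the original exercise: NAIP bands are stored as unsigned 8-bit integers, so subtracting bands directly can wrap around instead of going negative. Casting to float first avoids that pitfall:

```
# Cast the bands to float before doing band math; uint8 subtraction wraps around.
red = naip_pre_fire[0].astype(float)
nir = naip_pre_fire[3].astype(float)

# Where nir + red == 0 this produces nan; numpy will warn but continue.
ndvi_pre_float = (nir - red) / (nir + red)
print(np.nanmin(ndvi_pre_float), np.nanmax(ndvi_pre_float))

ep.plot_bands(ndvi_pre_float, cmap="RdYlGn", vmin=-1, vmax=1,
              title="NDVI pre-fire (float computation)")
plt.show()
```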
```
import numpy as np
import toppra as ta
import toppra.algorithm as algo
from toppra import constraint
import matplotlib.pyplot as plt

# Joint waypoints for a 7-DOF arm
waypts = np.array([[-0.64142144, -1.48907971,  1.894104  , -2.2074306 ,  1.92707002,  1.50737071,  1.20313358],
                   [-0.64145738, -1.48909926,  1.89408648, -2.20745182,  1.92706716,  1.50738573,  1.20313358],
                   [-0.64145738, -1.48909926,  1.89408648, -2.20745182,  1.92706716,  1.50738573,  1.20313358],
                   [-0.64142966, -1.48908424,  1.89409995, -2.20743537,  1.92706931,  1.50737417,  1.20313358],
                   [-0.63920563, -1.48959816,  1.89454412, -2.2084074 ,  1.9284476 ,  1.51026404,  1.20470095],
                   [-0.63463145, -1.49070168,  1.89595389, -2.21121669,  1.93131077,  1.51618505,  1.2078774 ],
                   [-0.6278106 , -1.49234819,  1.89800382, -2.2152307 ,  1.93571103,  1.52499902,  1.21233833],
                   [-0.61042947, -1.49643397,  1.90333021, -2.22506738,  1.94692171,  1.54750407,  1.22376776],
                   [-0.57696438, -1.50457776,  1.91456592, -2.24278426,  1.96783555,  1.59136438,  1.24697554],
                   [-0.54576355, -1.51244676,  1.92596698, -2.25792789,  1.98712289,  1.63321686,  1.26982975],
                   [-0.52902359, -1.51683176,  1.93251836, -2.26542211,  1.99711704,  1.65592957,  1.28268933],
                   [-0.52094024, -1.51897228,  1.93576336, -2.26886582,  2.00186777,  1.66694438,  1.28901827],
                   [-0.51691443, -1.52007115,  1.9374156 , -2.27054   ,  2.0042026 ,  1.67247248,  1.29224515],
                   [-0.51555175, -1.5204109 ,  1.93802011, -2.27124834,  2.00526309,  1.67445993,  1.29325616]])

# Velocity and acceleration limits per joint
vlim = np.array([[-2.175, 2.175],
                 [-2.175, 2.175],
                 [-2.175, 2.175],
                 [-2.175, 2.175],
                 [-2.61 , 2.61 ],
                 [-2.61 , 2.61 ],
                 [-2.61 , 2.61 ]])

alim = np.array([[-15. , 15. ],
                 [ -7.5,  7.5],
                 [-10. , 10. ],
                 [-12.5, 12.5],
                 [-15. , 15. ],
                 [-20. , 20. ],
                 [-20. , 20. ]])

fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(waypts, 'o-')

# Build the geometric path and the TOPPRA instance with the two constraints
path = ta.SplineInterpolator(np.linspace(0, 0.15, waypts.shape[0]), waypts)
pc_vel = constraint.JointVelocityConstraint(vlim)
pc_acc = constraint.JointAccelerationConstraint(alim)
instance = algo.TOPPRA([pc_vel, pc_acc], path, solver_wrapper="seidel")
jnt_traj = instance.compute_trajectory(0, 0)

# Sample the retimed trajectory; the original snippet used `cs.x` (spline knots),
# which is not defined in this notebook, so we evaluate on a uniform grid instead.
ts = np.linspace(0, jnt_traj.duration, 100)
deriv = jnt_traj.evaldd(ts)

# Find samples whose acceleration falls outside the limits
i, j = np.where(~((alim[:, 0] < deriv) & (deriv < alim[:, 1])))
signed_lim = np.where(deriv > 0, alim[:, 1], alim[:, 0])
print(f'elements exceeding limit:\n{deriv[i, j]}')
print(f'limit:\n{signed_lim[i, j]}')
```
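To make the violations easier to see, the retimed trajectory can also be sampled on a uniform grid and the joint accelerations plotted against the limits. This is a sketch rather than part of the original snippet; it assumes jnt_traj exposes duration and evaldd() as in recent toppra releases:

```
# Plot joint accelerations of the retimed trajectory together with their limits
ts_plot = np.linspace(0, jnt_traj.duration, 200)
qdd = jnt_traj.evaldd(ts_plot)

fig, ax = plt.subplots(figsize=(10, 6))
for k in range(qdd.shape[1]):
    ax.plot(ts_plot, qdd[:, k], label=f'joint {k}')
    # dashed lines mark the acceleration limits for this joint
    ax.axhline(alim[k, 0], color='grey', linestyle='--', linewidth=0.5)
    ax.axhline(alim[k, 1], color='grey', linestyle='--', linewidth=0.5)
ax.set_xlabel('time [s]')
ax.set_ylabel('joint acceleration')
ax.legend(ncol=4)
ax.grid(True)
plt.show()
```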
<a href="https://colab.research.google.com/github/thingumajig/colab-experiments/blob/master/2Base_BERT_modes.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

```
# coding: utf-8
# https://github.com/google-research/bert
# https://github.com/CyberZHG/keras-bert

# folder into which the pretrained BERT network will be unpacked
folder = 'multi_cased_L-12_H-768_A-12'
download_url = 'https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip'  # model download link

print('Downloading model...')
zip_path = '{}.zip'.format(folder)
!test -d $folder || (wget $download_url && unzip $zip_path)

# download the tokenization.py file from the BERT repository
!wget https://raw.githubusercontent.com/google-research/bert/master/tokenization.py

# install Keras BERT
!pip install keras-bert

import sys
import numpy as np
from keras_bert import load_trained_model_from_checkpoint
import tokenization

config_path = folder+'/bert_config.json'
checkpoint_path = folder+'/bert_model.ckpt'
vocab_path = folder+'/vocab.txt'

# create an object that converts a whitespace-separated string into tokens
tokenizer = tokenization.FullTokenizer(vocab_file=vocab_path, do_lower_case=False)

# load the model
print('Loading model...')
model = load_trained_model_from_checkpoint(config_path, checkpoint_path, training=True)
#model.summary()  # information about the network layers - number of parameters etc.
print('OK')

# MODE 1: predict the words hidden by the MASK token in a phrase. The network input must be a phrase
# in the format: [CLS] Я пришел в [MASK] и купил [MASK]. [SEP]

# input phrase with words hidden by [MASK]
sentence = '\u042F \u043F\u0440\u0438\u0448\u0435\u043B \u0432 [MASK] \u0438 \u043A\u0443\u043F\u0438\u043B [MASK].' #@param {type:"string"}
print(sentence)

#-------------------------
# convert to tokens (tokenizer.tokenize() does not handle [CLS] and [MASK], so add them manually)
sentence = sentence.replace(' [MASK] ','[MASK]'); sentence = sentence.replace('[MASK] ','[MASK]'); sentence = sentence.replace(' [MASK]','[MASK]')  # remove extra spaces
sentence = sentence.split('[MASK]')  # split the string on the mask
tokens = ['[CLS]']                   # a phrase must always start with [CLS]
# convert the plain substrings into tokens with tokenizer.tokenize(), inserting [MASK] between them
for i in range(len(sentence)):
    if i == 0:
        tokens = tokens + tokenizer.tokenize(sentence[i])
    else:
        tokens = tokens + ['[MASK]'] + tokenizer.tokenize(sentence[i])
tokens = tokens + ['[SEP]']          # a phrase must always end with [SEP]
# tokens now contains tokens that are guaranteed to map to indices via the vocabulary
#-------------------------

#print(tokens)

# convert to an array of indices that can be fed to the network; the number 103 in it is [MASK]
token_input = tokenizer.convert_tokens_to_ids(tokens)
#print(token_input)

# pad to length 512
token_input = token_input + [0] * (512 - len(token_input))

# create the mask, replacing every 103 with 1 and everything else with 0
mask_input = [0]*512
for i in range(len(mask_input)):
    if token_input[i] == 103:
        mask_input[i] = 1
#print(mask_input)

# segment mask (the second phrase would be marked with 1, everything else with 0)
seg_input = [0]*512

# convert to numpy with shape (1,) -> (1,512)
token_input = np.asarray([token_input])
mask_input = np.asarray([mask_input])
seg_input = np.asarray([seg_input])

# run it through the network...
predicts = model.predict([token_input, seg_input, mask_input])[0]  # [0] is the full phrase with the predicted words filled in at the [MASK] positions
predicts = np.argmax(predicts, axis=-1)

# format the result as a space-separated string
predicts = predicts[0][:len(tokens)]  # only as long as the input phrase (to cut off random noise among the zero padding)
out = []
# append to out only the words at the [MASK] positions, i.e. those marked with 1 in mask_input
for i in range(len(mask_input[0])):
    if mask_input[0][i] == 1:         # [0][i], because the required shape was (1,512)
        out.append(predicts[i])

out = tokenizer.convert_ids_to_tokens(out)  # indices to tokens
out = ' '.join(out)                         # join into one space-separated string
out = tokenization.printable_text(out)      # to a printable version
out = out.replace(' ##','')                 # merge split word pieces: "при ##шел" -> "пришел"

print('Result:', out)

# MODE 2: check whether two phrases are logically coherent. The network input must be a phrase in the format:
# [CLS] Я пришел в магазин. [SEP] И купил молоко. [SEP]

sentence_1 = 'Consideration during the planning phase should include: Drainage of control cable trenches; Design of control cable trenches; Design of safety screens. Compile a checklist of items that should be addressed and considered during the planning and construction phase. Use the checklist to ensure that there is enough budget, planning, tools, resources, support for the required implementation/project.' #@param {type:"string"}
sentence_2 = 'Planning and construction phases should be completed in phases. ' #@param {type:"string"}
print(sentence_1, '->', sentence_2)

# strings to token arrays
tokens_sen_1 = tokenizer.tokenize(sentence_1)
tokens_sen_2 = tokenizer.tokenize(sentence_2)
tokens = ['[CLS]'] + tokens_sen_1 + ['[SEP]'] + tokens_sen_2 + ['[SEP]']
#print(tokens)

# convert string tokens to numeric indices:
token_input = tokenizer.convert_tokens_to_ids(tokens)
# pad to 512
token_input = token_input + [0] * (512 - len(token_input))

# in this mode the mask is all zeros
mask_input = [0] * 512

# in the segment mask, the second phrase (including the final SEP) must be marked with 1 and everything else filled with 0
seg_input = [0]*512
len_1 = len(tokens_sen_1) + 2         # length of the first phrase, +2 for the leading CLS and the separator SEP
for i in range(len(tokens_sen_2)+1):  # +1 to include the final SEP
    seg_input[len_1 + i] = 1          # mark the second phrase, including the final SEP, with ones
#print(seg_input)

# convert to numpy with shape (1,) -> (1,512)
token_input = np.asarray([token_input])
mask_input = np.asarray([mask_input])
seg_input = np.asarray([seg_input])

# run it through the network...
predicts = model.predict([token_input, seg_input, mask_input])[1]  # [1] is the answer to whether the second sentence logically follows the first

#print('Sentence is okay: ', not bool(np.argmax(predicts, axis=-1)[0]), predicts)
print('Sentence is okay:', int(round(predicts[0][0]*100)), '%')  # [[0.9657724 0.03422766]] - the left number is the probability that the second sentence fits, the right that it is random

out = int(round(predicts[0][0]*100))
```
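If you want to score other sentence pairs without repeating the preprocessing, the mode 2 steps can be wrapped in a small helper. This is an optional sketch that simply mirrors the input layout used in the cell above:

```
# Helper: probability (0-1) that `second` is a coherent continuation of `first`
def next_sentence_probability(first, second, max_len=512):
    tokens_a = tokenizer.tokenize(first)
    tokens_b = tokenizer.tokenize(second)
    tokens = ['[CLS]'] + tokens_a + ['[SEP]'] + tokens_b + ['[SEP]']

    token_input = tokenizer.convert_tokens_to_ids(tokens)
    token_input = token_input + [0] * (max_len - len(token_input))

    mask_input = [0] * max_len           # no masked-word prediction in this mode
    seg_input = [0] * max_len
    first_len = len(tokens_a) + 2        # [CLS] + first sentence + [SEP]
    for i in range(len(tokens_b) + 1):   # second sentence including the final [SEP]
        seg_input[first_len + i] = 1

    inputs = [np.asarray([token_input]),
              np.asarray([seg_input]),
              np.asarray([mask_input])]
    return float(model.predict(inputs)[1][0][0])

print(next_sentence_probability('I went to the store.', 'I bought some milk.'))
```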
#### Importing the libraries

```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```

#### Importing the dataset

```
dataset = pd.read_csv('C:\\Users\\Fabiel Fernando\\Desktop\\PROVA\\agrupamento_Q1.csv')
dataset.head()

dataset2 = pd.read_csv('C:\\Users\\Fabiel Fernando\\Desktop\\PROVA\\agrup_centroides_Q1.csv')
dataset2 = dataset2.iloc[:, 1:5]
dataset2.head()
```

#### Using the elbow plot to estimate the number of clusters

```
from sklearn.cluster import KMeans
wcss = []
for i in range(2, 11):
    kmeans = KMeans(n_clusters=i, init=np.array(dataset2.iloc[0:i, :], np.float64), max_iter=10, n_init=1, random_state=42)
    kmeans.fit(dataset)
    wcss.append(kmeans.inertia_)
plt.plot(range(2, 11), wcss)
plt.title('Método cotovelo')
plt.xlabel('Número de clusters')
plt.ylabel('WCSS')
plt.show()
```

#### Applying k-means

```
inicializar = dataset2.iloc[0:5]
inicializar.shape

inicializar

kmeans = KMeans(n_clusters=5, init=inicializar, max_iter=10, n_init=1, random_state=42)
y_kmeans = kmeans.fit_predict(dataset)

pd.DataFrame(kmeans.cluster_centers_)
```

#### Visualizing the clusters

```
X = np.array(dataset.iloc[:, :])

plt.scatter(X[y_kmeans==0, 0], X[y_kmeans==0, 1], s=100, c='red', label='Cluster 1')
plt.scatter(X[y_kmeans==1, 0], X[y_kmeans==1, 1], s=100, c='blue', label='Cluster 2')
plt.scatter(X[y_kmeans==2, 0], X[y_kmeans==2, 1], s=100, c='green', label='Cluster 3')
plt.scatter(X[y_kmeans==3, 0], X[y_kmeans==3, 1], s=100, c='cyan', label='Cluster 4')
plt.scatter(X[y_kmeans==4, 0], X[y_kmeans==4, 1], s=100, c='magenta', label='Cluster 5')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s=300, c='yellow', label='Centroids')
plt.title('Clusters of clients')
plt.xlabel('Annual Income (k$)')
plt.ylabel('Spending Score (1-100)')
plt.legend()
plt.show()
```

#### Selecting the number of clusters with silhouette analysis

```
from sklearn.metrics import silhouette_samples, silhouette_score

for i in range(2, 11):
    kmeans2 = KMeans(n_clusters=i, init=np.array(dataset2.iloc[0:i, :], np.float64), max_iter=10, n_init=1, random_state=42)
    kmeans2.fit(dataset)
    preds = kmeans2.predict(dataset)
    centers = kmeans2.cluster_centers_
    score = silhouette_score(dataset, preds, metric='euclidean')
    print("Para n clusters = {}, silhouette score is : {}".format(i, score))

inicializar = dataset2.iloc[0:5]
kmeans3 = KMeans(n_clusters=5, init=inicializar, max_iter=10, n_init=1, random_state=42)
y_kmeans = kmeans3.fit_predict(dataset)

pd.DataFrame(kmeans3.cluster_centers_)
```
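A small complement to the loop above (not part of the original notebook): plotting the silhouette score as a function of the number of clusters makes the best k easier to spot visually. It reuses the same initialization scheme:

```
# Plot silhouette score vs. number of clusters
ks = range(2, 11)
sil_scores = []
for k in ks:
    km = KMeans(n_clusters=k, init=np.array(dataset2.iloc[0:k, :], np.float64),
                max_iter=10, n_init=1, random_state=42)
    labels = km.fit_predict(dataset)
    sil_scores.append(silhouette_score(dataset, labels, metric='euclidean'))

plt.plot(list(ks), sil_scores, 'o-')
plt.title('Silhouette score by number of clusters')
plt.xlabel('Number of clusters')
plt.ylabel('Silhouette score')
plt.show()
```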
# REINFORCE --- In this notebook, we will train REINFORCE with OpenAI Gym's Cartpole environment. ### 1. Import the Necessary Packages ``` import gym gym.logger.set_level(40) # suppress warnings (please remove if gives error) import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline import torch torch.manual_seed(0) # set random seed import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torch.distributions import Categorical ``` ### 2. Define the Architecture of the Policy ``` env = gym.make('CartPole-v0') env.seed(0) print('observation space:', env.observation_space) print('action space:', env.action_space) device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") class Policy(nn.Module): def __init__(self, s_size=4, h_size=16, a_size=2): super(Policy, self).__init__() self.fc1 = nn.Linear(s_size, h_size) self.fc2 = nn.Linear(h_size, a_size) def forward(self, x): x = F.relu(self.fc1(x)) x = self.fc2(x) return F.softmax(x, dim=1) def act(self, state): state = torch.from_numpy(state).float().unsqueeze(0).to(device) probs = self.forward(state).cpu() m = Categorical(probs) action = m.sample() return action.item(), m.log_prob(action) ``` ### 3. Train the Agent with REINFORCE ``` policy = Policy().to(device) optimizer = optim.Adam(policy.parameters(), lr=1e-2) def reinforce(n_episodes=1000, max_t=1000, gamma=1.0, print_every=100): scores_deque = deque(maxlen=100) scores = [] for i_episode in range(1, n_episodes+1): saved_log_probs = [] rewards = [] state = env.reset() for t in range(max_t): action, log_prob = policy.act(state) saved_log_probs.append(log_prob) state, reward, done, _ = env.step(action) rewards.append(reward) if done: break scores_deque.append(sum(rewards)) scores.append(sum(rewards)) discounts = [gamma**i for i in range(len(rewards)+1)] R = sum([a*b for a,b in zip(discounts, rewards)]) policy_loss = [] for log_prob in saved_log_probs: policy_loss.append(-log_prob * R) policy_loss = torch.cat(policy_loss).sum() optimizer.zero_grad() policy_loss.backward() optimizer.step() if i_episode % print_every == 0: print('Episode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque))) if np.mean(scores_deque)>=195.0: print('Environment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_deque))) break return scores scores = reinforce() ``` ### 4. Plot the Scores ``` fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(1, len(scores)+1), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() ``` ### 5. Watch a Smart Agent! ``` env = gym.make('CartPole-v0') state = env.reset() for t in range(1000): action, _ = policy.act(state) env.render() state, reward, done, _ = env.step(action) if done: break #env.close() env.close() ```
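A common refinement of the implementation above, added here as a hedged endnote rather than as part of the original notebook: instead of weighting every log-probability by the full-episode return R, each step can be weighted by the discounted return from that step onward ("rewards-to-go"), which reduces the variance of the gradient estimate. A minimal sketch, using the same variable names as the training loop:

```
import numpy as np

# Discounted return from each time step to the end of the episode
def rewards_to_go(rewards, gamma=1.0):
    returns = np.zeros(len(rewards), dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Example: inside the training loop you could replace the single R with
#   Gs = rewards_to_go(rewards, gamma)
#   policy_loss = torch.cat([-lp * g for lp, g in zip(saved_log_probs, Gs)]).sum()
print(rewards_to_go([1, 1, 1], gamma=0.9))
```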
## Caroline's raw material planning

<img align='right' src='https://drive.google.com/uc?export=view&id=1FYTs46ptGHrOaUMEi5BzePH9Gl3YM_2C' width=200>

As we know, BIM produces logic and memory chips using copper, silicon, germanium and plastic.

Each chip has the following consumption of materials:

| chip   | copper | silicon | germanium | plastic |
|:-------|-------:|--------:|----------:|--------:|
|Logic   |    0.4 |       1 |           |       1 |
|Memory  |    0.2 |         |         1 |       1 |

BIM hired Caroline to manage the acquisition and the inventory of these raw materials.

Caroline conducted a data analysis which led to the following prediction of monthly demands for the chips:

| chip   | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|:-------|----:|----:|----:|----:|----:|----:|----:|----:|----:|----:|----:|----:|
|Logic   |  88 | 125 | 260 | 217 | 238 | 286 | 248 | 238 | 265 | 293 | 259 | 244 |
|Memory  |  47 |  62 |  81 |  65 |  95 | 118 |  86 |  89 |  82 |  82 |  84 |  66 |

As you recall, BIM has the following stock at the moment:

|copper|silicon|germanium|plastic|
|-----:|------:|--------:|------:|
|   480|   1000|     1500|   1750|

BIM would like to have at least the following stock at the end of the year:

|copper|silicon|germanium|plastic|
|-----:|------:|--------:|------:|
|   200|    500|      500|   1000|

Each product can be acquired at each month, but the unit prices vary as follows:

| product  | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|:---------|----:|----:|----:|----:|----:|----:|----:|----:|----:|----:|----:|----:|
|copper    |   1 |   1 |   1 |   2 |   2 |   3 |   3 |   2 |   2 |   1 |   1 |   2 |
|silicon   |   4 |   3 |   3 |   3 |   5 |   5 |   6 |   5 |   4 |   3 |   3 |   5 |
|germanium |   5 |   5 |   5 |   3 |   3 |   3 |   3 |   2 |   3 |   4 |   5 |   6 |
|plastic   | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |

The inventory is limited by a capacity of a total of 9000 units per month, regardless of the composition of products in stock. The holding costs of the inventory are 0.05 per unit per month, regardless of the product.

Caroline cannot spend more than 5000 per month on acquisition.

Note that Caroline aims at minimizing the acquisition and holding costs of the materials while meeting the required quantities for production. The production is made to order, meaning that no inventory of chips is kept.

Please help Caroline to model the material planning and solve it with the data above.

```
import sys
if 'google.colab' in sys.modules:
    import shutil
    if not shutil.which('pyomo'):
        !pip install -q pyomo
        assert(shutil.which('pyomo'))

    # cbc
    !apt-get install -y -qq coinor-cbc
```

To keep the notebook self-contained, the data are embedded as strings below; an alternative would be to upload and read a file.
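For completeness, a hedged sketch of that alternative; the file names below are hypothetical placeholders, not part of the original notebook:

```
import os
import pandas as pd

# Alternative to the embedded strings: read the same tables from uploaded CSV files
if os.path.exists('chip_demand.csv') and os.path.exists('material_prices.csv'):
    demand_chips = pd.read_csv('chip_demand.csv', index_col='chip')
    price = pd.read_csv('material_prices.csv', index_col='product')
```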
```
demand_data = '''chip,Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec
Logic,88,125,260,217,238,286,248,238,265,293,259,244
Memory,47,62,81,65,95,118,86,89,82,82,84,66'''

from io import StringIO
import pandas as pd
demand_chips = pd.read_csv( StringIO(demand_data), index_col='chip' )
demand_chips

price_data = '''product,Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec
copper,1,1,1,2,2,3,3,2,2,1,1,2
silicon,4,3,3,3,5,5,6,5,4,3,3,5
germanium,5,5,5,3,3,3,3,2,3,4,5,6
plastic,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1'''

price = pd.read_csv( StringIO(price_data), index_col='product' )
price
```

# A possible resolution

## A simple dataframe with the consumptions

```
use = dict()
use['Logic'] = { 'silicon' : 1, 'plastic' : 1, 'copper' : 4 }
use['Memory'] = { 'germanium' : 1, 'plastic' : 1, 'copper' : 2 }
use = pd.DataFrame.from_dict( use ).fillna(0).astype( int )
use
```

## A simple matrix multiplication

```
demand = use.dot( demand_chips )
demand

import pyomo.environ as pyo

m = pyo.ConcreteModel()
```

# Add the relevant data to the model

```
m.Time = demand.columns
m.Product = demand.index
m.Demand = demand
m.UnitPrice = price
m.HoldingCost = .05
m.StockLimit = 9000
m.Budget = 2000

m.existing = {'silicon' : 1000, 'germanium': 1500, 'plastic': 1750, 'copper' : 4800 }
m.desired  = {'silicon' :  500, 'germanium':  500, 'plastic': 1000, 'copper' : 2000 }
```

# Some care to deal with the `time` index

```
m.first = m.Time[0]
m.last = m.Time[-1]
m.prev = { j : i for i,j in zip(m.Time,m.Time[1:]) }
```

# Variables for the decision (buy) and consequence (stock)

```
m.buy = pyo.Var( m.Product, m.Time, within=pyo.NonNegativeReals )
m.stock = pyo.Var( m.Product, m.Time, within=pyo.NonNegativeReals )
```

# The constraints that balance acquisition with inventory and demand

```
def BalanceRule( m, p, t ):
    if t == m.first:
        return m.existing[p] + m.buy[p,t] == m.Demand.loc[p,t] + m.stock[p,t]
    else:
        return m.buy[p,t] + m.stock[p,m.prev[t]] == m.Demand.loc[p,t] + m.stock[p,t]

m.balance = pyo.Constraint( m.Product, m.Time, rule = BalanceRule )
```

# The remaining constraints

Note that these rules are so simple, one-liners, that it is better to just define them 'on the spot' as anonymous (or `lambda`) functions.

## Ensure the desired inventory at the end of the horizon

```
m.finish = pyo.Constraint( m.Product, rule = lambda m, p : m.stock[p,m.last] >= m.desired[p] )
```

## Ensure that the inventory fits the capacity

```
m.inventory = pyo.Constraint( m.Time, rule = lambda m, t : sum( m.stock[p,t] for p in m.Product ) <= m.StockLimit )
```

## Ensure that the acquisition fits the budget

```
m.budget = pyo.Constraint( m.Time, rule = lambda m, t : sum( m.UnitPrice.loc[p,t]*m.buy[p,t] for p in m.Product ) <= m.Budget )

m.obj = pyo.Objective( expr = sum( m.UnitPrice.loc[p,t]*m.buy[p,t] for p in m.Product for t in m.Time )
                            + sum( m.HoldingCost*m.stock[p,t] for p in m.Product for t in m.Time )
                     , sense = pyo.minimize )

pyo.SolverFactory( 'gurobi_direct' ).solve(m)

def ShowDouble( X, I, J ):
    return pd.DataFrame.from_records( [ [ X[i,j].value for j in J ] for i in I ], index=I, columns=J )

ShowDouble( m.buy, m.Product, m.Time )

ShowDouble( m.stock, m.Product, m.Time )

ShowDouble( m.stock, m.Product, m.Time ).T.plot(drawstyle='steps-mid',grid=True, figsize=(20,4))
```

# Notes

* The budget constraint is not binding.
* With the given budget the solution remains integer.
* Lowering the budget to 2000 forces acquiring fractional quantities.
* Lower values of the budget end up making the problem infeasible.
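As a quick sanity check (not part of the original notebook), you can pull the optimal cost out of the objective and compare the monthly acquisition spend against the budget. pyo.value() evaluates Pyomo expressions:

```
# Total optimal cost (acquisition + holding)
print('total cost:', pyo.value(m.obj))

# Monthly acquisition spend, to compare against m.Budget
monthly_spend = pd.Series(
    { t : sum( m.UnitPrice.loc[p,t] * m.buy[p,t].value for p in m.Product ) for t in m.Time } )
print(monthly_spend)
```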
# About: Hadoop Prerequisites - Ready! on CentOS6

---

To deploy Hadoop on the machines, apply the Prerequisite Playbook. Note that the Playbook is managed in a private GitLab maintained by the NII cloud team.

## *Operation Note*

*This is a cell for your own recording. Describe the context and history here.*

# Binding the Notebook to the Environment

Specify the bind target using a group name from the inventory.

```
target_group = 'test-hadoop-vm'
```

Check connectivity to the bind target.

```
!ansible -m ping {target_group}
```

Since it is needed to generate /etc/hosts, also load the host table file.

```
# path to the host table file
hosts_csv = 'hosts.csv'
# target cluster name
target_cluster = 'TestCluster'

%run scripts/loader.py
header, machines = read_machines(hosts_csv)
machines = filter(lambda m: m['Cluster'] == target_cluster, machines)
pd.DataFrame(map(lambda m: get_row(header, m), machines), columns=header)
```

# Applying the Prerequisite Playbook

Download and apply the Prerequisite Playbook. **The Prerequisite Playbook differs depending on the base environment. This information is specific to NII's cloud environment and is therefore not public.**

To run HDP, the following operations are required:

- Disable IPv6
- Change the NTP settings to appropriate ones
- Update packages
- Disable SELinux

```
import tempfile
work_dir = tempfile.mkdtemp()
work_dir
```

Clone the Playbook. **(This uses a private repository in the NII cloud; not public.)**

```
!git clone ssh://xxx.nii.ac.jp/xxx/aic-dataanalysis-prerequisite.git {work_dir}/playbook
```

Check the structure of the cloned files, just in case.

```
!tree {work_dir}/playbook
```

Since this time we will run Hadoop on VMs, apply the following roles:

- common ... disable IPv6
- ntp ... change the NTP settings to the internal configuration
- packages ... update packages
- selinux ... disable SELinux

```
with open('{work_dir}/playbook/site.yml'.format(work_dir=work_dir), 'w') as f:
    f.write('''- hosts: {target_group}
  become: yes
  roles:
    - common
    - ntp
    - packages
    - selinux'''.format(target_group=target_group))

!cat {work_dir}/playbook/site.yml
```

Apply the Ansible Playbook...

```
!ansible-playbook -CDv {work_dir}/playbook/site.yml

!ansible-playbook {work_dir}/playbook/site.yml
```

## Checking time synchronization

Check the NTP synchronization status. It is OK if `ntpXX.sinet.ad.jp` is marked with `*` or `+`.

```
!ansible -b -a 'ntpq -p' {target_group}
```

## Installing other required tools

wget is needed when installing the JDK.

```
!ansible -b -m yum -a 'name=wget' {target_group}
```

## Setting the hostname

Set the hostname on each machine.

```
for m in machines:
    !ansible -b -m hostname -a "name={m['Name']}" {m['Service IP']}
```

## Modifying /etc/hosts

Modify /etc/hosts based on hosts.csv.

```
with open('{work_dir}/hosts'.format(work_dir=work_dir), 'w') as f:
    f.write('''127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
{all_hosts}
'''.format(all_hosts='\n'.join(map(lambda m: '{} {}'.format(m['Service IP'], m['Name']), machines))))

!cat {work_dir}/hosts
```

To check the changes, first do a dry run.

```
!ansible -CDv -b -m copy -a 'src={work_dir}/hosts dest=/etc/hosts' {target_group}
```

After reviewing the diff and confirming there are no unintended changes, apply the change.

```
!ansible -b -m copy -a 'src={work_dir}/hosts dest=/etc/hosts' {target_group}
```

Confirm that name resolution between hosts works as intended. Since testing all combinations is tedious, check connectivity from the Master nodes to the Slave nodes.

```
for src in filter(lambda m: m['NameNode'], machines):
    for dest in filter(lambda m: m['DataNode'], machines):
        print('{} -> {}'.format(src['Name'], dest['Name']))
        !ansible -a "ping -c 4 {dest['Name']}" {src['Service IP']}
```

# Cleanup

Delete the temporary directory.

```
!rm -fr {work_dir}
```
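As an optional extra check (not in the original procedure), you can verify the SELinux and IPv6 state on the hosts with ad-hoc Ansible commands. Depending on how the private playbook disables IPv6, the sysctl key below may or may not be the one it sets, and a changed SELinux mode only takes full effect after a reboot, so `getenforce` can still report the old mode:

```
# Check the current SELinux mode on every host
!ansible -b -a 'getenforce' {target_group}

# One common way to confirm IPv6 has been turned off
!ansible -b -a 'sysctl net.ipv6.conf.all.disable_ipv6' {target_group}
```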
``` import re import requests from bs4 import BeautifulSoup import pandas as pd from konlpy.tag import Okt okt = Okt() import tensorflow as tf import numpy as np from collections import Counter from wordcloud import WordCloud import matplotlib.pyplot as plt import urllib.request from tqdm import tqdm from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.preprocessing.sequence import pad_sequences from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt %matplotlib inline from string import punctuation import warnings warnings.filterwarnings('ignore') emotion = pd.read_csv('emotion_dataset.csv') emotion.head(1) emotion.info() ``` ## 전처리 ``` emotion emotion['Sentence'] = emotion['Sentence'].str.replace("[^ㄱ-ㅎㅏ-ㅣ가-힣 ]","") # 한글과 공백을 제외하고 모두 제거 emotion[:5] emotion['Sentence'] = emotion['Sentence'].str.replace('^ +', "") # white space 데이터를 empty value로 변경 emotion['Sentence'].replace('', np.nan, inplace=True) print(emotion.isnull().sum()) emotion[emotion['Sentence'].isnull()]b emotion = emotion.dropna(how = 'any') print('전처리 후 데이터의 개수 :',len(emotion)) ``` ## 데이터 라벨링 ``` emotion.loc[(emotion['Emotion'] == "공포"), 'Emotion'] = 0 #공포 => 0 emotion.loc[(emotion['Emotion'] == "놀람"), 'Emotion'] = 1 #놀람 => 1 emotion.loc[(emotion['Emotion'] == "분노"), 'Emotion'] = 2 #분노 => 2 emotion.loc[(emotion['Emotion'] == "슬픔"), 'Emotion'] = 3 #슬픔 => 3 emotion.loc[(emotion['Emotion'] == "중립"), 'Emotion'] = 4 #중립 => 4 emotion.loc[(emotion['Emotion'] == "행복"), 'Emotion'] = 5 #행복 => 5 emotion.loc[(emotion['Emotion'] == "혐오"), 'Emotion'] = 6 #혐오 => 6 ## 시도 1 astype()으로 변경 emotion.Emotion = emotion.Emotion.astype(int) emotion.info() emotion.Emotion.unique() ``` --- ``` ## 시도 2 하나씩 분리해서 다시 append fear = emotion[emotion['Emotion'] == 0] fear['Emotion'] = 0 sur = emotion[emotion['Emotion'] == 1] sur['Emotion'] = 1 ang = emotion[emotion['Emotion'] == 2] ang['Emotion'] = 2 sad = emotion[emotion['Emotion'] == 3] sad['Emotion'] = 3 neu = emotion[emotion['Emotion'] == 4] neu['Emotion'] = 4 joy = emotion[emotion['Emotion'] == 5] joy['Emotion'] = 5 hat = emotion[emotion['Emotion'] == 6] hat['Emotion'] = 6 emotion = fear.append(sur) emotion = emotion.append(ang) emotion = emotion.append(sad) emotion = emotion.append(neu) emotion = emotion.append(joy) emotion = emotion.append(hat) ``` --- ``` emotion.shape # print(data_list[0]) # print(data_list[6000]) # print(data_list[12000]) # print(data_list[18000]) # print(data_list[24000]) # print(data_list[30000]) # print(data_list[-1]) emotion.reset_index(drop=True,inplace=True) type(emotion['Emotion'].iloc[0]) ``` ## 데이터 분리 ``` train_data, test_data = train_test_split( emotion, test_size = 0.25, random_state = 5 ) train_data.head(3) # 불용어 (가사 빈도수 높은 + 감정분류와 무관한 단어 추가 중) stop_w = ['all','이렇게','네가','있는','니가','없는','너의','너무','그런', 'oh','whoo','tuesday','내가','너를','나를','we','this','the','그렇게', 'so','am','baby','and','can','you','much','me','for','go','in', '은', '는', '이', '가', '하','부터','처럼','까지', 'know','no','of','let','my','수','너','내','나','그','난','봐', '돼','건','모든','에서','에게','싶어','잖아', '날','널','수','것','못','말','넌','젠','하나','정말','알','여기', '우리','다시','하게','니까', '때','아','더','게','또','채','일','걸','누구','나는','너는','라면', '같아','있어', '의','가','보','들','좀','잘','걍','과','도','를','으로','우린','하지', '해도','하고','없어','않아', '자','에','와','한','하다','네','있다','나의','해','다','내게','왜', '거야','이제','그냥','했던','하는'] # 학습 데이터 X_train = [] for sentence in tqdm(train_data['Sentence']): tokenized_sentence = okt.morphs(sentence, stem=True) # 토큰화 stopwords_removed_sentence = [word for word in 
tokenized_sentence if not word in stop_w] # 불용어 제거 X_train.append(stopwords_removed_sentence) # 테스트 데이터 X_test = [] for sentence in tqdm(test_data['Sentence']): tokenized_sentence = okt.morphs(sentence, stem=True) # 토큰화 stopwords_removed_sentence = [word for word in tokenized_sentence if not word in stop_w] # 불용어 제거 X_test.append(stopwords_removed_sentence) X_train[:1] ``` ## 정답 데이터 저장 ``` y_train = np.array(train_data['Emotion']) y_test = np.array(test_data['Emotion']) drop_train = [index for index, sentence in enumerate(X_train) if len(sentence) < 1] drop_test = [index for index, sentence in enumerate(X_test) if len(sentence) < 1] print(drop_train) X_train = np.delete(X_train, drop_train, axis=0) y_train = np.delete(y_train, drop_train, axis=0) print(len(X_train)) print(len(y_train)) print(len(X_test)) print(len(y_test)) X_test = np.delete(X_test, drop_test, axis=0) y_test = np.delete(y_test, drop_test, axis=0) print(len(X_test)) print(len(y_test)) ``` ## 정수 인코딩 ``` tokenizer = Tokenizer() tokenizer.fit_on_texts(X_train) # print(tokenizer.word_index) # print(tokenizer.word_counts.items()) print(X_train[:1]) ``` ## 빈도수 확인 ``` threshold = 3 total_cnt = len(tokenizer.word_index) # 단어의 수 rare_cnt = 0 # 등장 빈도수가 threshold보다 작은 단어의 개수를 카운트 total_freq = 0 # 훈련 데이터의 전체 단어 빈도수 총 합 rare_freq = 0 # 등장 빈도수가 threshold보다 작은 단어의 등장 빈도수의 총 합 # 단어와 빈도수의 쌍(pair)을 key와 value로 받는다. for key, value in tokenizer.word_counts.items(): total_freq = total_freq + value # 단어의 등장 빈도수가 threshold보다 작으면 if(value < threshold): rare_cnt = rare_cnt + 1 rare_freq = rare_freq + value print('단어 집합(vocabulary)의 크기 :',total_cnt) print('등장 빈도가 %s번 이하인 희귀 단어의 수: %s'%(threshold - 1, rare_cnt)) print("단어 집합에서 희귀 단어의 비율:", (rare_cnt / total_cnt)*100) print("전체 등장 빈도에서 희귀 단어 등장 빈도 비율:", (rare_freq / total_freq)*100) # 전체 단어 개수 중 빈도수 2이하인 단어는 제거. 
# 0번 패딩 토큰을 고려하여 + 1 vocab_size = total_cnt - rare_cnt + 1 print('단어 집합의 크기 :',vocab_size) tokenizer = Tokenizer(vocab_size) tokenizer.fit_on_texts(X_train) X_train = tokenizer.texts_to_sequences(X_train) X_test = tokenizer.texts_to_sequences(X_test) ``` ### 정수 인코딩 확인 ``` print(X_train[:1]) ``` ## 패딩 ``` print('문장의 최대 길이 :',max(len(l) for l in X_train)) print('문장의 평균 길이 :',sum(map(len, X_train))/len(X_train)) plt.hist([len(s) for s in X_train], bins=50) plt.xlabel('length of samples') plt.ylabel('number of samples') plt.show() def below_threshold_len(max_len, nested_list): cnt = 0 for s in nested_list: if(len(s) <= max_len): cnt = cnt + 1 print('전체 샘플 중 길이가 %s 이하인 샘플의 비율: %s'%(max_len, (cnt / len(nested_list))*100)) max_len = 20 below_threshold_len(max_len, X_train) ``` ## 모든 샘플의 길이를 max_len로 조정 ``` X_train = pad_sequences(X_train, maxlen = max_len) X_test = pad_sequences(X_test, maxlen = max_len) X_test[0] ``` ## 모델 적용 ``` import numpy as np import matplotlib.pyplot as plt import pandas as pd from tensorflow.keras.layers import Dense, Flatten, Dropout, Conv2D, MaxPooling2D, LSTM, Embedding, Bidirectional,TimeDistributed from tensorflow.keras import Model from tensorflow.keras.models import Sequential from tensorflow.keras.models import load_model import tensorflow as tf from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint # 결과값 seq to seq : many to many # model = Sequential() # model.add(Embedding(vocab_size, 300, mask_zero=True)) # model.add(Bidirectional(LSTM(128))) # model.add(Dense(64,activation="relu")) # model.add(Dense(32,activation="relu")) # model.add(Dense(16,activation="relu")) # model.add(Dense(7,activation='sigmoid')) # , activation='sigmoid' ``` ## 적용 다중 분류 모델이므로 * `softmax` * 7가지 감정 * loss는 `binary_crossentropy`가 아닌 `sparse_categorical_crossentropy`로 compile ### softmax , 7 , sparse_categorical_crossentropy ``` model = Sequential() model.add(Embedding(vocab_size, 300, mask_zero=True)) model.add(Bidirectional(LSTM(128))) model.add(Dense(64,activation="relu")) model.add(Dense(32,activation="relu")) model.add(Dense(16,activation="relu")) model.add(Dense(7,activation='softmax')) # , activation='sigmoid' es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=4) mc = ModelCheckpoint('best_model.h5', monitor='val_acc', mode='max', verbose=1, save_best_only=True) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc']) history = model.fit(X_train, y_train, epochs=15, callbacks=[es, mc], batch_size=64, validation_split=0.2) model.save('./models/multi_emotion.h5') loaded_model = keras.models.load_model('./models/multi_emotion.h5') print("\n 테스트 정확도: %.4f" % (loaded_model.evaluate(X_test, y_test)[1])) ``` --- ## 다중 분류 예측 함수 과정 ``` new_sentence = re.sub(r'[^ㄱ-ㅎㅏ-ㅣ가-힣 ]','', '진짜 많은 사람들앞에서 실제로 손님이 있다고치고 연습하나요') new_sentence = okt.morphs(new_sentence, stem=True) # 토큰화 new_sentence = [word for word in new_sentence if not word in stop_w] # 불용어 제거 encoded = tokenizer.texts_to_sequences([new_sentence]) # 정수 인코딩 pad_new = pad_sequences(encoded, maxlen = max_len) # 패딩 score = model.predict(pad_new) # 예측 print(score) def multi_sentiment_predict(new_sentence): new_sentence = re.sub(r'[^ㄱ-ㅎㅏ-ㅣ가-힣 ]','', new_sentence) new_sentence = okt.morphs(new_sentence, stem=True) # 토큰화 new_sentence = [word for word in new_sentence if not word in stop_w] # 불용어 제거 encoded = tokenizer.texts_to_sequences([new_sentence]) # 정수 인코딩 pad_new = pad_sequences(encoded, maxlen = max_len) # 패딩 score = model.predict(pad_new) # 예측 if score[0][0] == 
score[0].max(): print(f"{round(score[0][0] * 100,2)} 확률로 공포 문장입니다.\n") elif score[0][1] == score[0].max(): print(f"{round(score[0][1] * 100,2)} 확률로 놀람 문장입니다.\n") elif score[0][2] == score[0].max(): print(f"{round(score[0][2] * 100,2)} 확률로 분노 문장입니다.\n") elif score[0][3] == score[0].max(): print(f"{round(score[0][3] * 100,2)} 확률로 슬픔 문장입니다.\n") elif score[0][4] == score[0].max(): print(f"{round(score[0][4] * 100,2)} 확률로 중립 문장입니다.\n") elif score[0][5] == score[0].max(): print(f"{round(score[0][5] * 100,2)} 확률로 행복 문장입니다.\n") elif score[0][6] == score[0].max(): print(f"{round(score[0][6] * 100,2)} 확률로 혐오 문장입니다.\n") multi_sentiment_predict('유치원버스 사고 낫다던데') multi_sentiment_predict('유투브 땅굴 발견 전쟁임박') multi_sentiment_predict('근데 원래이런거맞나요') multi_sentiment_predict('적막한 밤하늘 내 맘에도 드리우면 난 늘 그대가 보고 싶곤 해') multi_sentiment_predict('너의 웃음소리 참 듣기가 좋아') multi_sentiment_predict('그대와 나 이별하던 그날 그 아침 나는 울지 않았소') multi_sentiment_predict('나 그댈 위해 시 한 편을 쓰겠어') multi_sentiment_predict('두 눈에 비친 너의 미소 지친 날 감싸듯') multi_sentiment_predict('창가에 요란히 내리는 빗물 소리만큼') multi_sentiment_predict('창가에 요란히 내리는 빗물 소리만큼 시린 기억들') multi_sentiment_predict(playlist.Lyric[0]) playlist = pd.read_csv('pre_total_playlist.csv') playlist.Lyric[0] # loaded_model = load_model('GRU_model.h5') # print("\n 테스트 정확도: %.4f" % (loaded_model.evaluate(X_test, y_test)[1])) ``` # 예측 결과 DataFrame ``` def multi_sentiment_predict(new_sentence): new_sentence = re.sub(r'[^ㄱ-ㅎㅏ-ㅣ가-힣 ]','', new_sentence) new_sentence = okt.morphs(new_sentence, stem=True) # 토큰화 new_sentence = [word for word in new_sentence if not word in stop_w] # 불용어 제거 encoded = tokenizer.texts_to_sequences([new_sentence]) # 정수 인코딩 pad_new = pad_sequences(encoded, maxlen = max_len) # 패딩 score = model.predict(pad_new) # 예측 if score[0][0] == score[0].max(): y = round(score[0][0] * 100,2) e = '공포' elif score[0][1] == score[0].max(): y = round(score[0][1] * 100,2) e = '놀람' elif score[0][2] == score[0].max(): y = round(score[0][2] * 100,2) e = '분노' elif score[0][3] == score[0].max(): y = round(score[0][3] * 100,2) e = '슬픔' elif score[0][4] == score[0].max(): y = round(score[0][4] * 100,2) e = '중립' elif score[0][5] == score[0].max(): y = round(score[0][5] * 100,2) e = '행복' elif score[0][6] == score[0].max(): y = round(score[0][6] * 100,2) e = '혐오' return [y,e] result = [] for i in range(len(playlist)): emotion_list = { 'emotion' : multi_sentiment_predict(playlist.Lyric[i])[1], 'percentage': multi_sentiment_predict(playlist.Lyric[i])[0] } df = pd.DataFrame.from_dict(emotion_list, orient='index') df = df.transpose() result.append(df) emotion_df = pd.concat(result).reset_index(drop=True) emotion_df meta_emotion_df = emotion_df.join(playlist,lsuffix='index') meta_emotion_df.head() meta_emotion_df.to_csv('meta_emotion.csv',index=False) ``` --- ``` latest = pd.read_csv('latest_meta.csv') result = [] for i in range(len(latest)): emotion_list = { 'emotion' : multi_sentiment_predict(latest.Lyric[i])[1], 'percentage': multi_sentiment_predict(latest.Lyric[i])[0] } df = pd.DataFrame.from_dict(emotion_list, orient='index') df = df.transpose() result.append(df) emotion_df = pd.concat(result).reset_index(drop=True) emotion_df meta_emotion_df = emotion_df.join(latest,lsuffix='index') meta_emotion_df.head() meta_emotion_df.to_csv('meta_emotion_latest.csv',index=False) ```
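A short aside on the loss function chosen above: `sparse_categorical_crossentropy` is used because the labels are the integers 0 to 6. The equivalent formulation with one-hot targets would use `categorical_crossentropy`. Below is a minimal sketch (not part of the original notebook), assuming the `model`, `X_train`, and integer `y_train` defined above.

```
import tensorflow as tf

# One-hot encode the 7 integer emotion labels (0..6); a trailing dimension of 1 is flattened internally
y_train_onehot = tf.keras.utils.to_categorical(y_train, num_classes=7)

# Same architecture, recompiled for one-hot targets; training is equivalent to the sparse version
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
# model.fit(X_train, y_train_onehot, epochs=15, batch_size=64, validation_split=0.2)
```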
# Talks markdown generator for academicpages

Takes a TSV of talks with metadata and converts it for use with [academicpages.github.io](academicpages.github.io). This is an interactive Jupyter notebook ([see more info here](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html)). The core python code is also in `talks.py`. Run either from the `markdown_generator` folder after replacing `talks.tsv` with one containing your data.

TODO: Make this work with BibTeX and other databases, rather than Stuart's non-standard TSV format and citation style.

```
import pandas as pd
import os
```

## Data format

The TSV needs to have the following columns: title, type, url_slug, venue, date, location, talk_url, description, with a header at the top. Many of these fields can be blank, but the columns must be in the TSV.

- Fields that cannot be blank: `title`, `url_slug`, `date`. All else can be blank. `type` defaults to "Talk"
- `date` must be formatted as YYYY-MM-DD.
- `url_slug` will be the descriptive part of the .md file and the permalink URL for the page about the talk.
- The .md file will be `YYYY-MM-DD-[url_slug].md` and the permalink will be `https://[yourdomain]/talks/YYYY-MM-DD-[url_slug]`
- The combination of `url_slug` and `date` must be unique, as it will be the basis for your filenames

(A short validation sketch at the end of this notebook checks these requirements against your file.)

This is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create).

```
!cat talks.tsv
```

## Import TSV

Pandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or `\t`.

I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others.

```
talks = pd.read_csv("talks.tsv", sep="\t", header=0)
talks
```

## Escape special characters

YAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML-encoded equivalents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.

```
html_escape_table = {
    "&": "&amp;",
    '"': "&quot;",
    "'": "&apos;"
    }

def html_escape(text):
    if type(text) is str:
        return "".join(html_escape_table.get(c,c) for c in text)
    else:
        return "False"
```

## Creating the markdown files

This is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then concatenates a big string (`md`) that contains the markdown for each talk. It does the YAML metadata first, then does the description for the individual page.
```
loc_dict = {}

for row, item in talks.iterrows():

    md_filename = str(item.date) + "-" + item.url_slug + ".md"
    html_filename = str(item.date) + "-" + item.url_slug
    year = item.date[:4]

    md = "---\ntitle: \"" + item.title + '"\n'
    md += "collection: talks" + "\n"

    if len(str(item.type)) > 3:
        md += 'type: "' + item.type + '"\n'
    else:
        md += 'type: "Talk"\n'

    md += "permalink: /talks/" + html_filename + "\n"

    if len(str(item.venue)) > 3:
        md += 'venue: "' + item.venue + '"\n'

    # date is a required field, so gate it on item.date rather than item.location
    if len(str(item.date)) > 3:
        md += "date: " + str(item.date) + "\n"

    if len(str(item.location)) > 3:
        md += 'location: "' + str(item.location) + '"\n'

    md += "---\n"

    if len(str(item.talk_url)) > 3:
        md += "\n[More information here](" + item.talk_url + ")\n"

    if len(str(item.description)) > 3:
        md += "\n" + html_escape(item.description) + "\n"

    md_filename = os.path.basename(md_filename)
    #print(md)

    with open("../_talks/" + md_filename, 'w') as f:
        f.write(md)
```

These files are in the talks directory, one directory below where we're working from.

```
!ls ../_talks
!cat ../_talks/2013-03-01-tutorial-1.md
```
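As a small extension (not part of the original notebook), the format rules listed under *Data format* above can be checked before any files are generated. A minimal sketch using the `talks` dataframe loaded earlier:

```
required_columns = {'title', 'type', 'url_slug', 'venue', 'date',
                    'location', 'talk_url', 'description'}

# every expected column must exist in the TSV
missing = required_columns - set(talks.columns)
assert not missing, "missing columns: {}".format(missing)

# title, url_slug and date cannot be blank
for col in ['title', 'url_slug', 'date']:
    assert talks[col].notna().all(), "blank values in required column '{}'".format(col)

# date must be formatted as YYYY-MM-DD
assert talks['date'].astype(str).str.match(r'^\d{4}-\d{2}-\d{2}$').all(), "badly formatted date"

# date + url_slug determines the output filename, so the combination must be unique
assert not talks.duplicated(subset=['date', 'url_slug']).any(), "duplicate date + url_slug combination"
```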
# Sentiment Analysis using NLP (Spacy/ntlk) ### Check out Spacy's API! It's very extensive. https://spacy.io/api/doc ``` # Before you begin, make sure to install spacy and download 1 of the language models that have a host of relationships already # built in! Each model comes with prebuilt tokenization icons, Tags, dependencies, sentence segmentation, and entities! # pip install spacy # Download the large word vector models from spacy # Other Language models to be downloaded here: https://spacy.io/usage/models # python -m spacy download en_core_web_sm # python -m spacy download en_core_web_md # python -m spacy download en_core_web_lg ``` ### If you're not familiar with Basic Machine Learning Concepts and Logistic Regression, check out these videos to get caught up! (Links are in the description) ![Logistic_Regression](logisticRegression.PNG) ``` # DataFrame import pandas as pd # Matplot import matplotlib.pyplot as plt %matplotlib inline # Scikit-learn from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelEncoder from sklearn.metrics import confusion_matrix, classification_report, accuracy_score from sklearn.manifold import TSNE from sklearn.feature_extraction.text import TfidfVectorizer # Keras from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences from keras.models import Sequential from keras.layers import Activation, Dense, Dropout, Embedding, Flatten, Conv1D, MaxPooling1D, LSTM from keras import utils from keras.callbacks import ReduceLROnPlateau, EarlyStopping import spacy # nltk import nltk from nltk.corpus import stopwords from nltk.stem import SnowballStemmer # Word2vec import gensim # Utility import re import numpy as np from collections import Counter import time ``` # Preprocessing text data includes the following steps: - Tokeniziation (Break down document into words, sentences, or other categories) - Removal of unnecessary text that don't add a lot of value to our machine learning model. - Normalization (simplifying word to its root form -- you can think of this as getting the infinitive of a verb) # Let's start with Tokenization ``` text = """ Hi! My name is Spencer and I love doing videos related to tech implementation/strategy. If you like finance, tech, or real world Q&A videos about Data Science, Data Engineering, and Data Analyst related topics, let me know in the comments down below! *psst* If you made it this far in the video, why don't you subscribe? I upload videos every week on a Sunday ~7:00 PM EST. If you like this type of content make sure to hit that like button! That really helps out with the growth of this channel. :) """ # In the NLP pipeline world, you would typically preprocess your data before you start feeding it to your NLP model. # For instance, let's begin by removing special characters. # In the real world, you would have many data streams coming in in different formats. And, you would run your various regex # functions to identify known patterns of text and store them in JSONL or other data storage files. pattern = r'[^A-Za-z ]' regex = re.compile(pattern) result = regex.sub('', text) result # Load in the NLP model that you have chosen to downloaded; I have the large model. nlp = spacy.load("en_core_web_lg") doc = nlp(result) # Let's get each individual word as an element. tokens = [token for token in doc] tokens ''' The idea behind normalizing words (lemmatization) seeks to convert your text to the 'base' format. 
Print out the following: Token, Check if stop word, lemma version. ''' for t in tokens: print('Token is : ', t,'--- Is this a stop word? ', t.is_stop, '--- Lemmatized token is: ', t.lemma_) # Store the lemmas without the words. lemmas = [t.lemma_ for t in tokens if not t.is_stop] tokens[1] ``` # From this point on, I used a lot of the code from this kaggle notebook (And gained much of my inspiration to look into NLP). SO DO CHECK IT OUT! Link is in the description. # https://www.kaggle.com/paoloripamonti/twitter-sentiment-analysis/output ``` # Reading in twitter data on sentiment. (NEGATIVE, POSITIVE for target) # Already cleaned and preprocessed... df = pd.read_csv('twitter_data.csv') df = df.sample(frac=1).reset_index() df = df.drop(['index'], axis = 1) df nltk.download('stopwords') stop_words = stopwords.words("english") stemmer = SnowballStemmer("english") def preprocess(text, stem=False): # Remove link,user and special characters text = re.sub("@\S+|https?:\S+|http?:\S|[^A-Za-z0-9]+", ' ', str(text).lower()).strip() tokens = [] for token in text.split(): if token not in stop_words: if stem: tokens.append(stemmer.stem(token)) else: tokens.append(token) return " ".join(tokens) %%time df.text = df.text.apply(lambda x: preprocess(x)) # preprocessing the text data. df.text[0] # Split into train and test dataset. df_train, df_test = train_test_split(df, test_size=0.2, random_state=42) print("TRAIN size:", len(df_train)) print("TEST size:", len(df_test)) df.text[0:10] ``` # Word2vec This neural network vectorizes the words so that each 'text' can be understood in a neural network. But, the main purpose of the 2 layer neural network is to convert text to a vector, where the vector can be read in to a future neural network AND has a relationship embeddings behind the vectorized values based on cosine similarity. ``` %%time documents = [_text.split() for _text in df_train.text] w2v_model = gensim.models.word2vec.Word2Vec(vector_size =300, # vector size window=7, # distance between current and predicted word within a sentence min_count=10, # ignores words with total frequency less than the parameter workers=8) # threads w2v_model.build_vocab(documents) words = w2v_model.wv.index_to_key vocab_size = len(words) print("Vocab size", vocab_size) %%time w2v_model.train(documents, total_examples=len(documents), epochs=32) w2v_model.wv.most_similar("like") w2v_model.wv.most_similar("comment") # w2v_model.wv.most_similar("subscribe") ``` # Tokenize Text & Create Embedding Layer. Once you have created a word2vec model, go back and your observations to a token. ``` %%time tokenizer = Tokenizer() tokenizer.fit_on_texts(df_train.text) vocab_size = len(tokenizer.word_index) + 1 print("Total words", vocab_size) %%time x_train = pad_sequences(tokenizer.texts_to_sequences(df_train.text), maxlen=300) x_test = pad_sequences(tokenizer.texts_to_sequences(df_test.text), maxlen=300) print(len(x_train), len(x_test)) # Creating an embedding layer that will act as an input layer for the neural network. embedding_matrix = np.zeros((vocab_size, 300)) for word, i in tokenizer.word_index.items(): if word in w2v_model.wv: embedding_matrix[i] = w2v_model.wv[word] print(embedding_matrix.shape) # used in the future. 
embedding_layer = Embedding(vocab_size, 300, weights=[embedding_matrix], input_length=300, trainable=False) ``` # Label Encoder (Encoding the Dependent variables to 0 or 1) ``` labels = df_train.target.unique().tolist() labels encoder = LabelEncoder() encoder.fit(df_train.target.tolist()) y_train = encoder.transform(df_train.target.tolist()) y_test = encoder.transform(df_test.target.tolist()) y_train = y_train.reshape(-1,1) y_test = y_test.reshape(-1,1) print("y_train",y_train.shape) print("y_test",y_test.shape) df_train y_train ``` # Creating the Model. ``` model = Sequential() model.add(embedding_layer) model.add(Dropout(0.5)) model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2)) # This can be BERT which may have better performance. model.add(Dense(1, activation='sigmoid')) model.summary() model.compile(loss='binary_crossentropy', optimizer="adam", metrics=['accuracy']) callbacks = [ ReduceLROnPlateau(monitor='val_loss', patience=5, cooldown=0), EarlyStopping(monitor='val_accuracy', min_delta=1e-4, patience=5)] ``` # Train ## Note to self... I already ran the model and it took 2 hours to run... ``` %%time history = model.fit(x_train, y_train, batch_size=1024, epochs=8, validation_split=0.1, verbose=1, callbacks = callbacks) ``` # Evaluate ``` %%time score = model.evaluate(x_test, y_test, batch_size=32) print() print("ACCURACY:",score[1]) print("LOSS:",score[0]) history.history acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(acc)) plt.plot(epochs, acc, 'b', label='Training acc') plt.plot(epochs, val_acc, 'r', label='Validation acc') plt.title('Training and validation accuracy') plt.legend() plt.figure() plt.plot(epochs, loss, 'b', label='Training loss') plt.plot(epochs, val_loss, 'r', label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() # Seems like the model may be overfitting. training loss << validation loss def decode_sentiment(score): return 'NEGATIVE' if score < 0.5 else 'POSITIVE' def predict(text, include_neutral=True): start_at = time.time() # Tokenize text x_test = pad_sequences(tokenizer.texts_to_sequences([text]), maxlen=300) # Predict score = model.predict([x_test])[0] # print(score) # Decode sentiment label = decode_sentiment(score[0]) return {"label": label, "score": float(score), "elapsed_time": time.time()-start_at} predict("Leave a like on this video, comment, and subscribe for more!") predict("I hope you like it!") predict("I love you") ```
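To make the cosine-similarity idea from the Word2vec section above concrete, the similarity that `most_similar` ranks by can also be computed directly from the trained vectors. A minimal sketch (not part of the original notebook), assuming the `w2v_model` trained earlier; both example words were already queried with `most_similar` above, so they are in the vocabulary.

```
import numpy as np

def cosine_similarity(u, v):
    # cosine of the angle between two word vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

v_like = w2v_model.wv['like']
v_comment = w2v_model.wv['comment']
print(cosine_similarity(v_like, v_comment))

# gensim reports the same quantity directly
print(w2v_model.wv.similarity('like', 'comment'))
```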
# Session 2 - Instrumental Variables ## Contents - [Overview](#Overview) - [Simple Linear Regression](#Simple-Linear-Regression) - [Extending the Linear Regression Model](#Extending-the-Linear-Regression-Model) - [Endogeneity](#Endogeneity) - [Matrix Algebra](#Matrix-Algebra) ``` # Import everything import pandas as pd import numpy as np import seaborn as sns import statsmodels.api as sm from numpy.linalg import inv from statsmodels.iolib.summary2 import summary_col from linearmodels.iv import IV2SLS # Import matplotlib for graphs import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import axes3d # Set global parameters %matplotlib inline plt.style.use('seaborn-white') plt.rcParams['figure.figsize'] = (8,5) plt.rcParams['figure.titlesize'] = 20 plt.rcParams['axes.titlesize'] = 18 plt.rcParams['axes.labelsize'] = 12 plt.rcParams['legend.fontsize'] = 12 ``` ## Simple Linear Regression In [Acemoglu, Johnson, Robinson (2001), "*The Colonial Origins of Comparative Development*"](https://economics.mit.edu/files/4123) the authors wish to determine whether or not differences in institutions can help to explain observed economic outcomes. How do we measure *institutional differences* and *economic outcomes*? In this paper, - economic outcomes are proxied by log GDP per capita in 1995, adjusted for exchange rates. - institutional differences are proxied by an index of protection against expropriation on average over 1985-95, constructed by the [Political Risk Services Group](https://www.prsgroup.com/). These variables and other data used in the paper are available for download on Daron Acemoglu’s [webpage](https://economics.mit.edu/faculty/acemoglu/data/ajr2001). THe original dataset in in Stata `.dta` format but has been converted to `.csv`. ``` # Load Acemoglu Johnson Robinson Dataset df = pd.read_csv('data/AJR02.csv') df.head() ``` Let’s use a scatterplot to see whether any obvious relationship exists between GDP per capita and the protection against expropriation. ``` # Plot relationship between GDP and expropriation rate df.plot(x='avexpr', y='logpgp95', kind='scatter'); ``` The plot shows a fairly strong positive relationship between protection against expropriation and log GDP per capita. Specifically, if higher protection against expropriation is a measure of institutional quality, then better institutions appear to be positively correlated with better economic outcomes (higher GDP per capita). Given the plot, choosing a linear model to describe this relationship seems like a reasonable assumption. 
We can write our model as $$ {logpgp95}_i = \beta_0 + \beta_1 {avexpr}_i + \varepsilon_i $$ where: - $ \beta_0 $ is the intercept of the linear trend line on the y-axis - $ \beta_1 $ is the slope of the linear trend line, representing the *marginal effect* of protection against risk on log GDP per capita - $ \varepsilon_i $ is a random error term (deviations of observations from the linear trend due to factors not included in the model) Visually, this linear model involves choosing a straight line that best fits the data, as in the following plot (Figure 2 in [[AJR01]](https://python-programming.quantecon.org/zreferences.html#acemoglu2001)) ``` # Dropping NA's is required to use numpy's polyfit df_subset = df.dropna(subset=['logpgp95', 'avexpr']) # Use only 'base sample' for plotting purposes df_subset = df_subset[df_subset['baseco'] == 1] X = df_subset['avexpr'] y = df_subset['logpgp95'] labels = df_subset['shortnam'] fig, ax = plt.subplots(1,1,figsize=(8,5)) ax.set_title('Figure 2', fontsize=18); ax.scatter(X, y, marker='') # Replace markers with country labels for i, label in enumerate(labels): ax.annotate(label, (X.iloc[i], y.iloc[i])) # Fit a linear trend line sns.regplot(x=X, y=y, ax=ax, order=1, scatter=False) ax.set_xlim([3.3,10.5]) ax.set_ylim([4,10.5]) ax.set_xlabel('Average Expropriation Risk 1985-95') ax.set_ylabel('Log GDP per capita, PPP, 1995') plt.show() ``` The most common technique to estimate the parameters ($ \beta $’s) of the linear model is Ordinary Least Squares (OLS). As the name implies, an OLS model is solved by finding the parameters that minimize *the sum of squared residuals*, i.e. $$ \underset{\hat{\beta}}{\min} \sum^N_{i=1}{\hat{u}^2_i} $$ where $ \hat{u}_i $ is the difference between the observation and the predicted value of the dependent variable. To estimate the constant term $ \beta_0 $, we need to add a column of 1’s to our dataset (consider the equation if $ \beta_0 $ was replaced with $ \beta_0 x_i $ and $ x_i = 1 $) ``` df['const'] = 1 ``` Now we can construct our model in `statsmodels` using the OLS function. We will use `pandas` dataframes with `statsmodels`, however standard arrays can also be used as arguments ``` # Regress GDP on Expropriation Rate reg1 = sm.OLS(endog=df['logpgp95'], exog=df[['const', 'avexpr']], \ missing='drop') type(reg1) ``` So far we have simply constructed our model. We need to use `.fit()` to obtain parameter estimates $ \hat{\beta}_0 $ and $ \hat{\beta}_1 $ ``` # Fit regression results = reg1.fit() type(results) ``` We now have the fitted regression model stored in `results`. To view the OLS regression results, we can call the `.summary()` method. Note that an observation was mistakenly dropped from the results in the original paper (see the note located in maketable2.do from Acemoglu’s webpage), and thus the coefficients differ slightly. ``` results.summary() ``` From our results, we see that - The intercept $ \hat{\beta}_0 = 4.63 $. - The slope $ \hat{\beta}_1 = 0.53 $. - The positive $ \hat{\beta}_1 $ parameter estimate implies that. institutional quality has a positive effect on economic outcomes, as we saw in the figure. - The p-value of 0.000 for $ \hat{\beta}_1 $ implies that the effect of institutions on GDP is statistically significant (using p < 0.05 as a rejection rule). - The R-squared value of 0.611 indicates that around 61% of variation in log GDP per capita is explained by protection against expropriation. 
Using our parameter estimates, we can now write our estimated relationship as $$ \widehat{logpgp95}_i = 4.63 + 0.53 \ {avexpr}_i $$ This equation describes the line that best fits our data, as shown in Figure 2. We can use this equation to predict the level of log GDP per capita for a value of the index of expropriation protection. For example, for a country with an index value of 6.51 (the average for the dataset), we find that their predicted level of log GDP per capita in 1995 is 8.09. ``` mean_expr = np.mean(df_subset['avexpr']) mean_expr predicted_logpdp95 = results.params[0] + results.params[1] * mean_expr predicted_logpdp95 ``` An easier (and more accurate) way to obtain this result is to use `.predict()` and set $ constant = 1 $ and $ {avexpr}_i = mean\_expr $ ``` results.predict(exog=[1, mean_expr]) ``` We can obtain an array of predicted $ {logpgp95}_i $ for every value of $ {avexpr}_i $ in our dataset by calling `.predict()` on our results. Plotting the predicted values against $ {avexpr}_i $ shows that the predicted values lie along the linear line that we fitted above. The observed values of $ {logpgp95}_i $ are also plotted for comparison purposes ``` fig, ax = plt.subplots(1,1,figsize=(8,5)) ax.set_title('OLS predicted values', fontsize=18) # Drop missing observations from whole sample df_plot = df.dropna(subset=['logpgp95', 'avexpr']) ax.plot(df_plot['avexpr'], results.predict(), alpha=0.5, label='predicted', c='r') # Plot observed values ax.scatter(df_plot['avexpr'], df_plot['logpgp95'], alpha=0.5, label='observed') ax.legend() ax.set_xlabel('avexpr') ax.set_ylabel('logpgp95') plt.show() ``` ## Extending the Linear Regression Model So far we have only accounted for institutions affecting economic performance - almost certainly there are numerous other factors affecting GDP that are not included in our model. Leaving out variables that affect $ logpgp95_i $ will result in **omitted variable bias**, yielding biased and inconsistent parameter estimates. We can extend our bivariate regression model to a **multivariate regression model** by adding in other factors that may affect $ logpgp95_i $. [[AJR01]](https://python-programming.quantecon.org/zreferences.html#acemoglu2001) consider other factors such as: - the effect of climate on economic outcomes; latitude is used to proxy this - differences that affect both economic performance and institutions, eg. cultural, historical, etc.; controlled for with the use of continent dummies Let’s estimate some of the extended models considered in the paper (Table 2) using data from `maketable2.dta` ``` # Add constant term to dataset df['const'] = 1 # Create lists of variables to be used in each regression X1 = ['const', 'avexpr'] X2 = ['const', 'avexpr', 'lat_abst'] X3 = ['const', 'avexpr', 'lat_abst', 'asia', 'africa', 'other'] # Estimate an OLS regression for each set of variables reg1 = sm.OLS(df['logpgp95'], df[X1], missing='drop').fit() reg2 = sm.OLS(df['logpgp95'], df[X2], missing='drop').fit() reg3 = sm.OLS(df['logpgp95'], df[X3], missing='drop').fit() ``` Now that we have fitted our model, we will use `summary_col` to display the results in a single table (model numbers correspond to those in the paper) ``` info_dict={'No. 
observations' : lambda x: f"{int(x.nobs):d}"} results_table = summary_col(results=[reg1,reg2,reg3], float_format='%0.2f', stars = True, model_names=['Model 1','Model 3','Model 4'], info_dict=info_dict, regressor_order=['const','avexpr','lat_abst','asia','africa']) results_table ``` ## Endogeneity As [[AJR01]](https://python-programming.quantecon.org/zreferences.html#acemoglu2001) discuss, the OLS models likely suffer from **endogeneity** issues, resulting in biased and inconsistent model estimates. Namely, there is likely a two-way relationship between institutions and economic outcomes: - richer countries may be able to afford or prefer better institutions - variables that affect income may also be correlated with institutional differences - the construction of the index may be biased; analysts may be biased towards seeing countries with higher income having better institutions To deal with endogeneity, we can use **two-stage least squares (2SLS) regression**, which is an extension of OLS regression. This method requires replacing the endogenous variable $ {avexpr}_i $ with a variable that is: 1. correlated with $ {avexpr}_i $ 1. not correlated with the error term (ie. it should not directly affect the dependent variable, otherwise it would be correlated with $ u_i $ due to omitted variable bias) The new set of regressors is called an **instrument**, which aims to remove endogeneity in our proxy of institutional differences. The main contribution of [[AJR01]](https://python-programming.quantecon.org/zreferences.html#acemoglu2001) is the use of settler mortality rates to instrument for institutional differences. They hypothesize that higher mortality rates of colonizers led to the establishment of institutions that were more extractive in nature (less protection against expropriation), and these institutions still persist today. Using a scatterplot (Figure 3 in [[AJR01]](https://python-programming.quantecon.org/zreferences.html#acemoglu2001)), we can see protection against expropriation is negatively correlated with settler mortality rates, coinciding with the authors’ hypothesis and satisfying the first condition of a valid instrument. ``` # Dropping NA's is required to use numpy's polyfit df_subset2 = df.dropna(subset=['logem4', 'avexpr']) X = df_subset2['logem4'] y = df_subset2['avexpr'] labels = df_subset2['shortnam'] fig, ax = plt.subplots(1,1,figsize=(8,5)) ax.set_title('Figure 3: First-stage', fontsize=18) # Replace markers with country labels ax.scatter(X, y, marker='') for i, label in enumerate(labels): ax.annotate(label, (X.iloc[i], y.iloc[i])) # Fit a linear trend line ax.plot(np.unique(X), np.poly1d(np.polyfit(X, y, 1))(np.unique(X)), color='black') ax.set_xlim([1.8,8.4]) ax.set_ylim([3.3,10.4]) ax.set_xlabel('Log of Settler Mortality') ax.set_ylabel('Average Expropriation Risk 1985-95'); ``` The second condition may not be satisfied if settler mortality rates in the 17th to 19th centuries have a direct effect on current GDP (in addition to their indirect effect through institutions). For example, settler mortality rates may be related to the current disease environment in a country, which could affect current economic performance. [[AJR01]](https://python-programming.quantecon.org/zreferences.html#acemoglu2001) argue this is unlikely because: - The majority of settler deaths were due to malaria and yellow fever and had a limited effect on local people. 
- The disease burden on local people in Africa or India, for example, did not appear to be higher than average, supported by relatively high population densities in these areas before colonization. As we appear to have a valid instrument, we can use 2SLS regression to obtain consistent and unbiased parameter estimates. **First stage** The first stage involves regressing the endogenous variable ($ {avexpr}_i $) on the instrument. The instrument is the set of all exogenous variables in our model (and not just the variable we have replaced). Using model 1 as an example, our instrument is simply a constant and settler mortality rates $ {logem4}_i $. Therefore, we will estimate the first-stage regression as $$ {avexpr}_i = \delta_0 + \delta_1 {logem4}_i + v_i $$ ``` # Import and select the data df = df.loc[df['baseco']==1,:] # Add a constant variable df['const'] = 1 # Fit the first stage regression and print summary results_fs = sm.OLS(df['avexpr'], df.loc[:,['const', 'logem4']], missing='drop').fit() results_fs.summary() ``` **Second stage** We need to retrieve the predicted values of $ {avexpr}_i $ using `.predict()`. We then replace the endogenous variable $ {avexpr}_i $ with the predicted values $ \widehat{avexpr}_i $ in the original linear model. Our second stage regression is thus $$ {logpgp95}_i = \beta_0 + \beta_1 \widehat{avexpr}_i + u_i $$ ``` # Second stage df['predicted_avexpr'] = results_fs.predict() results_ss = sm.OLS(df['logpgp95'], df[['const', 'predicted_avexpr']]).fit() # Print results_ss.summary() ``` The second-stage regression results give us an unbiased and consistent estimate of the effect of institutions on economic outcomes. The result suggests a stronger positive relationship than what the OLS results indicated. Note that while our parameter estimates are correct, our standard errors are not and for this reason, computing 2SLS ‘manually’ (in stages with OLS) is not recommended. We can correctly estimate a 2SLS regression in one step using the [linearmodels](https://github.com/bashtage/linearmodels) package, an extension of `statsmodels` Note that when using `IV2SLS`, the exogenous and instrument variables are split up in the function arguments (whereas before the instrument included exogenous variables) ``` # IV regression iv = IV2SLS(dependent=df['logpgp95'], exog=df['const'], endog=df['avexpr'], instruments=df['logem4']).fit(cov_type='unadjusted') # Print iv.summary ``` Given that we now have consistent and unbiased estimates, we can infer from the model we have estimated that institutional differences (stemming from institutions set up during colonization) can help to explain differences in income levels across countries today. [[AJR01]](https://python-programming.quantecon.org/zreferences.html#acemoglu2001) use a marginal effect of 0.94 to calculate that the difference in the index between Chile and Nigeria (ie. institutional quality) implies up to a 7-fold difference in income, emphasizing the significance of institutions in economic development. ## Matrix Algebra The OLS parameter $ \beta $ can also be estimated using matrix algebra and `numpy`. 
The linear equation we want to estimate is (written in matrix form) $$ y = X\beta + \varepsilon $$ To solve for the unknown parameter $ \beta $, we want to minimize the sum of squared residuals $$ \underset{\hat{\beta}}{\min} \ \hat{\varepsilon}'\hat{\varepsilon} $$ Rearranging the first equation and substituting into the second equation, we can write $$ \underset{\hat{\beta}}{\min} \ (y - X\hat{\beta})' (y - X\hat{\beta}) $$ Solving this optimization problem gives the solution for the $ \hat{\beta} $ coefficients $$ \hat{\beta} = (X'X)^{-1}X'y $$ ``` # Init X = df[['const', 'avexpr']].values Z = df[['const', 'logem4']].values y = df['logpgp95'].values # Compute beta OLS beta_OLS = inv(X.T @ X) @ X.T @ y print(beta_OLS) ``` As we saw above, the OLS coefficient might suffer from endogeneity bias. We can solve the issue by instrumenting the average expropriation rate with settler mortality. If we define settler mortality as $Z$, our full model is $$ y = X\beta + \varepsilon \\ X = Z\gamma + \mu $$ We refer to the first equation as the structural equation and to the second equation as the first stage. In our case, since the number of endogenous variables is equal to the number of instruments, there are two equivalent estimators that do not suffer from endogeneity bias: 2SLS and IV. The IV (one-stage) estimator is $$ \hat \beta_{IV} = (Z'X)^{-1} Z' y $$ ``` # Compute beta IV beta_IV = inv(Z.T @ X) @ Z.T @ y print(beta_IV) ``` One of the hypotheses behind the IV estimator is the *relevance* of the instrument, i.e. we have a strong predictor in the first stage. This is the only hypothesis that we can empirically assess by checking the significance of the first stage coefficient. $$ \hat \gamma = (Z' Z)^{-1} Z'X \\ \hat Var (\hat \gamma) = \sigma_u^2 (Z' Z)^{-1} $$ where $$ u = X - Z \hat \gamma $$ ``` # Estimate first stage coefficient gamma_hat = (inv(Z.T @ Z) @ Z.T @ X) print(gamma_hat[1,1]) # Compute variance of the estimator u = X - Z @ gamma_hat var_gamma_hat = np.var(u) * inv(Z.T @ Z) # Compute standard errors std_gamma_hat = var_gamma_hat[1,1]**.5 print(std_gamma_hat) # Compute 95% confidence interval CI = [gamma_hat[1,1] - 1.96*std_gamma_hat, gamma_hat[1,1] + 1.96*std_gamma_hat] print(CI) ``` The first stage coefficient is negative and significant, i.e. settler mortality is negatively correlated with the expropriation rate. How does it work when we have more instruments than endogenous variables? Two-Stage Least Squares. 1. Regress $X$ on $Z$ and obtain $\hat X$: $$ \hat X = Z (Z' Z)^{-1} Z'X $$ 2. Regress $y$ on $\hat X$ and obtain $\hat \beta_{2SLS}$ $$ \hat \beta_{2SLS} = (\hat X' \hat X)^{-1} \hat X' y $$ In our case, just for the sake of exposition, let's generate a second instrument: the settler mortality squared, `logem4_2` = `logem4`^2. ``` df['logem4_2'] = df['logem4']**2 # Define Z Z1 = df[['const', 'logem4', 'logem4_2']].values # Compute beta 2SLS X_hat = Z1 @ (inv(Z1.T @ Z1) @ Z1.T @ X) beta_2SLS = inv(X_hat.T @ X_hat) @ X_hat.T @ y print(beta_2SLS) ``` Changing the instruments changes the estimated coefficient values. ## Next Lecture Jump to [Session 3 - Nonparametrics](https://nbviewer.jupyter.org/github/matteocourthoud/Machine-Learning-for-Economic-Analysis-2020/blob/master/3_nonparametric.ipynb)
# Crowdastro ATLAS-CDFS Catalogue This notebook generates a catalogue of host galaxies for ATLAS-CDFS objects. This process proceeds as follows: 1. Take a radio object. 2. Find all nearby infrared objects. 3. Classify all nearby infrared objects and predict the probability of a positive label. 4. Select the infrared object with the highest probability. This is the host galaxy. This has some clear problems: What do we mean by "nearby"? What if we have two unrelated radio objects nearby each other? A model-based approach *à la* Fan et al. (2015) may resolve this kind of issue, but as we are investigating a model-free approach, we leave this for future research. We take nearby to mean within a $1'$ radius, as this is the radius that Radio Galaxy Zoo volunteers see. In the code below, note that we internally represent ATLAS and SWIRE objects by IDs. These are arbitrary integers. ## Functions We begin with some functions to perform the above steps. ``` # Imports. from typing import List import astropy.io.ascii import astropy.table import h5py import numpy import sklearn.linear_model import sklearn.cross_validation # Globals. # This file stores the ATLAS-CDFS and SWIRE-CDFS catalogues. CROWDASTRO_PATH = '../data/crowdastro_swire.h5' # This file stores the training features and labels. TRAINING_PATH = '../data/training_swire.h5' # ATLAS catalogue. ATLAS_CATALOGUE_PATH = '../data/ATLASDR3_cmpcat_23July2015.csv' # Path to output catalogue to. OUTPUT_PATH = '../data/crowdastro_catalogue.dat' # Radius we should consider an object "nearby". NEARBY = 1 / 60 # 1 arcmin in degrees. # Size of an ATLAS image vector. IMAGE_SIZE = 200 * 200 # Number of numeric features before the distance features. ATLAS_DIST_IDX = 2 + IMAGE_SIZE def find_host(probabilities: numpy.ndarray, atlas_id: int) -> int: """Finds the host galaxy associated with an ATLAS object. Arguments --------- probabilities (N,) array of predicted probabilities of SWIRE objects. atlas_id ID of the ATLAS object to find the host of. Returns ------- int ID of predicted host galaxy. """ with h5py.File(CROWDASTRO_PATH, 'r') as cr, h5py.File(TRAINING_PATH, 'r') as tr: # Get all nearby objects. ir_distances = cr['/atlas/cdfs/numeric'][atlas_id, ATLAS_DIST_IDX:] assert ir_distances.shape[0] == tr['features'].shape[0] # Make a list of IDs of nearby objects. nearby = sorted((ir_distances <= NEARBY).nonzero()[0]) # Find the best nearby candidate. nearby_probabilities = probabilities[nearby] # Select the highest probability object. best_index = nearby_probabilities.argmax() best_index = nearby[best_index] # Convert back into an IR index. return best_index def train_classifier(indices: List[int]) -> sklearn.linear_model.LogisticRegression: """Trains a classifier. Arguments --------- indices List of infrared training indices. Returns ------- sklearn.linear_model.LogisticRegression Trained logistic regression classifier. """ with h5py.File(TRAINING_PATH, 'r') as tr: features = numpy.nan_to_num(tr['features'].value[indices]) labels = tr['labels'].value[indices] lr = sklearn.linear_model.LogisticRegression(class_weight='balanced', penalty='l1') lr.fit(features, labels) return lr def predict(classifier: sklearn.linear_model.LogisticRegression, indices: List[int]) -> numpy.ndarray: """Predicts probabilities for a set of IR objects. Arguments --------- classifier Trained classifier. indices List of IR indices to predict probability of. Returns ------- numpy.ndarray (N,) NumPy array of predicted probabilities. 
""" with h5py.File(TRAINING_PATH, 'r') as tr: features = numpy.nan_to_num(tr['features'].value[indices]) return classifier.predict_proba(features)[:, 1] def train_and_predict(n_splits: int=10) -> numpy.ndarray: """Generates probabilities for IR objects. Notes ----- Instances will be split according to ATLAS index, not IR index. This is because there is overlap in IR objects' features, so we need to make sure that this overlap is not present in the testing data. Arguments --------- n_splits Number of splits in cross-validation. Returns ------- numpy.ndarray (N,) NumPy array of predictions. """ with h5py.File(CROWDASTRO_PATH, 'r') as cr: # Get the number of ATLAS IDs. n_atlas = cr['/atlas/cdfs/numeric'].shape[0] # Get the number of SWIRE IDs. n_swire = cr['/swire/cdfs/numeric'].shape[0] # Allocate the array of predicted probabilities. probabilities = numpy.zeros((n_swire,)) # Split into training/testing sets. kf = sklearn.cross_validation.KFold(n_atlas, n_folds=n_splits) # Train and predict. for train_indices, test_indices in kf: nearby_train = (cr['/atlas/cdfs/numeric'].value[train_indices, ATLAS_DIST_IDX:] <= NEARBY).nonzero()[0] nearby_test = (cr['/atlas/cdfs/numeric'].value[test_indices, ATLAS_DIST_IDX:] <= NEARBY).nonzero()[0] classifier = train_classifier(nearby_train) fold_probs = predict(classifier, nearby_test) probabilities[nearby_test] = fold_probs return probabilities ``` ## Making predictions In this section, we predict probabilities of SWIRE objects and use these probabilities to find the predicted host galaxies of ATLAS objects. ``` probabilities = train_and_predict() with h5py.File(CROWDASTRO_PATH, 'r') as cr: n_atlas = cr['/atlas/cdfs/numeric'].shape[0] hosts = [find_host(probabilities, i) for i in range(n_atlas)] ``` ## Generating the catalogue We now generate a catalogue matching each ATLAS object to a SWIRE host galaxy. ``` # First, we need to get a list of the ATLAS and SWIRE object names. with h5py.File(CROWDASTRO_PATH, 'r') as cr: atlas_ids = cr['/atlas/cdfs/string'].value atlas_locs = cr['/atlas/cdfs/numeric'][:, :2] # Convert ATLAS IDs into names. atlas_catalogue = astropy.io.ascii.read(ATLAS_CATALOGUE_PATH) id_to_name = {r['ID']: r['name'] for r in atlas_catalogue} atlas_names = [id_to_name[id_.decode('ascii')] for zooniverse_id, id_ in atlas_ids] swire_names = [n.decode('ascii') for n in cr['/swire/cdfs/string']] swire_locs = cr['/swire/cdfs/numeric'][:, :2] # Now we can generate the catalogue. names = ('radio_object', 'infrared_host', 'ra', 'dec') table = astropy.table.Table(names=names, dtype=('S50', 'S50', 'float', 'float')) for atlas_index, atlas_name in enumerate(atlas_names): host = hosts[atlas_index] swire_name = swire_names[host] ra, dec = swire_locs[host] table.add_row((atlas_name, swire_name, ra, dec)) astropy.io.ascii.write(table=table, output=OUTPUT_PATH) ``` ## Analysis We will now compare this to the Norris et al. (2006) catalogue.
HP strings can be converted to the following formats via the `output_format` parameter: * `compact`: only number strings without any separators or whitespace, like "516179157" * `standard`: HP strings with proper whitespace in the proper places. Note that in the case of HP, the compact format is the same as the standard one. Invalid parsing is handled with the `errors` parameter: * `coerce` (default): invalid parsing will be set to NaN * `ignore`: invalid parsing will return the input * `raise`: invalid parsing will raise an exception The following sections demonstrate the functionality of `clean_il_hp()` and `validate_il_hp()`. ### An example dataset containing HP strings ``` import pandas as pd import numpy as np df = pd.DataFrame( { "hp": [ ' 5161 79157 ', '516179150', 'BE 428759497', 'BE431150351', "002 724 334", "hello", np.nan, "NULL", ], "address": [ "123 Pine Ave.", "main st", "1234 west main heights 57033", "apt 1 789 s maple rd manhattan", "robie house, 789 north main street", "1111 S Figueroa St, Los Angeles, CA 90015", "(staples center) 1111 S Figueroa St, Los Angeles", "hello", ] } ) df ``` ## 1. Default `clean_il_hp` By default, `clean_il_hp` will clean HP strings and output them in the standard format with proper separators. ``` from dataprep.clean import clean_il_hp clean_il_hp(df, column = "hp") ``` ## 2. Output formats This section demonstrates the `output_format` parameter. ### `standard` (default) ``` clean_il_hp(df, column = "hp", output_format="standard") ``` ### `compact` ``` clean_il_hp(df, column = "hp", output_format="compact") ``` ## 3. `inplace` parameter This deletes the given column from the returned DataFrame. A new column containing cleaned HP strings is added with a title in the format `"{original title}_clean"`. ``` clean_il_hp(df, column="hp", inplace=True) ``` ## 4. `errors` parameter ### `coerce` (default) ``` clean_il_hp(df, "hp", errors="coerce") ``` ### `ignore` ``` clean_il_hp(df, "hp", errors="ignore") ``` ## 5. `validate_il_hp()` `validate_il_hp()` returns `True` when the input is a valid HP. Otherwise it returns `False`. The input of `validate_il_hp()` can be a string, a pandas Series, a Dask Series, a pandas DataFrame or a Dask DataFrame. When the input is a string, a pandas Series or a Dask Series, no column name needs to be specified. When the input is a pandas DataFrame or a Dask DataFrame, the user may optionally specify a column name: if a column name is given, `validate_il_hp()` returns the validation results for that column only; otherwise it returns the validation results for the whole DataFrame. ``` from dataprep.clean import validate_il_hp print(validate_il_hp(' 5161 79157 ')) print(validate_il_hp('516179150')) print(validate_il_hp('BE 428759497')) print(validate_il_hp('BE431150351')) print(validate_il_hp("004085616")) print(validate_il_hp("hello")) print(validate_il_hp(np.nan)) print(validate_il_hp("NULL")) ``` ### Series ``` validate_il_hp(df["hp"]) ``` ### DataFrame + Specify Column ``` validate_il_hp(df, column="hp") ``` ### Only DataFrame ``` validate_il_hp(df) ```
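Although only pandas inputs are shown above, the description also mentions Dask inputs. A minimal sketch is given below; it assumes `dask` is installed and that the installed `dataprep` version handles Dask objects as described (depending on the version, the returned object may need `.compute()` to materialize).

```
# Sketch: validate HP strings in a Dask DataFrame (assumes dask is installed).
import dask.dataframe as dd

ddf = dd.from_pandas(df, npartitions=2)

print(validate_il_hp(ddf["hp"]))          # Dask Series input
print(validate_il_hp(ddf, column="hp"))   # Dask DataFrame input with a column name
```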
# Scikit-Learn IRIS Model * Wrap a scikit-learn python model for use as a prediction microservice in seldon-core * Run locally on Docker to test * Deploy on seldon-core running on a kubernetes cluster ## Dependencies * [S2I](https://github.com/openshift/source-to-image) ```bash pip install sklearn pip install seldon-core ``` ## Train locally ``` %%writefile train_iris.py import joblib from sklearn.pipeline import Pipeline from sklearn.linear_model import LogisticRegression from sklearn import datasets OUTPUT_FILE = "IrisClassifier.sav" def main(): clf = LogisticRegression(solver="liblinear", multi_class="ovr") p = Pipeline([("clf", clf)]) print("Training model...") p.fit(X, y) print("Model trained!") print(f"Saving model in {OUTPUT_FILE}") joblib.dump(p, OUTPUT_FILE) print("Model saved!") if __name__ == "__main__": print("Loading iris data set...") iris = datasets.load_iris() X, y = iris.data, iris.target print("Dataset loaded!") main() !python train_iris.py ``` ## Wrap model with Python Wrapper Class ``` %%writefile IrisClassifier.py import joblib class IrisClassifier(object): def __init__(self): self.model = joblib.load('IrisClassifier.sav') def predict(self,X,features_names): return self.model.predict_proba(X) ``` Wrap model using s2i ## REST test ``` !s2i build -E environment_rest . seldonio/seldon-core-s2i-python3:1.5.0-dev seldonio/sklearn-iris:0.1 !docker run --name "iris_predictor" -d --rm -p 5000:5000 seldonio/sklearn-iris:0.1 ``` Send some random features that conform to the contract ``` !curl -s http://localhost:5000/predict -H "Content-Type: application/json" -d '{"data":{"ndarray":[[5.964,4.006,2.081,1.031]]}}' !docker rm iris_predictor --force ``` ## Setup Seldon Core Use the setup notebook to [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html) to setup Seldon Core with an ingress - either Ambassador or Istio. 
Then port-forward to that ingress on localhost:8003 in a separate terminal either with: * Ambassador: `kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080` * Istio: `kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8003:80` ``` !kubectl create namespace seldon !kubectl config set-context $(kubectl config current-context) --namespace=seldon ``` Create Seldon Core config file ``` %%writefile sklearn_iris_deployment.yaml apiVersion: machinelearning.seldon.io/v1alpha2 kind: SeldonDeployment metadata: name: seldon-deployment-example spec: name: sklearn-iris-deployment predictors: - componentSpecs: - spec: containers: - image: seldonio/sklearn-iris:0.1 imagePullPolicy: IfNotPresent name: sklearn-iris-classifier graph: children: [] endpoint: type: REST name: sklearn-iris-classifier type: MODEL name: sklearn-iris-predictor replicas: 1 !kubectl create -f sklearn_iris_deployment.yaml !kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=seldon-deployment-example \ -o jsonpath='{.items[0].metadata.name}') # time is needed for the polling loop below import time for i in range(60): state=!kubectl get sdep seldon-deployment-example -o jsonpath='{.status.state}' state=state[0] print(state) if state=="Available": break time.sleep(1) assert(state=="Available") res=!curl -s http://localhost:8003/seldon/seldon/seldon-deployment-example/api/v0.1/predictions -H "Content-Type: application/json" -d '{"data":{"ndarray":[[5.964,4.006,2.081,1.031]]}}' res print(res) import json j=json.loads(res[0]) assert(j["data"]["ndarray"][0][0]>0.0) !kubectl delete -f sklearn_iris_deployment.yaml ```
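For reference (not part of the original notebook): while the deployment was still running, the same prediction request could have been sent from Python with `requests` instead of `curl`, assuming the port-forward to `localhost:8003` described above is active.

```
# Sketch: the curl request above, rewritten with the requests library.
# Assumes the port-forward to localhost:8003 and the SeldonDeployment are still in place.
import requests

payload = {"data": {"ndarray": [[5.964, 4.006, 2.081, 1.031]]}}
r = requests.post(
    "http://localhost:8003/seldon/seldon/seldon-deployment-example/api/v0.1/predictions",
    json=payload,
)
print(r.status_code)
print(r.json())
```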
# Autoregressive modelling with DeepAR and DeepVAR ``` import os import warnings warnings.filterwarnings("ignore") os.chdir("../../..") import matplotlib.pyplot as plt import pandas as pd import pytorch_lightning as pl from pytorch_lightning.callbacks import EarlyStopping import torch from pytorch_forecasting import Baseline, DeepAR, TimeSeriesDataSet from pytorch_forecasting.data import NaNLabelEncoder from pytorch_forecasting.data.examples import generate_ar_data from pytorch_forecasting.metrics import SMAPE, MultivariateNormalDistributionLoss ``` ## Load data We generate a synthetic dataset to demonstrate the network's capabilities. The data consists of a quadratic trend and a seasonality component. ``` data = generate_ar_data(seasonality=10.0, timesteps=400, n_series=100, seed=42) data["static"] = 2 data["date"] = pd.Timestamp("2020-01-01") + pd.to_timedelta(data.time_idx, "D") data.head() data = data.astype(dict(series=str)) # create dataset and dataloaders max_encoder_length = 60 max_prediction_length = 20 training_cutoff = data["time_idx"].max() - max_prediction_length context_length = max_encoder_length prediction_length = max_prediction_length training = TimeSeriesDataSet( data[lambda x: x.time_idx <= training_cutoff], time_idx="time_idx", target="value", categorical_encoders={"series": NaNLabelEncoder().fit(data.series)}, group_ids=["series"], static_categoricals=[ "series" ], # as we plan to forecast correlations, it is important to use series characteristics (e.g. a series identifier) time_varying_unknown_reals=["value"], max_encoder_length=context_length, max_prediction_length=prediction_length, ) validation = TimeSeriesDataSet.from_dataset(training, data, min_prediction_idx=training_cutoff + 1) batch_size = 128 # synchronize samples in each batch over time - only necessary for DeepVAR, not for DeepAR train_dataloader = training.to_dataloader( train=True, batch_size=batch_size, num_workers=0, batch_sampler="synchronized" ) val_dataloader = validation.to_dataloader( train=False, batch_size=batch_size, num_workers=0, batch_sampler="synchronized" ) ``` ## Calculate baseline error ``` # calculate baseline absolute error actuals = torch.cat([y[0] for x, y in iter(val_dataloader)]) baseline_predictions = Baseline().predict(val_dataloader) SMAPE()(baseline_predictions, actuals) ``` ``` pl.seed_everything(42) import pytorch_forecasting as ptf trainer = pl.Trainer(gpus=0, gradient_clip_val=1e-1) net = DeepAR.from_dataset( training, learning_rate=3e-2, hidden_size=30, rnn_layers=2, loss=MultivariateNormalDistributionLoss(rank=30) ) ``` ## Train network Finding the optimal learning rate using [PyTorch Lightning](https://pytorch-lightning.readthedocs.io/) is easy. 
``` # find optimal learning rate res = trainer.tuner.lr_find( net, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader, min_lr=1e-5, max_lr=1e0, early_stop_threshold=100, ) print(f"suggested learning rate: {res.suggestion()}") fig = res.plot(show=True, suggest=True) fig.show() net.hparams.learning_rate = res.suggestion() early_stop_callback = EarlyStopping(monitor="val_loss", min_delta=1e-4, patience=10, verbose=False, mode="min") trainer = pl.Trainer( max_epochs=30, gpus=0, weights_summary="top", gradient_clip_val=0.1, callbacks=[early_stop_callback], limit_train_batches=50, enable_checkpointing=True, ) net = DeepAR.from_dataset( training, learning_rate=0.1, log_interval=10, log_val_interval=1, hidden_size=30, rnn_layers=2, loss=MultivariateNormalDistributionLoss(rank=30), ) trainer.fit( net, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader, ) best_model_path = trainer.checkpoint_callback.best_model_path best_model = DeepAR.load_from_checkpoint(best_model_path) actuals = torch.cat([y[0] for x, y in iter(val_dataloader)]) predictions = best_model.predict(val_dataloader) (actuals - predictions).abs().mean() raw_predictions, x = net.predict(val_dataloader, mode="raw", return_x=True, n_samples=100) series = validation.x_to_index(x)["series"] for idx in range(20): # plot 20 examples best_model.plot_prediction(x, raw_predictions, idx=idx, add_loss_to_title=True) plt.suptitle(f"Series: {series.iloc[idx]}") ``` When using DeepVAR as a multivariate forecaster, we might also be interested in the correlation matrix. Here, there is no correlation between the series and we probably would need to train longer for this to show up. ``` cov_matrix = best_model.loss.map_x_to_distribution( best_model.predict(val_dataloader, mode=("raw", "prediction"), n_samples=None) ).base_dist.covariance_matrix.mean(0) # normalize the covariance matrix diagonal to 1.0 correlation_matrix = cov_matrix / torch.sqrt(torch.diag(cov_matrix)[None] * torch.diag(cov_matrix)[None].T) fig, ax = plt.subplots(1, 1, figsize=(10, 10)) ax.imshow(correlation_matrix, cmap="bwr"); # distribution of off-diagonal correlations plt.hist(correlation_matrix[correlation_matrix < 1].numpy()); 1 ```
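To put the trained model's error on the same scale as the baseline computed earlier (this comparison is an addition, not part of the original tutorial), the same SMAPE metric can be evaluated on its predictions:

```
# Compare the trained model with the baseline using the SMAPE metric from above.
# Reuses best_model, val_dataloader, baseline_predictions and actuals defined earlier.
model_predictions = best_model.predict(val_dataloader)
print("baseline SMAPE:", SMAPE()(baseline_predictions, actuals))
print("model SMAPE:   ", SMAPE()(model_predictions, actuals))
```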
``` import json import time import pickle import requests import pandas as pd url_main = 'https://api.hh.ru/vacancies' param = "?" hosts = ["host=hh.ua", "host=hh.ru", "host=career.ru", "host=jobs.tut.by", "host=jobs.day.az", "host=hh.uz", "host=hh.kz", "host=headhunter.ge", "host=headhunter.kg"] an = "&" per_p = "per_page=" page = "page=" url = 'https://api.hh.ru/vacancies?host=hh.ua&per_page=100&page=' r = requests.get('https://api.hh.ru/vacancies?host=hh.ua&per_page=100&page=0') print(r.status_code) #print(r.text) #vacancies = json.loads(r.text) #print(json.loads(r.text)["items"][0]) #print() #print(json.loads(r.text)["items"][0]["snippet"]) len(json.loads(r.text)["items"]) def load_vacancies_ids(url, hosts): vacancies = [] vac_ids = [] param = "?" an = "&" per_page = "per_page=100" page = "page=" for host in hosts: for i in range(20): url_mod = url + param + host url_mod = url_mod + an + per_page + an + page + str(i) r = requests.get(url_mod) vac = json.loads(r.text) vacancies = vacancies + vac["items"] vac_ids = vac_ids + [i["id"] for i in vac["items"]] return vac_ids, vacancies %%time vac_ids, vacancies = load_vacancies_ids(url_main, hosts) len(vac_ids) ids_set = set(vac_ids) len(ids_set) list(ids_set)[100] with open("headHunter_data/hh_ids.dat", "rb") as inf: ids2_list = pickle.load(inf) with open("headHunter_data/hh_vacancies.dat", "rb") as inf: vacancies2 = pickle.load(inf) with open("headHunter_data/hh_vacancies_ext.dat", "rb") as inf: vacancies_ext2 = pickle.load(inf) print(len(vacancies_ext2)) print(len(ids2_list)) #print(len(ids2_list + list(ids_set))) ids_all = set(ids2_list + list(ids_set)) print(len(ids_all)) len(ids_set.intersection(set(ids2_list))) unic_ids = ids_set.difference(ids_set.intersection(set(ids2_list))) print(len(unic_ids)) vacancies = vacancies2 + vacancies print(len(vacancies)) ids_all = set(ids2_list + list(unic_ids)) print(len(ids_all)) def load_vacancies_extended(url, ids): vacancies_ext = [] count = 0 for i in ids: url_mod = url + "/" + str(i) r = requests.get(url_mod) vacancy = json.loads(r.text) vacancies_ext.append(vacancy) count += 1 if count % 500 == 0: print(count) return vacancies_ext %%time vacancies_ext = load_vacancies_extended(url_main, list(unic_ids)) print(len(vacancies_ext)) print(len(vacancies_ext2)) vacancies_ext = vacancies_ext2 + vacancies_ext print(len(vacancies_ext)) ``` ### Save newly loaded vacancies and old ones ``` with open("hh_ids.dat", "wb") as ouf: pickle.dump(list(ids_all), ouf) with open("hh_vacancies.dat", "wb") as ouf: pickle.dump(vacancies, ouf) with open("hh_vacancies_ext.dat", "wb") as ouf: pickle.dump(list(vacancies_ext), ouf) with open("hh_ids.json", "w") as ouf: json.dump(list(ids_all), ouf, ensure_ascii=False) with open("hh_vacancies.json", "w") as ouf: json.dump(vacancies, ouf, ensure_ascii=False) with open("hh_vacancies_ext.json", "w") as ouf: json.dump(vacancies_ext, ouf, ensure_ascii=False) ``` ### Check saved data ``` with open("hh_ids.dat", "rb") as inf: t = pickle.load(inf) print("uniq vacancies ids number =", len(t)) with open("hh_vacancies.dat", "rb") as inf: t = pickle.load(inf) print("num total loaded vac description =", len(t)) with open("hh_vacancies_ext.dat", "rb") as inf: t = pickle.load(inf) print("num loaded full vac =", len(t)) r = requests.get('https://drive.google.com/open?id=1-XZVxCYUdARwi8BZYOsV_M3uKPrXaVYf') print(r.status_code) print(r.text) ```
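As a final step (an addition, not in the original notebook), the collected vacancies can be flattened into a pandas DataFrame for further analysis. The field names used below (`id`, `name`, `area`, `published_at`) are assumptions about the HeadHunter API item schema and should be adjusted if the actual structure differs.

```
# Sketch: flatten a few common fields of the loaded vacancies into a DataFrame.
# The field names are assumptions about the HH API schema; .get() keeps missing keys from raising.
rows = []
for v in vacancies:
    rows.append({
        "id": v.get("id"),
        "name": v.get("name"),
        "area": (v.get("area") or {}).get("name"),
        "published_at": v.get("published_at"),
    })

vac_df = pd.DataFrame(rows)
print(vac_df.shape)
vac_df.head()
```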
``` import numpy as np import matplotlib import matplotlib.pyplot as plt import astropy import astropy.table as atpy from astropy import cosmology from astropy.cosmology import FlatLambdaCDM import astropy.units as u from astropy.table import Column import sherpa import sherpa.ui as ui import scipy import scipy.integrate import scipy.optimize as op import logging import time import emcee import corner #add in all needed modules for things here... %matplotlib inline #avoid sherpa suppression of traceback import sys sys.tracebacklimit = 100 # default parameters and unit conversion factors import defaultparams.params as params import defaultparams.uconv as uconv # functions to read data into format used by module from bmpmod.set_prof_data import set_ne, set_tspec, set_meta # functions to fit the gas density profile from bmpmod.fit_density import fitne, find_nemodeltype # functions to determine mass profile through backwards modelling from bmpmod.fit_massprof import fit_ml, fit_mcmc # functions to analyze the marginalized posterior distribution from bmpmod.posterior_mcmc import calc_posterior_mcmc, samples_results # plotting functions from bmpmod.plotting import plt_mcmc_freeparam, plt_summary, plt_summary_nice # functions specifically to generate mock data from Vikhlinin+ profiles from exampledata.vikhlinin_prof import vikhlinin_tprof, vikhlinin_neprof, gen_mock_data ``` # Goal: The primary goal of this example script is to showcase the tools available in the bmpmod package using mock data. The mock data is produced by randomly sampling the density and temperature profile models published in Vikhlinin+06 for a sample of clusters (Vikhlinin, A., et al. 2006, ApJ, 640, 691). A secondary goal of this example is thus to also explore how the backwards mass modeling process used in the bmpmod package compares to the forward fitting results of Vikhlinin+. The mock profiles generated here allow for a flexible choice in noise and radial sampling rate, which enables an exploration of how these quantities affect the output of the backwards-fitting process. There is also some flexibility built into the bmpmod package that can be additionally tested, such as allowing for the stellar mass of the central galaxy to be included (or not included) in the model of total gravitating mass. If the stellar mass profile of the BCG is toggled on, the values for the BCG effective radius Re are pulled from the 2MASS catalog values for a de Vaucouleurs fit to K-band data. After generating the mock temperature and density profiles, the script walks the user through performing the backwards-fitting mass modelling analysis which can be summarized as fitting the below $T_{\mathrm{model}}$ expression to the observed temperature profile by constraining the parameters in the total gravitating mass model $M_{\mathrm{tot}}$. $kT_{\mathrm{model}}(R) = \frac{kT(R_{\mathrm{ref}}) \ n_{e}(R_{\mathrm{ref}})}{n_{e}(R)} -\frac{\mu m_{p} G}{n_{e}(R)} \int_{R_{\mathrm{ref}}}^R \frac{n_{e}(r) M_{\mathrm{grav}}(r)}{r^2} dr$ The output of the bmpmod analysis includes a parametric model fit to the gas density profile, a non-parametric model fit to the temperature profile, the total mass profile and its associated parameters describing the profile (e.g., the NFW c, Rs), and the contributions of different mass components (i.e., DM, gas, stars) to the total mass profile. This tutorial will go over: 1. Generating mock gas density and temperature data 2. Fitting the gas density profile with a parametric model 3. 
Maximum likelihood mass profile parameter estimation 4. MCMC mass profile parameter estimation 5. Plotting and summarizing the results ### A note on usage: Any of the clusters in Vikhlinin+06 are options to be used to generate randomly sampled temperature and density profiles. The full list of clusters is as follows: Vikhlinin+ clusters: [A133, A262, A383, A478, A907, A1413, A1795, A1991, A2029, A2390, RXJ1159+5531, MKW4, USGCS152] After selecting one of these clusters, this example script will automatically generate the cluster and profile data in the proper format to be used by the bmpmod modules. If you have your own data you would like to analyze with the bmpmod package, please see the included template.py file. ``` #select any cluster ID from the Vikhlinin+ paper clusterID='A383' ``` # 1. Generate mock gas density and temperature profiles To generate the mock profiles, the density and temperature models defined in Tables 2 and 3 of Vikhlinin+06 are sampled. The sampling of the models occurs in equally log-spaced radial bins with the number of bins set by N_ne and N_temp in gen_mock_data(). At each radial point, the density and temperature values are randomly sampled from a Gaussian distribution centered on the model value and with standard deviation equal to noise_ne and noise_temp multiplied by the model value for density or temperature. Args for gen_mock_data(): N_ne: the number of gas density profile data points N_temp: the number of temperature profile data points noise_ne: the percent noise on the density values noise_temp: the percent noise on the temperature values refindex: index into profile where Tmodel = Tspec incl_mstar: include stellar mass of the central galaxy in the model for total gravitating mass incl_mgas: include gas mass of ICM in the model for total gravitating mass ``` clustermeta, ne_data, tspec_data, nemodel_vikhlinin, tmodel_vikhlinin \ = gen_mock_data(clusterID=clusterID, N_ne=30, N_temp=10, noise_ne=0.10, noise_temp=0.05, refindex=-1, incl_mstar=1, incl_mgas=1) ``` Now let's take a look at the returns... while these are generated automatically here, if you use your own data, things should be in a similar form. ``` # clustermeta: # dictionary that stores relevant properties of cluster # (i.e., name, redshift, bcg_re: the effective radius of the central galaxy in kpc, # bcg_sersc_n: the sersic index of the central galaxy) # as well as selections for analysis # (i.e., incl_mstar, incl_mgas, refindex as input previously) clustermeta #ne_data: dictionary that stores the mock "observed" gas density profile ne_data[:3] #tspec_data: dictionary that stores the mock "observed" temperature profile tspec_data[:3] ``` Let's take a look at how our mock profiles compare to the model we're sampling from ... 
``` fig1 = plt.figure(1, (12, 4)) ax = fig1.add_subplot(1, 2, 1) ''' mock gas density profile ''' # plot Vikhlinin+06 density model xplot = np.logspace(np.log10(min(ne_data['radius'])), np.log10(max(ne_data['radius'])), 1000) plt.loglog(xplot, vikhlinin_neprof(nemodel_vikhlinin, xplot), 'k') plt.xlim(xmin=min(ne_data['radius'])) # plot sampled density data plt.errorbar(ne_data['radius'], ne_data['ne'], xerr=[ne_data['radius_lowerbound'], ne_data['radius_upperbound']], yerr=ne_data['ne_err'], marker='o', markersize=2, linestyle='none', color='b') ax.set_xscale("log", nonposx='clip') ax.set_yscale("log", nonposy='clip') plt.xlabel('r [kpc]') plt.ylabel('$n_{e}$ [cm$^{-3}$]') ''' mock temperature profile ''' ax = fig1.add_subplot(1, 2, 2) # plot Vikhlinin+06 temperature model xplot = np.logspace(np.log10(min(tspec_data['radius'])), np.log10(max(tspec_data['radius'])), 1000) plt.semilogx(xplot, vikhlinin_tprof(tmodel_vikhlinin, xplot), 'k-') # plot sampled temperature data plt.errorbar(tspec_data['radius'], tspec_data['tspec'], xerr=[tspec_data['radius_lowerbound'], tspec_data['radius_upperbound']], yerr=[tspec_data['tspec_lowerbound'], tspec_data['tspec_upperbound']], marker='o', linestyle='none', color='b') plt.xlabel('r [kpc]') plt.ylabel('kT [keV]') ``` # 2. Fitting the gas density profile with a parametric model To determine the best-fitting gas density model, bmpmod has the option of fitting the four following $n_{e}$ models through the Levenberg-Marquardt optimization method. "single\_beta": $n_{e} = n_{e,0} \ (1+(r/r_{c})^{2})^{-\frac{3}{2}\beta}$ "cusped\_beta": $n_{e} = n_{e,0} \ (r/r_{c})^{-\alpha} \ (1+(r/r_{c})^{2})^{-\frac{3}{2}\beta+\frac{1}{2}\alpha}$ "double\_beta\_tied": $n_{e} = n_{e,1}(n_{e,0,1}, r_{c,1}, \beta)+n_{e,2}(n_{e,0,2}, r_{c,2}, \beta)$ "double\_beta": $n_{e} = n_{e,1}(n_{e,0,1}, r_{c,1}, \beta_1)+n_{e,2}(n_{e,0,2}, r_{c,2}, \beta_2)$ All four models can be fit and compared using the find_nemodeltype() function. A selected model must then be chosen for the following mass profile analysis with the fitne() function. ``` #suppress verbose log info from sherpa logger = logging.getLogger("sherpa") logger.setLevel(logging.ERROR) #fit all four ne models and return the model with the lowest reduced chi-squared as nemodeltype nemodeltype, fig=find_nemodeltype(ne_data=ne_data, tspec_data=tspec_data, optplt=1) print 'model with lowest reduced chi-squared:', nemodeltype ``` *Note*: while the function find_nemodeltype() returns the model type producing the lowest reduced chi-squared fit, it may be better to choose a simpler model with fewer free-parameters if the reduced chi-squared values are similar. ``` # Turn on logging for sherpa to see details of fit logger = logging.getLogger("sherpa") logger.setLevel(logging.INFO) # Find the parameters and errors of the selected gas density model nemodel=fitne(ne_data=ne_data,tspec_data=tspec_data,nemodeltype=str(nemodeltype)) #[cm^-3] #nemodel stores all the useful information from the fit to the gas density profile print nemodel.keys() ``` # 3. Maximum likelihood estimation of mass profile free-parameters The maximum likelihood method can be used to perform an initial estimation of the free-parameters in the cluster mass profile model.
The free parameters in the mass model, which will be returned in this estimation, are: - the mass concentration $c$ of the NFW profile used to model the DM halo, - the scale radius $R_s$ of the NFW profile - optionally, the log of the normalization of the Sersic model $\rho_{\star,0}$ used to model the stellar mass profile of the central galaxy The maximum likelihood estimation is performed using a Gaussian log-likelihood function of the form: $\ln(p) = -\frac{1}{2} \sum_{n} \left[\frac{(T_{\mathrm{spec},n} - T_{\mathrm{model},n})^{2}}{\sigma_{T_{\mathrm{spec},n}}^{2}} + \ln (2 \pi \sigma_{T_{\mathrm{spec},n}}^{2}) \right]$ ``` ml_results = fit_ml(ne_data, tspec_data, nemodel, clustermeta) ``` bmpmod uses these maximum likelihood results to initialize the walkers in the MCMC chain... # 4. MCMC estimation of mass profile model parameters Here the emcee python package is used to estimate the free-parameters of the mass model through the MCMC algorithm. bmpmod utilizes the ensemble sampler from emcee, and initializes the walkers in a narrow Gaussian distribution about the parameter values returned from the maximum likelihood analysis. Returns of fit_mcmc(): samples - the marginalized posterior distribution sampler - the sampler class output by emcee ``` #fit for the mass model and temperature profile model through MCMC samples, sampler = fit_mcmc(ne_data=ne_data, tspec_data=tspec_data, nemodel=nemodel, ml_results=ml_results, clustermeta=clustermeta, Ncores=3, Nwalkers=50, Nsteps=50, Nburnin=15) ``` *Note*: the burn-in length (Nburnin) should be longer than the autocorrelation time of the chain #### 4.1 analysis of the marginalized MCMC distribution We also want to calculate the radius of the cluster $R_{500}$ and the mass (total, DM, gas, stars) within this radius. These auxiliary calculations are taken care of by calc_posterior_mcmc(), which returns samples_aux for each step of the MCMC chain. ``` # calculate R500 and M(R500) for each step of MCMC chain samples_aux = calc_posterior_mcmc(samples=samples, nemodel=nemodel, clustermeta=clustermeta, Ncores=1) ``` From the marginalized MCMC distribution, we can calculate the free-parameter and auxiliary parameter (R500, M500) values as the median of the distribution with confidence intervals defined by the 16th and 84th percentiles. With samples_results() we combine all output parameter values and their upper and lower 1$\sigma$ error bounds. ``` # combine all MCMC results mcmc_results = samples_results(samples=samples, samples_aux=samples_aux, clustermeta=clustermeta) for key in mcmc_results.keys(): print 'MCMC: '+str(key)+' = '+str(mcmc_results[str(key)]) #Corner plot of marginalized posterior distribution of free params from MCMC fig1 = plt_mcmc_freeparam(mcmc_results=mcmc_results, samples=samples, sampler=sampler, tspec_data=tspec_data, clustermeta=clustermeta) ``` # 5. Summary plot ``` # Summary plot: density profile, temperature profile, mass profile fig2, ax1, ax2 = plt_summary(ne_data=ne_data, tspec_data=tspec_data, nemodel=nemodel, mcmc_results=mcmc_results, clustermeta=clustermeta) # add vikhlinin model to density plot xplot = np.logspace(np.log10(min(ne_data['radius'])), np.log10(max(ne_data['radius'])), 1000) ax1.plot(xplot, vikhlinin_neprof(nemodel_vikhlinin, xplot), 'k') #plt.xlim(xmin=min(ne_data['radius'])) # add vikhlinin model to temperature plot xplot = np.logspace(np.log10(min(tspec_data['radius'])), np.log10(max(tspec_data['radius'])), 1000) ax2.plot(xplot, vikhlinin_tprof(tmodel_vikhlinin, xplot), 'k-') ```
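To make the summary statistics above explicit, here is a minimal sketch (not the bmpmod implementation) of how the median and 16th/84th-percentile bounds can be pulled out of the marginalized distribution; it assumes `samples` is the (Nsamples, Nfreeparams) array returned by fit_mcmc().

```
import numpy as np

def summarize_posterior(samples):
    # median and +/-1 sigma bounds (84th/16th percentiles) for each column of `samples`
    lo, med, hi = np.percentile(samples, [16, 50, 84], axis=0)
    return [(m, h - m, m - l) for m, h, l in zip(med, hi, lo)]  # (value, +err, -err)

# usage sketch:
# for value, up, down in summarize_posterior(samples):
#     print value, '+', up, '-', down
```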
``` %load_ext autoreload %autoreload 2 %matplotlib inline import numpy as np import matplotlib.pyplot as plt from hottbox.core import Tensor, TensorCPD, TensorTKD ``` [Return to Table of Contents](./0_Table_of_contents.ipynb) # Efficient representation of multidimensional arrays A tensor of order $N$ is said to be of **rank-1** if it can be represented as an outer product of $N$ vectors. The figure below illustrates an example of a rank-1 tensor $\mathbf{\underline{X}}$ and provides intuition on how to compute the operation of outer product: <img src="./imgs/outerproduct.png" alt="Drawing" style="width: 500px;"/> # Kruskal representation For a third order tensor of rank $R$, the Kruskal representation can be expressed as follows: $$ \mathbf{\underline{X}} = \sum_{r=1}^R \mathbf{\underline{X}}_r = \sum_{r=1}^R \lambda_{r} \cdot \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r $$ The vectors $\mathbf{a}_r, \mathbf{b}_r$ and $\mathbf{c}_r$ are often combined into the corresponding **factor matrices**: $$ \mathbf{A} = \Big[ \mathbf{a}_1 \cdots \mathbf{a}_R \Big] \quad \mathbf{B} = \Big[ \mathbf{b}_1 \cdots \mathbf{b}_R \Big] \quad \mathbf{C} = \Big[ \mathbf{c}_1 \cdots \mathbf{c}_R \Big] \quad $$ Thus, if we employ the mode-$n$ product, the **Kruskal representation** takes the form: $$ \mathbf{\underline{X}} = \mathbf{\underline{\Lambda}} \times_1 \mathbf{A} \times_2 \mathbf{B} \times_3 \mathbf{C} = \Big[\mathbf{\underline{\Lambda}}; \mathbf{A}, \mathbf{B}, \mathbf{C} \Big] $$ where the elements on the super-diagonal of the core tensor $\mathbf{\underline{\Lambda}}$ are occupied by the values $\lambda_r$ and all other entries are equal to zero. This can be visualised as shown in the figure below: <img src="./imgs/TensorCPD.png" alt="Drawing" style="width: 500px;"/> ``` # Create factor matrices I, J, K = 3, 4, 5 R = 2 A = np.arange(I * R).reshape(I, R) B = np.arange(J * R).reshape(J, R) C = np.arange(K * R).reshape(K, R) # Create core values values = np.arange(R) # Create Kruskal representation tensor_cpd = TensorCPD(fmat=[A, B, C], core_values=values) # Result preview print(tensor_cpd) ``` ## **Assignment 1** 1. What is the order of a tensor if its Kruskal representation consists of 5 factor matrices? 2. What is the order of a tensor if its Kruskal representation consists of a core tensor which has only 5 elements on the super-diagonal? 3. For a 3-rd order tensor that consists of 500 elements, provide three different Kruskal representations. 4. For a tensor that consists of 1000 elements, provide three Kruskal representations, each of which should have a different number of factor matrices. 5. For a 4-th order tensor that consists of 2401 elements, provide a Kruskal representation whose core tensor consists of 81 elements. ### Solution: Part 1 ``` answer_1_1 = "The tensor is of order 5 if its Kruskal representation consists of 5 factor matrices" # use this variable for your answer print(answer_1_1) ``` ### Solution: Part 2 ``` answer_1_2 = "The order of a tensor is not related to the rank determined by the elements on the super diagonal.
Therefore it is not possible to infer the order of the tensor" # use this variable for your answer print(answer_1_2) ``` ### Solution: Part 3 ``` # First representation I, J, K = 5, 10, 10 # define shape of the tensor in full form R = 4 # define Kruskal rank of a tensor in CP form A = np.arange(I * R).reshape(I, R) B = np.arange(J * R).reshape(J, R) C = np.arange(K * R).reshape(K, R) values = np.arange(R) tensor_cpd = TensorCPD(fmat=[A, B, C], core_values=values) print(tensor_cpd) tensor_full = tensor_cpd.reconstruct() print(tensor_full) # Second representation I, J, K = 5, 10, 10 # define shape of the tensor in full form R = 3 # define Kruskal rank of a tensor in CP form A = np.arange(I * R).reshape(I, R) B = np.arange(J * R).reshape(J, R) C = np.arange(K * R).reshape(K, R) values = np.arange(R) tensor_cpd = TensorCPD(fmat=[A, B, C], core_values=values) print(tensor_cpd) tensor_full = tensor_cpd.reconstruct() print(tensor_full) # Third representation I, J, K = 5, 10, 10 # define shape of the tensor in full form R = 6 # define Kruskal rank of a tensor in CP form A = np.arange(I * R).reshape(I, R) B = np.arange(J * R).reshape(J, R) C = np.arange(K * R).reshape(K, R) values = np.arange(R) tensor_cpd = TensorCPD(fmat=[A, B, C], core_values=values) print(tensor_cpd) tensor_full = tensor_cpd.reconstruct() print(tensor_full) ``` ### Solution: Part 4 ``` # First representation I, J, K = 10, 10, 10 # define shape of the tensor in full form R = 4 # define Kruskal rank of a tensor in CP form A = np.arange(I * R).reshape(I, R) B = np.arange(J * R).reshape(J, R) C = np.arange(K * R).reshape(K, R) values = np.arange(R) tensor_cpd = TensorCPD(fmat=[A, B, C], core_values=values) print(tensor_cpd) tensor_full = tensor_cpd.reconstruct() print(tensor_full) # Second representation I, J, K, T = 10, 10, 5, 2 # define shape of the tensor in full form R = 4 # define Kruskal rank of a tensor in CP form A = np.arange(I * R).reshape(I, R) B = np.arange(J * R).reshape(J, R) C = np.arange(K * R).reshape(K, R) D = np.arange(T * R).reshape(T, R) values = np.arange(R) tensor_cpd = TensorCPD(fmat=[A, B, C, D], core_values=values) print(tensor_cpd) tensor_full = tensor_cpd.reconstruct() print(tensor_full) # Third representation I, J, K, T, H = 10, 5, 5, 2, 2 # define shape of the tensor in full form R = 4 # define Kruskal rank of a tensor in CP form A = np.arange(I * R).reshape(I, R) B = np.arange(J * R).reshape(J, R) C = np.arange(K * R).reshape(K, R) D = np.arange(T * R).reshape(T, R) E = np.arange(H * R).reshape(H, R) values = np.arange(R) tensor_cpd = TensorCPD(fmat=[A, B, C, D, E], core_values=values) print(tensor_cpd) tensor_full = tensor_cpd.reconstruct() print(tensor_full) ``` ### Solution: Part 5 ``` # Provide Kruskal representation here I, J, K, T = 7, 7, 7, 7 # define shape of the tensor in full form R = 3 # define Kruskal rank of a tensor in CP form A = np.arange(I * R).reshape(I, R) B = np.arange(J * R).reshape(J, R) C = np.arange(K * R).reshape(K, R) D = np.arange(T * R).reshape(T, R) values = np.arange(R) tensor_cpd = TensorCPD(fmat=[A, B, C, D], core_values=values) print(tensor_cpd) tensor_full = tensor_cpd.reconstruct() print(tensor_full) print('\n\tCore tensor') print(tensor_cpd.core) tensor_cpd.core.data ``` # Tucker representation <img src="./imgs/TensorTKD.png" alt="Drawing" style="width: 600px;"/> For a tensor $\mathbf{\underline{X}} \in \mathbb{R}^{I \times J \times K}$ illustrated above, the **Tucker form** represents the tensor in hand through a dense core tensor $\mathbf{\underline{G}}$ with
multi-linear rank ($Q, R, P$) and a set of accompanying factor matrices $\mathbf{A} \in \mathbb{R}^{I \times Q}, \mathbf{B} \in \mathbb{R}^{J \times R}$ and $\mathbf{C} \in \mathbb{R}^{K \times P}$. $$ \mathbf{\underline{X}} = \sum_{q=1}^Q \sum_{r=1}^R \sum_{p=1}^P \mathbf{\underline{X}}_{qrp} = \sum_{q=1}^Q \sum_{r=1}^R \sum_{p=1}^P g_{qrp} \cdot \mathbf{a}_q \circ \mathbf{b}_r \circ \mathbf{c}_p $$ The Tucker form of a tensor is closely related to the Kruskal representation and can be expressed through a sequence of mode-$n$ products in a similar way, that is $$ \mathbf{\underline{X}} = \mathbf{\underline{G}} \times_1 \mathbf{A} \times_2 \mathbf{B} \times_3 \mathbf{C} = \Big[\mathbf{\underline{G}}; \mathbf{A}, \mathbf{B}, \mathbf{C} \Big] $$ ``` # Create factor matrices I, J, K = 5, 6, 7 # define shape of the tensor in full form Q, R, P = 2, 3, 4 # define multi-linear rank of the tensor in Tucker form A = np.arange(I * Q).reshape(I, Q) B = np.arange(J * R).reshape(J, R) C = np.arange(K * P).reshape(K, P) # Create core values values = np.arange(Q * R * P).reshape(Q, R, P) # Create Tucker representation tensor_tkd = TensorTKD(fmat=[A, B, C], core_values=values) # Result preview print(tensor_tkd) print('\n\tCore tensor') print(tensor_tkd.core) tensor_tkd.core.data ``` ## **Assignment 2** 1. The core tensor of a Tucker representation consists of 1848 elements. Explain what order a tensor should have to be able to be represented in such a form. 2. For a 4-th order tensor that consists of 1000 elements, provide three different Tucker representations. 3. For a 3-rd order tensor that consists of 500 elements, provide three different Tucker representations given that its core tensor consists of 42 elements. 4. Provide an intuition behind the main difference between the Tucker and Kruskal representations. ### Solution: Part 1 ``` answer_2_1 = "The tensor order does not affect the number of elements in the core tensor. The product of all the multi-linear ranks of the tensor should be equal to 1848. Any combination is then possible."
# use this variable for your answer print(answer_2_1) ``` ### Solution: Part 2 ``` # First representation I, J, K, L = 10, 10, 5, 2 # define shape of the tensor in full form Q, R, P, S = 2, 3, 4, 5 # define multi-linear rank of the tensor in Tucker form A = np.arange(I * Q).reshape(I, Q) B = np.arange(J * R).reshape(J, R) C = np.arange(K * P).reshape(K, P) D = np.arange(L * S).reshape(L, S) values = np.arange(Q * R * P * S).reshape(Q, R, P, S) tensor_tkd = TensorTKD(fmat=[A, B, C, D], core_values=values) print(tensor_tkd) tensor_full = tensor_tkd.reconstruct() print(tensor_full) # Second representation I, J, K, L = 10, 10, 5, 2 # define shape of the tensor in full form Q, R, P, S = 3, 2, 4, 5 # define multi-linear rank of the tensor in Tucker form A = np.arange(I * Q).reshape(I, Q) B = np.arange(J * R).reshape(J, R) C = np.arange(K * P).reshape(K, P) D = np.arange(L * S).reshape(L, S) values = np.arange(Q * R * P * S).reshape(Q, R, P, S) tensor_tkd = TensorTKD(fmat=[A, B, C, D], core_values=values) print(tensor_tkd) tensor_full = tensor_tkd.reconstruct() print(tensor_full) # Third representation I, J, K, L = 10, 10, 5, 2 # define shape of the tensor in full form Q, R, P, S = 3, 1, 7, 2 # define multi-linear rank of the tensor in Tucker form A = np.arange(I * Q).reshape(I, Q) B = np.arange(J * R).reshape(J, R) C = np.arange(K * P).reshape(K, P) D = np.arange(L * S).reshape(L, S) values = np.arange(Q * R * P * S).reshape(Q, R, P, S) tensor_tkd = TensorTKD(fmat=[A, B, C, D], core_values=values) print(tensor_tkd) tensor_full = tensor_tkd.reconstruct() print(tensor_full) ``` ### Solution: Part 3 ``` # First representation I, J, K = 5, 6, 7 # define shape of the tensor in full form Q, R, P = 2, 3, 7 # define multi-linear rank of the tensor in Tucker form A = np.arange(I * Q).reshape(I, Q) B = np.arange(J * R).reshape(J, R) C = np.arange(K * P).reshape(K, P) # Create core values values = np.arange(Q * R * P).reshape(Q, R, P) # Create Tucker representation tensor_tkd = TensorTKD(fmat=[A, B, C], core_values=values) # Result preview print(tensor_tkd) print('\n\tCore tensor') print(tensor_tkd.core) tensor_tkd.core.data # Second representation I, J, K = 5, 6, 7 # define shape of the tensor in full form Q, R, P = 21, 1, 2 # define multi-linear rank of the tensor in Tucker form A = np.arange(I * Q).reshape(I, Q) B = np.arange(J * R).reshape(J, R) C = np.arange(K * P).reshape(K, P) # Create core values values = np.arange(Q * R * P).reshape(Q, R, P) # Create Tucker representation tensor_tkd = TensorTKD(fmat=[A, B, C], core_values=values) # Result preview print(tensor_tkd) print('\n\tCore tensor') print(tensor_tkd.core) tensor_tkd.core.data # Third representation I, J, K = 5, 6, 7 # define shape of the tensor in full form Q, R, P = 3, 7, 2 # define multi-linear rank of the tensor in Tucker form A = np.arange(I * Q).reshape(I, Q) B = np.arange(J * R).reshape(J, R) C = np.arange(K * P).reshape(K, P) # Create core values values = np.arange(Q * R * P).reshape(Q, R, P) # Create Tucker representation tensor_tkd = TensorTKD(fmat=[A, B, C], core_values=values) # Result preview print(tensor_tkd) print('\n\tCore tensor') print(tensor_tkd.core) tensor_tkd.core.data ``` ### Solution: Part 4 ``` answer_2_4 = "The main difference between Kruskal and Tucker decomposition is the presence of the core tensor with Tucker having multi-linear ranks. This allows for column vectors of mode matrices to interact with each other for reconstruction" # use this variable for your answer print(answer_2_4) ```
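To make the storage savings behind these representations concrete, here is a small bookkeeping sketch (plain NumPy, no hottbox calls; the shapes and ranks are the ones used in the examples above) that counts how many values the full, Kruskal, and Tucker forms store:

```
import numpy as np

def full_size(shape):
    # number of values stored by the tensor in full form
    return int(np.prod(shape))

def kruskal_size(shape, R):
    # factor matrices (I_n x R for every mode) + R values on the core super-diagonal
    return sum(I * R for I in shape) + R

def tucker_size(shape, ranks):
    # factor matrices (I_n x R_n) + a dense core with prod(R_n) values
    return sum(I * R for I, R in zip(shape, ranks)) + int(np.prod(ranks))

print(full_size((3, 4, 5)), kruskal_size((3, 4, 5), R=2))       # 60 vs 26
print(full_size((5, 6, 7)), tucker_size((5, 6, 7), (2, 3, 4)))  # 210 vs 80
```

The counts follow directly from the definitions: the Kruskal form stores only the factor matrices plus the super-diagonal values, while the Tucker form pays for a dense core in exchange for the extra flexibility of the multi-linear ranks.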
``` #export """ This module is for nice visualization tools. This is exposed automatically with:: from k1lib.imports import * viz.mask # exposed """ import k1lib, base64, io, torch, os, matplotlib as mpl import matplotlib.pyplot as plt, numpy as np from typing import Callable, List, Union from functools import partial, update_wrapper __all__ = ["SliceablePlot", "plotSegments", "Carousel", "confusionMatrix", "FAnim", "mask"] #export class _PlotDecorator: """The idea with decorators is that you can do something like this:: sp = k1lib.viz.SliceablePlot() sp.yscale("log") # will format every plot as if ``plt.yscale("log")`` has been called This class is not expected to be used by end users though.""" def __init__(self, sliceablePlot:"SliceablePlot", name:str): """ :param sliceablePlot: the parent plot :param name: the decorator's name, like "yscale" """ self.sliceablePlot = sliceablePlot self.name = name; self.args, self.kwargs = None, None def __call__(self, *args, **kwargs): """Stores all args, then return the parent :class:`SliceablePlot`""" self.args = args; self.kwargs = kwargs; return self.sliceablePlot def run(self): getattr(plt, self.name)(*self.args, **self.kwargs) #export class SliceablePlot: """This is a plot that is "sliceable", meaning you can focus into a particular region of the plot quickly. A minimal example looks something like this:: import numpy as np, matplotlib.pyplot as plt, k1lib x = np.linspace(-2, 2, 100) def normalF(): plt.plot(x, x**2) @k1lib.viz.SliceablePlot.decorate def plotF(_slice): plt.plot(x[_slice], (x**2)[_slice]) plotF()[70:] # plots x^2 equation with x in [0.8, 2] So, ``normalF`` plots the equation :math:`x^2` with x going from -2 to 2. You can convert this into a :class:`SliceablePlot` by adding a term of type :class:`slice` to the args, and decorate with :meth:`decorate`. Now, every time you slice the :class:`SliceablePlot` with a specific range, ``plotF`` will receive it. How intuitive everything is depends on how you slice your data. ``[70:]`` results in x in [0.8, 2] is rather unintuitive. You can change it into something like this:: @k1lib.viz.SliceablePlot.decorate def niceF(_slice): n = 100; r = k1lib.Range(-2, 2) x = np.linspace(*r, n) _slice = r.toRange(k1lib.Range(n), r.bound(_slice)).slice_ plt.plot(x[_slice], (x**2)[_slice]) # this works without a decorator too btw: k1lib.viz.SliceablePlot(niceF) niceF()[0.3:0.7] # plots x^2 equation with x in [0.3, 0.7] niceF()[0.3:] # plots x^2 equation with x in [0.3, 2] The idea is to just take the input :class:`slice`, put some bounds on its parts, then convert that slice from [-2, 2] to [0, 100]. Check out :class:`k1lib.Range` if it's not obvious how this works. A really cool feature of :class:`SliceablePlot` looks like this:: niceF().legend(["A"])[-1:].grid(True).yscale("log") This will plot :math:`x^2` with range in [-1, 2] with a nice grid, and with y axis's scale set to log. Essentially, undefined method calls on a :class:`SliceablePlot` will translate into ``plt`` calls. So the above is roughly equivalent to this:: x = np.linspace(-2, 2, 100) plt.plot(x, x**2) plt.legend(["A"]) plt.grid(True) plt.yscale("log") .. image:: images/SliceablePlot.png This works even if you have multiple axes inside your figure. It's wonderful, isn't it?""" def __init__(self, plotF:Callable[[slice], None], slices:Union[slice, List[slice]]=slice(None), plotDecorators:List[_PlotDecorator]=[], docs=""): """Creates a new SliceablePlot. 
Only use params listed below: :param plotF: function that takes in a :class:`slice` or tuple of :class:`slice`s :param docs: optional docs for the function that will be displayed in :meth:`__repr__`""" self.plotF = plotF self.slices = [slices] if isinstance(slices, slice) else slices self.docs = docs; self.plotDecorators = list(plotDecorators) @staticmethod def decorate(f): """Decorates a plotting function so that it becomes a SliceablePlot.""" answer = partial(SliceablePlot, plotF=f) update_wrapper(answer, f) return answer @property def squeezedSlices(self) -> Union[List[slice], slice]: """If :attr:`slices` only has 1 element, then return that element, else return the entire list.""" return k1lib.squeeze(self.slices) def __getattr__(self, attr): if attr.startswith("_"): raise AttributeError() # automatically assume the attribute is a plt.attr method dec = _PlotDecorator(self, attr) self.plotDecorators.append(dec); return dec def __getitem__(self, idx): if type(idx) == slice: return SliceablePlot(self.plotF, [idx], self.plotDecorators, self.docs) if type(idx) == tuple and all([isinstance(elem, slice) for elem in idx]): return SliceablePlot(self.plotF, idx, self.plotDecorators, self.docs) raise Exception(f"Don't understand {idx}") def __repr__(self): self.plotF(self.squeezedSlices) for ax in plt.gcf().get_axes(): plt.sca(ax) for decorator in self.plotDecorators: decorator.run() plt.show() return f"""Sliceable plot. Can... - p[a:b]: to focus on a specific range of the plot - p.yscale("log"): to perform operation as if you're using plt{self.docs}""" @SliceablePlot.decorate def plotF(_slice): n = 100; r = k1lib.Range(-2, 2) x = np.linspace(*r, n) _slice = r.toRange(k1lib.Range(n), r.bound(_slice)).slice_ plt.plot(x[_slice], (x**2)[_slice]) plotF()[-1:].grid(True).yscale("log") #export def plotSegments(x:List[float], y:List[float], states:List[int], colors:List[str]=None): """Plots a line graph, with multiple segments with different colors. Idea is, you have a normal line graph, but you want to color parts of the graph red, other parts blue. Then, you can pass a "state" array, with the same length as your data, filled with ints, like this:: y = np.array([ 460800, 921600, 921600, 1445888, 1970176, 1970176, 2301952, 2633728, 2633728, 3043328, 3452928, 3452928, 3457024, 3461120, 3463680, 3463680, 3470336, 3470336, 3467776, 3869184, 3865088, 3865088, 3046400, 2972672, 2972672, 2309632, 2504192, 2504192, 1456128, 1393664, 1393664, 472576]) s = np.array([1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1]) plotSegments(None, y, s, colors=["tab:blue", "tab:red"]) .. 
image:: images/plotSegments.png :param x: (nullable) list of x coordinate at each point :param y: list of y coordinates at each point :param states: list of color at each point :param colors: string colors (matplotlib color strings) to display for each states""" if x is None: x = range(len(y)) if colors is None: colors = ["tab:blue", "tab:red", "tab:green", "tab:orange", "tab:purple", "tab:brown"][:len(x)] _x = []; _y = []; state = -1; count = -1 # stretchs, and bookkeeping nums lx = None; ly = None # last x and y from last stretch, for plot autocompletion while count + 1 < len(x): count += 1 if state != states[count]: if len(_x) > 0 and state >= 0: if lx != None: _x = [lx] + _x; _y = [ly] + _y plt.plot(_x, _y, colors[state]); lx = _x[-1]; ly = _y[-1] _x = [x[count]]; _y = [y[count]]; state = states[count] else: _x.append(x[count]); _y.append(y[count]) if len(_x) > 0 and state >= 0: if lx != None: _x = [lx] + _x; _y = [ly] + _y plt.plot(_x, _y, colors[state]) y = np.array([ 460800, 921600, 921600, 1445888, 1970176, 1970176, 2301952, 2633728, 2633728, 3043328, 3452928, 3452928, 3457024, 3461120, 3463680, 3463680, 3470336, 3470336, 3467776, 3869184, 3865088, 3865088, 3046400, 2972672, 2972672, 2309632, 2504192, 2504192, 1456128, 1393664, 1393664, 472576]) s = np.array([1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1]) plotSegments(None, y, s, colors=["tab:blue", "tab:red"]) x = np.linspace(-3, 3, 1000) states = np.zeros(1000, dtype=np.int); states[:300] = 1 colors = ["tab:blue", "tab:red"] plotSegments(x, x**2, states) #plt.show() #export class Carousel: _idx = k1lib.AutoIncrement.random() def __init__(self): """Creates a new Carousel. You can then add images and whatnot. Will even work even when you export the notebook as html. Example:: import numpy as np, matplotlib.pyplot as plt, k1lib c = k1lib.viz.Carousel() x = np.linspace(-2, 2); plt.plot(x, x ** 2); c.savePlt() x = np.linspace(-1, 3); plt.plot(x, x ** 2); c.savePlt() c # displays in notebook cell .. image:: images/carousel.png """ self.imgs:List[Tuple[str, str]] = [] # Tuple[format, base64 img] self.defaultFormat = "jpeg" def saveBytes(self, _bytes:bytes, fmt:str=None): """Saves bytes as another image. :param fmt: format of image""" self.imgs.append((fmt or self.defaultFormat, base64.b64encode(_bytes).decode())) def save(self, f:Callable[[io.BytesIO], None]): """Generic image save function. Treat :class:`io.BytesIO` as if it's a file when you're doing this:: with open("file.txt") as f: pass # "f" is similar to io.BytesIO So, you can do stuff like:: import matplotlib.pyplot as plt, numpy as np x = np.linspace(-2, 2) plt.plot(x, x**2) c = k1lib.viz.Carousel() c.save(lambda io: plt.savefig(io, format="png")) :param f: lambda that provides a :class:`io.BytesIO` for you to write to """ byteArr = io.BytesIO(); f(byteArr); byteArr.seek(0) self.saveBytes(byteArr.read()) def savePlt(self): """Saves current plot from matplotlib""" self.save(lambda byteArr: plt.savefig(byteArr, format=self.defaultFormat)) plt.clf() def savePIL(self, image): """Saves a PIL image""" self.save(lambda byteArr: image.save(byteArr, format=self.defaultFormat)) def saveFile(self, fileName:str, fmt:str=None): """Saves image from file. :param fmt: format of the file. Will figure out from file extension automatically if left empty """ with open(fileName, "rb") as f: if fmt is None: # automatically infer image format baseName = os.path.basename(fileName) if "." 
in baseName: fmt = baseName.split(".")[-1] self.saveBytes(f.read(), fmt) def saveGraphviz(self, g): """Saves a graphviz graph""" import tempfile; a = tempfile.NamedTemporaryFile() g.render(a.name, format="jpeg"); self.saveFile(f"{a.name}.jpeg") def pop(self): """Pops last image""" return self.imgs.pop() def __getitem__(self, idx): return self.imgs[idx] def _repr_html_(self): imgs = [f"\"<img src='data:image/{fmt};base64, {img}' />\"" for fmt, img in self.imgs] idx = Carousel._idx.value pre = f"k1c_{idx}" html = f""" <style> .{pre}_btn {{ cursor: pointer; padding: 10px 15px; background: #9e9e9e; float: left; margin-right: 5px; color: #000; user-select: none }} .{pre}_btn:hover {{ background: #4caf50; color: #fff; }} </style> <div> <div id="{pre}_prevBtn" class="{pre}_btn">Prev</div> <div id="{pre}_nextBtn" class="{pre}_btn">Next</div> <div style="clear:both"/> <div id="{pre}_status" style="padding: 10px"></div> </div> <div id="{pre}_imgContainer"></div> <script> {pre}_imgs = [{','.join(imgs)}]; {pre}_imgIdx = 0; function {pre}_display() {{ document.querySelector("#{pre}_imgContainer").innerHTML = {pre}_imgs[{pre}_imgIdx]; document.querySelector("#{pre}_status").innerHTML = "Image: " + ({pre}_imgIdx + 1) + "/" + {pre}_imgs.length; }}; document.querySelector("#{pre}_prevBtn").onclick = () => {{ {pre}_imgIdx -= 1; {pre}_imgIdx = Math.max({pre}_imgIdx, 0); {pre}_display(); }}; document.querySelector("#{pre}_nextBtn").onclick = () => {{ {pre}_imgIdx += 1; {pre}_imgIdx = Math.min({pre}_imgIdx, {pre}_imgs.length - 1); {pre}_display(); }}; {pre}_display(); </script> """ return html c = Carousel() x = np.linspace(-2, 2); plt.plot(x, x ** 2); c.savePlt() x = np.linspace(-1, 3); plt.plot(x, x ** 2); c.savePlt(); c #export def confusionMatrix(matrix:torch.Tensor, categories:List[str]=None, **kwargs): """Plots a confusion matrix. Example:: k1lib.viz.confusionMatrix(torch.rand(5, 5), ["a", "b", "c", "d", "e"]) .. image:: images/confusionMatrix.png :param matrix: 2d matrix of shape (n, n) :param categories: list of string categories :param kwargs: keyword args passed into :meth:`plt.figure`""" if isinstance(matrix, torch.Tensor): matrix = matrix.numpy() if categories is None: categories = [f"{e}" for e in range(len(matrix))] fig = plt.figure(**{"dpi":100, **kwargs}); ax = fig.add_subplot(111) cax = ax.matshow(matrix); fig.colorbar(cax) with k1lib.ignoreWarnings(): ax.set_xticklabels([''] + categories, rotation=90) ax.set_yticklabels([''] + categories) # Force label at every tick ax.xaxis.set_major_locator(mpl.ticker.MultipleLocator(1)) ax.yaxis.set_major_locator(mpl.ticker.MultipleLocator(1)) ax.xaxis.set_label_position('top') plt.xlabel("Predictions"); plt.ylabel("Ground truth") confusionMatrix(torch.rand(5, 5), ["a", "b", "c", "d", "e"]) c = Carousel() confusionMatrix(torch.rand(5, 5), ["a", "b", "c", "d", "e"]) c.savePlt(); assert len(c[0][1]) > 10000; c #export def FAnim(fig, f, frames, *args, **kwargs): """Matplotlib function animation, 60fps. Example:: # line below so that the animation is displayed in the notebook. Included in :mod:`k1lib.imports` already, so you don't really have to do this! 
plt.rcParams["animation.html"] = "jshtml" x = np.linspace(-2, 2); y = x**2 fig, ax = plt.subplots() plt.close() # close cause it'll display 1 animation, 1 static if we don't do this def f(frame): ax.clear() ax.set_ylim(0, 4); ax.set_xlim(-2, 2) ax.plot(x[:frame], y[:frame]) k1lib.FAnim(fig, f, len(x)) # plays animation in cell :param fig: figure object from `plt.figure(...)` command :param f: function that accepts 1 frame from `frames`. :param frames: number of frames, or iterator, to pass into function""" return partial(mpl.animation.FuncAnimation, interval=1000/30)(fig, f, frames, *args, **kwargs) plt.rcParams["animation.html"] = "jshtml" x = np.linspace(-2, 2); y = x**2 fig, ax = plt.subplots() plt.close() # close cause it'll display 1 animation, 1 static if we don't do this def f(frame): ax.clear() ax.set_ylim(0, 4); ax.set_xlim(-2, 2) ax.plot(x[:frame], y[:frame]) FAnim(fig, f, len(x)); # plays animation in cell #export from torch import nn from k1lib.cli import op def mask(img:torch.Tensor, act:torch.Tensor) -> torch.Tensor: """Shows which part of the image the network is focusing on. :param img: the image, expected to have dimension of (3, h, w) :param act: the activation, expected to have dimension of (x, y), and with elements from 0 to 1.""" *_, h, w = img.shape mask = act[None,] | nn.AdaptiveAvgPool2d([h//16, w//16]) | nn.AdaptiveAvgPool2d([h//8, w//8]) | nn.AdaptiveAvgPool2d([h, w]) return mask * img | op().permute(1, 2, 0) !../export.py viz ```
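As a quick usage sketch for the `mask` helper defined above (the tensors here are random placeholders rather than the output of a real network, so this only illustrates the expected shapes and value ranges):

```
# hypothetical inputs: an RGB image and a coarse activation map, both with values in [0, 1]
img = torch.rand(3, 224, 224)
act = torch.rand(7, 7)
highlighted = mask(img, act)   # shape (224, 224, 3), ready for plt.imshow
plt.imshow(highlighted); plt.axis("off"); plt.show()
```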
github_jupyter
#export """ This module is for nice visualization tools. This is exposed automatically with:: from k1lib.imports import * viz.mask # exposed """ import k1lib, base64, io, torch, os, matplotlib as mpl import matplotlib.pyplot as plt, numpy as np from typing import Callable, List, Union from functools import partial, update_wrapper __all__ = ["SliceablePlot", "plotSegments", "Carousel", "confusionMatrix", "FAnim", "mask"] #export class _PlotDecorator: """The idea with decorators is that you can do something like this:: sp = k1lib.viz.SliceablePlot() sp.yscale("log") # will format every plot as if ``plt.yscale("log")`` has been called This class is not expected to be used by end users though.""" def __init__(self, sliceablePlot:"SliceablePlot", name:str): """ :param sliceablePlot: the parent plot :param name: the decorator's name, like "yscale" """ self.sliceablePlot = sliceablePlot self.name = name; self.args, self.kwargs = None, None def __call__(self, *args, **kwargs): """Stores all args, then return the parent :class:`SliceablePlot`""" self.args = args; self.kwargs = kwargs; return self.sliceablePlot def run(self): getattr(plt, self.name)(*self.args, **self.kwargs) #export class SliceablePlot: """This is a plot that is "sliceable", meaning you can focus into a particular region of the plot quickly. A minimal example looks something like this:: import numpy as np, matplotlib.pyplot as plt, k1lib x = np.linspace(-2, 2, 100) def normalF(): plt.plot(x, x**2) @k1lib.viz.SliceablePlot.decorate def plotF(_slice): plt.plot(x[_slice], (x**2)[_slice]) plotF()[70:] # plots x^2 equation with x in [0.8, 2] So, ``normalF`` plots the equation :math:`x^2` with x going from -2 to 2. You can convert this into a :class:`SliceablePlot` by adding a term of type :class:`slice` to the args, and decorate with :meth:`decorate`. Now, every time you slice the :class:`SliceablePlot` with a specific range, ``plotF`` will receive it. How intuitive everything is depends on how you slice your data. ``[70:]`` results in x in [0.8, 2] is rather unintuitive. You can change it into something like this:: @k1lib.viz.SliceablePlot.decorate def niceF(_slice): n = 100; r = k1lib.Range(-2, 2) x = np.linspace(*r, n) _slice = r.toRange(k1lib.Range(n), r.bound(_slice)).slice_ plt.plot(x[_slice], (x**2)[_slice]) # this works without a decorator too btw: k1lib.viz.SliceablePlot(niceF) niceF()[0.3:0.7] # plots x^2 equation with x in [0.3, 0.7] niceF()[0.3:] # plots x^2 equation with x in [0.3, 2] The idea is to just take the input :class:`slice`, put some bounds on its parts, then convert that slice from [-2, 2] to [0, 100]. Check out :class:`k1lib.Range` if it's not obvious how this works. A really cool feature of :class:`SliceablePlot` looks like this:: niceF().legend(["A"])[-1:].grid(True).yscale("log") This will plot :math:`x^2` with range in [-1, 2] with a nice grid, and with y axis's scale set to log. Essentially, undefined method calls on a :class:`SliceablePlot` will translate into ``plt`` calls. So the above is roughly equivalent to this:: x = np.linspace(-2, 2, 100) plt.plot(x, x**2) plt.legend(["A"]) plt.grid(True) plt.yscale("log") .. image:: images/SliceablePlot.png This works even if you have multiple axes inside your figure. It's wonderful, isn't it?""" def __init__(self, plotF:Callable[[slice], None], slices:Union[slice, List[slice]]=slice(None), plotDecorators:List[_PlotDecorator]=[], docs=""): """Creates a new SliceablePlot. 
Only use params listed below: :param plotF: function that takes in a :class:`slice` or tuple of :class:`slice`s :param docs: optional docs for the function that will be displayed in :meth:`__repr__`""" self.plotF = plotF self.slices = [slices] if isinstance(slices, slice) else slices self.docs = docs; self.plotDecorators = list(plotDecorators) @staticmethod def decorate(f): """Decorates a plotting function so that it becomes a SliceablePlot.""" answer = partial(SliceablePlot, plotF=f) update_wrapper(answer, f) return answer @property def squeezedSlices(self) -> Union[List[slice], slice]: """If :attr:`slices` only has 1 element, then return that element, else return the entire list.""" return k1lib.squeeze(self.slices) def __getattr__(self, attr): if attr.startswith("_"): raise AttributeError() # automatically assume the attribute is a plt.attr method dec = _PlotDecorator(self, attr) self.plotDecorators.append(dec); return dec def __getitem__(self, idx): if type(idx) == slice: return SliceablePlot(self.plotF, [idx], self.plotDecorators, self.docs) if type(idx) == tuple and all([isinstance(elem, slice) for elem in idx]): return SliceablePlot(self.plotF, idx, self.plotDecorators, self.docs) raise Exception(f"Don't understand {idx}") def __repr__(self): self.plotF(self.squeezedSlices) for ax in plt.gcf().get_axes(): plt.sca(ax) for decorator in self.plotDecorators: decorator.run() plt.show() return f"""Sliceable plot. Can... - p[a:b]: to focus on a specific range of the plot - p.yscale("log"): to perform operation as if you're using plt{self.docs}""" @SliceablePlot.decorate def plotF(_slice): n = 100; r = k1lib.Range(-2, 2) x = np.linspace(*r, n) _slice = r.toRange(k1lib.Range(n), r.bound(_slice)).slice_ plt.plot(x[_slice], (x**2)[_slice]) plotF()[-1:].grid(True).yscale("log") #export def plotSegments(x:List[float], y:List[float], states:List[int], colors:List[str]=None): """Plots a line graph, with multiple segments with different colors. Idea is, you have a normal line graph, but you want to color parts of the graph red, other parts blue. Then, you can pass a "state" array, with the same length as your data, filled with ints, like this:: y = np.array([ 460800, 921600, 921600, 1445888, 1970176, 1970176, 2301952, 2633728, 2633728, 3043328, 3452928, 3452928, 3457024, 3461120, 3463680, 3463680, 3470336, 3470336, 3467776, 3869184, 3865088, 3865088, 3046400, 2972672, 2972672, 2309632, 2504192, 2504192, 1456128, 1393664, 1393664, 472576]) s = np.array([1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1]) plotSegments(None, y, s, colors=["tab:blue", "tab:red"]) .. 
image:: images/plotSegments.png :param x: (nullable) list of x coordinate at each point :param y: list of y coordinates at each point :param states: list of color at each point :param colors: string colors (matplotlib color strings) to display for each states""" if x is None: x = range(len(y)) if colors is None: colors = ["tab:blue", "tab:red", "tab:green", "tab:orange", "tab:purple", "tab:brown"][:len(x)] _x = []; _y = []; state = -1; count = -1 # stretchs, and bookkeeping nums lx = None; ly = None # last x and y from last stretch, for plot autocompletion while count + 1 < len(x): count += 1 if state != states[count]: if len(_x) > 0 and state >= 0: if lx != None: _x = [lx] + _x; _y = [ly] + _y plt.plot(_x, _y, colors[state]); lx = _x[-1]; ly = _y[-1] _x = [x[count]]; _y = [y[count]]; state = states[count] else: _x.append(x[count]); _y.append(y[count]) if len(_x) > 0 and state >= 0: if lx != None: _x = [lx] + _x; _y = [ly] + _y plt.plot(_x, _y, colors[state]) y = np.array([ 460800, 921600, 921600, 1445888, 1970176, 1970176, 2301952, 2633728, 2633728, 3043328, 3452928, 3452928, 3457024, 3461120, 3463680, 3463680, 3470336, 3470336, 3467776, 3869184, 3865088, 3865088, 3046400, 2972672, 2972672, 2309632, 2504192, 2504192, 1456128, 1393664, 1393664, 472576]) s = np.array([1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1]) plotSegments(None, y, s, colors=["tab:blue", "tab:red"]) x = np.linspace(-3, 3, 1000) states = np.zeros(1000, dtype=np.int); states[:300] = 1 colors = ["tab:blue", "tab:red"] plotSegments(x, x**2, states) #plt.show() #export class Carousel: _idx = k1lib.AutoIncrement.random() def __init__(self): """Creates a new Carousel. You can then add images and whatnot. Will even work even when you export the notebook as html. Example:: import numpy as np, matplotlib.pyplot as plt, k1lib c = k1lib.viz.Carousel() x = np.linspace(-2, 2); plt.plot(x, x ** 2); c.savePlt() x = np.linspace(-1, 3); plt.plot(x, x ** 2); c.savePlt() c # displays in notebook cell .. image:: images/carousel.png """ self.imgs:List[Tuple[str, str]] = [] # Tuple[format, base64 img] self.defaultFormat = "jpeg" def saveBytes(self, _bytes:bytes, fmt:str=None): """Saves bytes as another image. :param fmt: format of image""" self.imgs.append((fmt or self.defaultFormat, base64.b64encode(_bytes).decode())) def save(self, f:Callable[[io.BytesIO], None]): """Generic image save function. Treat :class:`io.BytesIO` as if it's a file when you're doing this:: with open("file.txt") as f: pass # "f" is similar to io.BytesIO So, you can do stuff like:: import matplotlib.pyplot as plt, numpy as np x = np.linspace(-2, 2) plt.plot(x, x**2) c = k1lib.viz.Carousel() c.save(lambda io: plt.savefig(io, format="png")) :param f: lambda that provides a :class:`io.BytesIO` for you to write to """ byteArr = io.BytesIO(); f(byteArr); byteArr.seek(0) self.saveBytes(byteArr.read()) def savePlt(self): """Saves current plot from matplotlib""" self.save(lambda byteArr: plt.savefig(byteArr, format=self.defaultFormat)) plt.clf() def savePIL(self, image): """Saves a PIL image""" self.save(lambda byteArr: image.save(byteArr, format=self.defaultFormat)) def saveFile(self, fileName:str, fmt:str=None): """Saves image from file. :param fmt: format of the file. Will figure out from file extension automatically if left empty """ with open(fileName, "rb") as f: if fmt is None: # automatically infer image format baseName = os.path.basename(fileName) if "." 
in baseName: fmt = baseName.split(".")[-1] self.saveBytes(f.read(), fmt) def saveGraphviz(self, g): """Saves a graphviz graph""" import tempfile; a = tempfile.NamedTemporaryFile() g.render(a.name, format="jpeg"); self.saveFile(f"{a.name}.jpeg") def pop(self): """Pops last image""" return self.imgs.pop() def __getitem__(self, idx): return self.imgs[idx] def _repr_html_(self): imgs = [f"\"<img src='data:image/{fmt};base64, {img}' />\"" for fmt, img in self.imgs] idx = Carousel._idx.value pre = f"k1c_{idx}" html = f""" <style> .{pre}_btn {{ cursor: pointer; padding: 10px 15px; background: #9e9e9e; float: left; margin-right: 5px; color: #000; user-select: none }} .{pre}_btn:hover {{ background: #4caf50; color: #fff; }} </style> <div> <div id="{pre}_prevBtn" class="{pre}_btn">Prev</div> <div id="{pre}_nextBtn" class="{pre}_btn">Next</div> <div style="clear:both"/> <div id="{pre}_status" style="padding: 10px"></div> </div> <div id="{pre}_imgContainer"></div> <script> {pre}_imgs = [{','.join(imgs)}]; {pre}_imgIdx = 0; function {pre}_display() {{ document.querySelector("#{pre}_imgContainer").innerHTML = {pre}_imgs[{pre}_imgIdx]; document.querySelector("#{pre}_status").innerHTML = "Image: " + ({pre}_imgIdx + 1) + "/" + {pre}_imgs.length; }}; document.querySelector("#{pre}_prevBtn").onclick = () => {{ {pre}_imgIdx -= 1; {pre}_imgIdx = Math.max({pre}_imgIdx, 0); {pre}_display(); }}; document.querySelector("#{pre}_nextBtn").onclick = () => {{ {pre}_imgIdx += 1; {pre}_imgIdx = Math.min({pre}_imgIdx, {pre}_imgs.length - 1); {pre}_display(); }}; {pre}_display(); </script> """ return html c = Carousel() x = np.linspace(-2, 2); plt.plot(x, x ** 2); c.savePlt() x = np.linspace(-1, 3); plt.plot(x, x ** 2); c.savePlt(); c #export def confusionMatrix(matrix:torch.Tensor, categories:List[str]=None, **kwargs): """Plots a confusion matrix. Example:: k1lib.viz.confusionMatrix(torch.rand(5, 5), ["a", "b", "c", "d", "e"]) .. image:: images/confusionMatrix.png :param matrix: 2d matrix of shape (n, n) :param categories: list of string categories :param kwargs: keyword args passed into :meth:`plt.figure`""" if isinstance(matrix, torch.Tensor): matrix = matrix.numpy() if categories is None: categories = [f"{e}" for e in range(len(matrix))] fig = plt.figure(**{"dpi":100, **kwargs}); ax = fig.add_subplot(111) cax = ax.matshow(matrix); fig.colorbar(cax) with k1lib.ignoreWarnings(): ax.set_xticklabels([''] + categories, rotation=90) ax.set_yticklabels([''] + categories) # Force label at every tick ax.xaxis.set_major_locator(mpl.ticker.MultipleLocator(1)) ax.yaxis.set_major_locator(mpl.ticker.MultipleLocator(1)) ax.xaxis.set_label_position('top') plt.xlabel("Predictions"); plt.ylabel("Ground truth") confusionMatrix(torch.rand(5, 5), ["a", "b", "c", "d", "e"]) c = Carousel() confusionMatrix(torch.rand(5, 5), ["a", "b", "c", "d", "e"]) c.savePlt(); assert len(c[0][1]) > 10000; c #export def FAnim(fig, f, frames, *args, **kwargs): """Matplotlib function animation, 60fps. Example:: # line below so that the animation is displayed in the notebook. Included in :mod:`k1lib.imports` already, so you don't really have to do this! 
plt.rcParams["animation.html"] = "jshtml" x = np.linspace(-2, 2); y = x**2 fig, ax = plt.subplots() plt.close() # close cause it'll display 1 animation, 1 static if we don't do this def f(frame): ax.clear() ax.set_ylim(0, 4); ax.set_xlim(-2, 2) ax.plot(x[:frame], y[:frame]) k1lib.FAnim(fig, f, len(x)) # plays animation in cell :param fig: figure object from `plt.figure(...)` command :param f: function that accepts 1 frame from `frames`. :param frames: number of frames, or iterator, to pass into function""" return partial(mpl.animation.FuncAnimation, interval=1000/30)(fig, f, frames, *args, **kwargs) plt.rcParams["animation.html"] = "jshtml" x = np.linspace(-2, 2); y = x**2 fig, ax = plt.subplots() plt.close() # close cause it'll display 1 animation, 1 static if we don't do this def f(frame): ax.clear() ax.set_ylim(0, 4); ax.set_xlim(-2, 2) ax.plot(x[:frame], y[:frame]) FAnim(fig, f, len(x)); # plays animation in cell #export from torch import nn from k1lib.cli import op def mask(img:torch.Tensor, act:torch.Tensor) -> torch.Tensor: """Shows which part of the image the network is focusing on. :param img: the image, expected to have dimension of (3, h, w) :param act: the activation, expected to have dimension of (x, y), and with elements from 0 to 1.""" *_, h, w = img.shape mask = act[None,] | nn.AdaptiveAvgPool2d([h//16, w//16]) | nn.AdaptiveAvgPool2d([h//8, w//8]) | nn.AdaptiveAvgPool2d([h, w]) return mask * img | op().permute(1, 2, 0) !../export.py viz
<img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/> # Generate Readme for Awesome Notebooks ``` import os import naas_drivers import urllib.parse import json import copy import nbformat from nbconvert import MarkdownExporter from papermill.iorw import ( load_notebook_node, write_ipynb, ) ``` ## Variables ``` readme_template = "README_template.md" readme = "README.md" json_file = "templates.json" replace_var = "[[DYNAMIC_LIST]]" current_file = '.' notebook_ext = '.ipynb' github_url = 'https://github.com/jupyter-naas/awesome-notebooks/tree/master' github_download_url = 'https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/' naas_download_url='https://app.naas.ai/user-redirect/naas/downloader?url=' naas_logo='https://img.shields.io/badge/-Open%20in%20Naas-success?labelColor=000000&logo=data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPHN2ZyB3aWR0aD0iMTAyNHB4IiBoZWlnaHQ9IjEwMjRweCIgdmlld0JveD0iMCAwIDEwMjQgMTAyNCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB4bWxuczp4bGluaz0iaHR0cDovL3d3dy53My5vcmcvMTk5OS94bGluayIgdmVyc2lvbj0iMS4xIj4KIDwhLS0gR2VuZXJhdGVkIGJ5IFBpeGVsbWF0b3IgUHJvIDIuMC41IC0tPgogPGRlZnM+CiAgPHRleHQgaWQ9InN0cmluZyIgdHJhbnNmb3JtPSJtYXRyaXgoMS4wIDAuMCAwLjAgMS4wIDIyOC4wIDU0LjUpIiBmb250LWZhbWlseT0iQ29tZm9ydGFhLVJlZ3VsYXIsIENvbWZvcnRhYSIgZm9udC1zaXplPSI4MDAiIHRleHQtZGVjb3JhdGlvbj0ibm9uZSIgZmlsbD0iI2ZmZmZmZiIgeD0iMS4xOTk5OTk5OTk5OTk5ODg2IiB5PSI3MDUuMCI+bjwvdGV4dD4KIDwvZGVmcz4KIDx1c2UgaWQ9Im4iIHhsaW5rOmhyZWY9IiNzdHJpbmciLz4KPC9zdmc+Cg==' ``` ## Get files list ``` total = [] for root, directories, files in os.walk(current_file, topdown=False): total.append({"root": root, "directories":directories, "files":files}) total.sort(key=lambda x: x.get('root')) ``` ## Set 'Naas Download' link on notebook ``` def get_open_button(download_link): return f"""<a href="{download_link}" target="_parent"><img src="{naas_logo}"/></a>""" def get_title(folder_nice, file_nice, download_link): return f"""# {folder_nice} - {file_nice}\n{get_open_button(download_link)}""" def get_tags(text): result = [] tags = text.split(' ') for tag in tags: if len(tag) >= 2 and tag[0] == '#' and tag[1] != ' ' and tag[1] != '#': result.append(tag) return result def set_notebook_title(notebook_path, title_source): header_found = False tag_found = False tags = None count = 0 nb = load_notebook_node(notebook_path) nb = copy.deepcopy(nb) for cell in nb.cells: source = cell.source if cell.cell_type == "code": nb.cells[count].outputs = [] if header_found and not tag_found: if cell.cell_type == "markdown": tags = get_tags(cell.source) tag_found = True if not header_found and cell.cell_type == "markdown" and len(source) > 2 and source[0] == '#' and source[1] == ' ': nb.cells[count].source = title_source header_found = True count += 1 write_ipynb(nb, notebook_path) # if notebook_path == "LinkedIn/LinkedIn_Send_message_to_profile.ipynb": # (body, resources) = MarkdownExporter().from_notebook_node(nb) # f = open(notebook_path.replace(".ipynb", ".md"), "w") # f.write(body) # f.close() return tags ``` ## Convert filepath in Markdown text ``` def get_file_md(folder_nice, folder_url, files, json_templates, title_sep="##", subtitle_sep="*"): md = "" if (len(files) > 0): md += f"\n{title_sep} {folder_nice}\n" for file in files: # print(file) if file.endswith(notebook_ext): file_url = urllib.parse.quote(file) file_nice = file.replace('_', ' ') file_nice = file_nice.replace(notebook_ext, '') file_nice = file_nice.replace(folder_nice, '') file_nice = 
file_nice.strip() if (file_nice != ""): file_nice = file_nice[0].capitalize() + file_nice[1:] path = urllib.parse.unquote(f"{folder_url}/{file_url}") dl_url = f"{naas_download_url}{github_download_url}{folder_url}/{file_url}" title = get_title(folder_nice, file_nice, dl_url) tags = set_notebook_title(path, title) nb_redirect = f"[{file_nice}]({github_url}/{folder_url}/{file_url})" open_button = get_open_button(dl_url) md += f"{subtitle_sep} {nb_redirect}\n" json_templates.append({ 'tool': folder_nice, 'notebook': file_nice, 'tags': tags, 'update': '', 'action': open_button }) return md ``` ## Create list of all notebooks ``` generated_list = "" template_json = [] for cur in total: root = cur.get('root') md_round = "" directories = cur.get('directories') files = cur.get('files') files = sorted(files) if ('.git' not in root and '.ipynb_checkpoints' not in root and '.' != root): folder_nice = root.replace('./', '') folder_url = urllib.parse.quote(folder_nice) if ('/' not in folder_nice): md_round += get_file_md(folder_nice, folder_url, files, template_json) elif ('/' in folder_nice): folder_url = urllib.parse.quote(folder_nice) subfolder_nice = folder_nice.split('/')[1].replace('_', ' ').replace(folder_nice, '').strip() md_round += get_file_md(subfolder_nice, folder_url, files, template_json, "\t###", "\t-") elif ('.ipynb_checkpoints' in root): # print(root, files) for file in files: try: os.remove(os.path.join(root, file)) except: pass try: os.rmdir(root) except: pass # print(md_round) generated_list += md_round ``` ## Preview the generated list ``` naas_drivers.markdown.display(generated_list) template = open(readme_template).read() template = template.replace(replace_var, generated_list) f = open(readme, "w+") f.write(template) f.close() f = open(json_file, "w") f.write(json.dumps(template_json)) f.close ```
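For reference, here is a quick, purely illustrative check of how the `get_tags` helper above behaves on a sample markdown cell (the sample string below is made up):

```
sample_cell = "#naas #automation # not-a-tag ##also-not-a-tag"
print(get_tags(sample_cell))
# -> ['#naas', '#automation']: only tokens starting with a single '#' followed by a character are kept
```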
``` import pandas as pd import matplotlib.pyplot as plt import seaborn as sns sns.set_theme() import tensorflow as tf from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.layers import Dropout from tensorflow.keras.layers import Flatten from tensorflow.keras.layers import Conv2D from tensorflow.keras.layers import MaxPooling2D from tensorflow.keras.utils import to_categorical from sklearn.preprocessing import MinMaxScaler from sklearn.model_selection import train_test_split from sklearn.utils import shuffle from sklearn.metrics import confusion_matrix import warnings warnings.filterwarnings('ignore') dataset = pd.read_csv("../../data/A_Z Handwritten Data.csv").astype('float32') dataset.rename(columns={'0':'label'}, inplace=True) mnist_data = pd.read_csv("../../data/mnist_train.csv").astype('float32') mnist_data.rename(columns={'0':'label'}, inplace=True) mnist_data.iloc[:,0] = mnist_data.iloc[:,0].replace({0.0:26.0, 1.0:27.0, 2.0:28.0, 3.0:29.0, 4.0:30.0, 5.0:31.0, 6.0:32.0, 7.0:33.0, 8.0:34.0, 9.0:35.0}) print(dataset.shape[0]) dataset = dataset.append(mnist_data) print(dataset.shape[0]) X = dataset.drop('label',axis = 1) y = dataset['label'] X_shuffle = shuffle(X) plt.figure(figsize = (3,2.5), frameon=False) plt.rcParams["axes.grid"] = False row, col = 2, 2 for i in range(4): plt.subplot(col, row, i+1) plt.imshow( X_shuffle.iloc[i].values.reshape(28,28), interpolation='nearest', cmap='Greys') plt.show() label_mapper = { 0:'A', 1:'B', 2:'C', 3:'D', 4:'E', 5:'F', 6:'G', 7:'H', 8:'I', 9:'J', 10:'K', 11:'L', 12:'M', 13:'N', 14:'O', 15:'P', 16:'Q', 17:'R', 18:'S', 19:'T', 20:'U', 21:'V', 22:'W', 23:'X', 24:'Y', 25:'Z' ,26:'0', 27:'1', 28:'2', 29:'3', 30:'4', 31:'5', 32:'6', 33:'7', 34:'8', 35:'9'} dataset['label'] = dataset['label'].map(label_mapper) label_size = dataset.groupby('label').size() label_size.plot.barh(figsize=(6,6)) plt.title("Character class counts") plt.show() # split data+labels X_train, X_test, y_train, y_test = train_test_split(X,y) # scale data standard_scaler = MinMaxScaler() X_train = standard_scaler.fit_transform(X_train) X_test = standard_scaler.transform(X_test) print(X_train.shape[0], X_test.shape[0]) X_shuffle = shuffle(X_train) plt.figure(figsize = (5, 4), frameon=False) plt.rcParams["axes.grid"] = False plt.axis('off') row, col = 2, 2 for i in range(4): plt.subplot(col, row, i+1) plt.imshow( X_shuffle[i].reshape(28,28), interpolation='nearest', cmap='Greys') plt.show() # reshaping 1D array to 2D: 784 = 28*28 X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32') X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32') y_train = to_categorical(y_train) y_test = to_categorical(y_test) #define model model = Sequential() model.add(Conv2D(32, (5, 5), input_shape=(28, 28, 1), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.3)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dense(len(y.unique()), activation='softmax')) # compile model model.compile( loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'] ) model.summary() history = model.fit( X_train, y_train, validation_data=(X_test, y_test), epochs=5, batch_size=200, verbose=2 ) scores = model.evaluate(X_test, y_test, verbose=0) print("CNN model Score: ", scores[1]) plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Model loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='upper right') plt.show() pred = 
model.predict(X_test) sample_test = X_test[12].reshape(28, 28) plt.figure(figsize = (3,2.5), frameon=False) plt.rcParams["axes.grid"] = False plt.imshow( sample_test, interpolation='nearest', cmap='Greys' ) plt.show() label_mapper[pred[12].argmax()] cm = confusion_matrix( y_test.argmax(axis=1), pred.argmax(axis=1) ) df_cm = pd.DataFrame( cm, range(36), range(36) ) plt.figure(figsize = (10,7)) sns.set_theme(font_scale=0.7) sns.heatmap(df_cm, annot=True) model.save('char&mnist_recog.h5') ```
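As a quick sanity check after saving, the model can be reloaded and used for a single prediction. A minimal sketch, assuming the saved file above and the `X_test` / `label_mapper` objects from this notebook are still available:

```
from tensorflow.keras.models import load_model
import numpy as np

reloaded = load_model('char&mnist_recog.h5')
img = X_test[0]                                 # any (28, 28, 1) sample scaled to [0, 1]
probs = reloaded.predict(img[np.newaxis, ...])  # add the batch dimension
print(label_mapper[probs.argmax()])             # map the class index back to a character/digit
```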
# Introduction to Probability and Statistics | In this notebook, we will play around with some of the concepts we have previously discussed. Many concepts from probability and statistics are well-represented in major libraries for data processing in Python, such as `numpy` and `pandas`. ``` import numpy as np import pandas as pd import random import matplotlib.pyplot as plt ``` ## Random Variables and Distributions Let's start by drawing a sample of 30 values from a uniform distribution of integers from 0 to 10 (both endpoints are included, since `random.randint` is inclusive). We will also compute the mean and variance. ``` sample = [ random.randint(0,10) for _ in range(30) ] print(f"Sample: {sample}") print(f"Mean = {np.mean(sample)}") print(f"Variance = {np.var(sample)}") ``` To visually estimate how many different values there are in the sample, we can plot the **histogram**: ``` plt.hist(sample) plt.show() ``` ## Analyzing Real Data Mean and variance are very important when analyzing real-world data. Let's load the data about baseball players from [SOCR MLB Height/Weight Data](http://wiki.stat.ucla.edu/socr/index.php/SOCR_Data_MLB_HeightsWeights) ``` df = pd.read_csv("../../data/SOCR_MLB.tsv",sep='\t',header=None,names=['Name','Team','Role','Height','Weight','Age']) df ``` > We are using a package called **Pandas** here for data analysis. We will talk more about Pandas and working with data in Python later in this course. Let's compute average values for age, height and weight: ``` df[['Age','Height','Weight']].mean() ``` Now let's focus on height, and compute the standard deviation and variance: ``` print(list(df['Height'])[:20]) mean = df['Height'].mean() var = df['Height'].var() std = df['Height'].std() print(f"Mean = {mean}\nVariance = {var}\nStandard Deviation = {std}") ``` In addition to the mean, it makes sense to look at the median value and the quartiles. They can be visualized using a **box plot**: ``` plt.figure(figsize=(10,2)) plt.boxplot(df['Height'],vert=False,showmeans=True) plt.grid(color='gray',linestyle='dotted') plt.show() ``` We can also make box plots of subsets of our dataset, for example, grouped by player role. ``` df.boxplot(column='Height',by='Role') plt.xticks(rotation='vertical') plt.show() df.boxplot(column='Age',by='Role') plt.xticks(rotation='vertical') plt.show() ``` > **Note**: This diagram suggests that, on average, the height of first basemen is greater than the height of second basemen. Later we will learn how to test this hypothesis more formally, and how to demonstrate that the difference is statistically significant. Age, height and weight are all continuous random variables. What do you think their distribution is? A good way to find out is to plot the histogram of values: ``` df['Weight'].hist(bins=15) plt.suptitle('Weight distribution of MLB Players') plt.xlabel('Weight') plt.ylabel('Count') plt.show() df['Age'].hist(bins=15) plt.suptitle('Age distribution of MLB Players') plt.xlabel('Age') plt.ylabel('Count') plt.show() df['Height'].hist(bins=15) plt.suptitle('Height distribution of MLB Players') plt.xlabel('Height') plt.ylabel('Count') plt.show() ``` ## Normal Distribution Let's create an artificial sample of weights that follows a normal distribution with the same mean and variance as the real data: ``` generated = np.random.normal(mean,std,1000) generated[:20] plt.hist(generated,bins=15) plt.show() plt.hist(np.random.normal(0,1,50000),bins=300) plt.show() ``` Since many real-world quantities are approximately normally distributed, we should not use a uniform random number generator when we want realistic sample data.
Here is what happens if we try to generate weights with a uniform distribution (generated by `np.random.rand`): ``` wrong_sample = np.random.rand(1000)*2*std+mean-std plt.hist(wrong_sample) plt.show() ``` ## Confidence Intervals Let's now calculate confidence intervals for the weights and heights of baseball players. We will use the code [from this stackoverflow discussion](https://stackoverflow.com/questions/15033511/compute-a-confidence-interval-from-sample-data): ``` import scipy.stats def mean_confidence_interval(data, confidence=0.95): a = 1.0 * np.array(data) n = len(a) m, se = np.mean(a), scipy.stats.sem(a) h = se * scipy.stats.t.ppf((1 + confidence) / 2., n-1) return m, h for p in [0.85, 0.9, 0.95]: m, h = mean_confidence_interval(df['Weight'].fillna(method='pad'),p) print(f"p={p:.2f}, mean = {m:.2f}±{h:.2f}") ``` ## Hypothesis Testing Let's explore different roles in our baseball players dataset: ``` df.groupby('Role').agg({ 'Height' : 'mean', 'Weight' : 'mean', 'Age' : 'count'}).rename(columns={ 'Age' : 'Count'}) ``` Let's test the hypothesis that First Basemen are taller than Second Basemen. The simplest way to do it is to compare the confidence intervals: ``` for p in [0.85,0.9,0.95]: m1, h1 = mean_confidence_interval(df.loc[df['Role']=='First_Baseman',['Height']],p) m2, h2 = mean_confidence_interval(df.loc[df['Role']=='Second_Baseman',['Height']],p) print(f'Conf={p:.2f}, 1st basemen height: {m1-h1[0]:.2f}..{m1+h1[0]:.2f}, 2nd basemen height: {m2-h2[0]:.2f}..{m2+h2[0]:.2f}') ``` We can see that the intervals do not overlap. A more statistically rigorous way to test the hypothesis is to use **Student's t-test**: ``` from scipy.stats import ttest_ind tval, pval = ttest_ind(df.loc[df['Role']=='First_Baseman',['Height']], df.loc[df['Role']=='Second_Baseman',['Height']],equal_var=False) print(f"T-value = {tval[0]:.2f}\nP-value: {pval[0]}") ``` The two values returned by the `ttest_ind` function are: * The p-value is the probability of observing a difference in means at least this large if the two groups actually had the same mean. In our case, it is very low, meaning that there is strong evidence that first basemen are taller. * The t-value is the normalized difference of the means used in the t-test; it is compared against a threshold value for a given confidence level. ## Simulating Normal Distribution with Central Limit Theorem The pseudo-random generator in Python is designed to give us a uniform distribution. If we want to create a generator for the normal distribution, we can use the central limit theorem. To get an (approximately) normally distributed value we will simply compute the mean of a uniformly generated sample. ``` def normal_random(sample_size=100): sample = [random.uniform(0,1) for _ in range(sample_size) ] return sum(sample)/sample_size sample = [normal_random() for _ in range(100)] plt.hist(sample) plt.show() ``` ## Correlation and Evil Baseball Corp Correlation allows us to find relationships between data sequences. In our toy example, let's pretend there is an evil baseball corporation that pays its players according to their height - the taller the player is, the more money he/she gets. Suppose there is a base salary of $1000, and an additional bonus from $0 to $100, depending on height. We will take the real players from MLB, and compute their imaginary salaries: ``` heights = df['Height'] salaries = 1000+(heights-heights.min())/(heights.max()-heights.mean())*100 print(list(zip(heights,salaries))[:10]) ``` Let's now compute the covariance and correlation of those sequences.
`np.cov` will give us the so-called **covariance matrix**, which is an extension of covariance to multiple variables. The element $M_{ij}$ of the covariance matrix $M$ is the covariance between input variables $X_i$ and $X_j$, and the diagonal values $M_{ii}$ are the variances of the $X_i$. Similarly, `np.corrcoef` will give us the **correlation matrix**. ``` print(f"Covariance matrix:\n{np.cov(heights,salaries)}") print(f"Covariance = {np.cov(heights,salaries)[0,1]}") print(f"Correlation = {np.corrcoef(heights,salaries)[0,1]}") ``` A correlation equal to 1 means that there is a perfect **linear relation** between the two variables. We can see the linear relation visually by plotting one value against the other: ``` plt.scatter(heights,salaries) plt.show() ``` Let's see what happens if the relation is not linear. Suppose that our corporation decided to hide the obvious linear dependency between heights and salaries, and introduced some non-linearity into the formula, such as `sin`: ``` salaries = 1000+np.sin((heights-heights.min())/(heights.max()-heights.mean()))*100 print(f"Correlation = {np.corrcoef(heights,salaries)[0,1]}") ``` In this case, the correlation is slightly smaller, but it is still quite high. Now, to make the relation even less obvious, we might want to add some extra randomness to the salary. Let's see what happens: ``` salaries = 1000+np.sin((heights-heights.min())/(heights.max()-heights.mean()))*100+np.random.random(size=len(heights))*20-10 print(f"Correlation = {np.corrcoef(heights,salaries)[0,1]}") plt.scatter(heights, salaries) plt.show() ``` > Can you guess why the dots line up into vertical lines like this? We have observed the correlation between an artificially engineered quantity like salary and the observed variable *height*. Let's also see whether two observed variables, such as height and weight, correlate: ``` np.corrcoef(df['Height'],df['Weight']) ``` Unfortunately, we did not get any results - only some strange `nan` values. This is due to the fact that some of the values in our series are undefined, represented as `nan`, which causes the result of the operation to be undefined as well. By looking at the matrix we can see that `Weight` is the problematic column: the self-correlation of `Height` was computed without trouble, while every entry involving `Weight` is `nan`. > This example shows the importance of **data preparation** and **cleaning**. Without proper data we cannot compute anything. Let's use the `fillna` method to fill the missing values, and compute the correlation: ``` np.corrcoef(df['Height'],df['Weight'].fillna(method='pad')) ``` There is indeed a correlation, but not such a strong one as in our artificial example. Indeed, if we look at the scatter plot of one value against the other, the relation is much less obvious: ``` plt.scatter(df['Height'],df['Weight']) plt.xlabel('Height') plt.ylabel('Weight') plt.show() ``` ## Conclusion In this notebook, we have learnt how to perform basic operations on data to compute statistical functions. We now know how to use the sound apparatus of math and statistics to test hypotheses, and how to compute confidence intervals for a random variable given a data sample.
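As a small follow-up to the covariance discussion above: correlation is just covariance normalised by the standard deviations of the two variables. A quick numerical check (the sample data below is made up):

```
import numpy as np

x = np.random.normal(size=1000)
y = 2 * x + np.random.normal(size=1000)

cov = np.cov(x, y)[0, 1]
corr = np.corrcoef(x, y)[0, 1]
# np.cov uses ddof=1, so use the matching sample standard deviation here
print(np.isclose(corr, cov / (np.std(x, ddof=1) * np.std(y, ddof=1))))  # True
```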
``` %matplotlib inline %reload_ext autoreload %autoreload 2 %config InlineBackend.figure_format = 'retina' %reload_ext lab_black import logging import string import sys import os import pandas as pd from glob import glob import matplotlib.pyplot as plt import numpy as np import seaborn as sns from src.figure_utilities import ( PAGE_HEIGHT, ONE_COLUMN, TWO_COLUMN, save_figure, set_figure_defaults, ) set_figure_defaults() from glob import glob import os import pandas as pd from src.parameters import PROCESSED_DATA_DIR, STATE_COLORS, STATE_ORDER from src.figure_utilities import TWO_COLUMN, PAGE_HEIGHT, save_figure import seaborn as sns import matplotlib.pyplot as plt from src.visualization import ( plot_category_counts, plot_category_duration, plot_linear_position_markers, ) from loren_frank_data_processing import make_tetrode_dataframe from src.parameters import ANIMALS, STATE_ORDER, _BRAIN_AREAS tetrode_info = make_tetrode_dataframe(ANIMALS) data_type, dim = "sorted_spikes", "1D" n_unique_spiking = 2 file_paths = glob( os.path.join(PROCESSED_DATA_DIR, f"*_{data_type}_{dim}_replay_info_80.csv") ) def read_file(file_path): try: return pd.read_csv(file_path) except pd.errors.EmptyDataError: pass replay_info = pd.concat( [read_file(file_path) for file_path in file_paths], axis=0, ).set_index(["animal", "day", "epoch", "ripple_number"]) replay_info = replay_info.loc[ replay_info.n_unique_spiking >= n_unique_spiking ].sort_index() is_brain_areas = tetrode_info.area.astype(str).str.upper().isin(_BRAIN_AREAS) n_tetrodes = ( tetrode_info.loc[is_brain_areas] .groupby(["animal", "day", "epoch"]) .tetrode_id.count() .rename("n_tetrodes") ) replay_info = pd.merge( replay_info.reset_index(), pd.DataFrame(n_tetrodes).reset_index() ).set_index(["animal", "day", "epoch", "ripple_number"]) for state in STATE_ORDER: replay_info[f"{state}_pct_unique_spiking"] = ( replay_info[f"{state}_n_unique_spiking"] / replay_info["n_tetrodes"] ) replay_info = replay_info.rename(index={"Cor": "cor"}).rename_axis( index={"animal": "Animal ID"} ) replay_info.head() fig, axes = plt.subplots( 1, 4, figsize=(TWO_COLUMN, PAGE_HEIGHT / 4), sharex=True, sharey=True, constrained_layout=True, ) dot_color = "black" dot_size = 2.5 # ax 0 df = pd.DataFrame( replay_info.groupby(["Animal ID", "day"]) .apply(lambda df: (df["is_classified"]).mean() * 100) .rename("Percentage of All SWRs") ).reset_index() sns.swarmplot( data=df, x="Percentage of All SWRs", y="Animal ID", ax=axes[0], size=dot_size, color=dot_color, clip_on=False, ) axes[0].set_title("Classified", fontsize=10) axes[0].grid(True, axis="y", linestyle="-", alpha=0.5) axes[0].set_xlabel("Percentage of\nAll SWRs") # ax 1 df = pd.DataFrame( replay_info.loc[replay_info.is_classified] .groupby(["Animal ID", "day"]) .apply( lambda df: (df["Hover"] | df["Hover-Continuous-Mix"] | df["Continuous"]).mean() * 100 ) .rename("Percentage of Classified SWRs") ).reset_index() sns.swarmplot( data=df, x="Percentage of Classified SWRs", y="Animal ID", ax=axes[1], size=dot_size, color=dot_color, clip_on=False, ) axes[1].set_ylabel("") axes[1].set_xlabel("Percentage of\nClassified SWRs") axes[1].set_title("Spatially Coherent", fontsize=10) axes[1].grid(True, axis="y", linestyle="-", alpha=0.5) # ax 2 df = pd.DataFrame( replay_info.loc[replay_info.is_classified] .groupby(["Animal ID", "day"]) .apply(lambda df: (df["Fragmented-Continuous-Mix"] | df["Fragmented"]).mean() * 100) .rename("Percentage of Classified SWRs") ).reset_index() sns.swarmplot( data=df, x="Percentage of Classified SWRs", y="Animal 
ID", ax=axes[2], size=dot_size, color=dot_color, clip_on=False, ) axes[2].set_ylabel("") axes[2].set_xlabel("Percentage of\nClassified SWRs") axes[2].set_title("Spatially Incoherent", fontsize=10) axes[2].grid(True, axis="y", linestyle="-", alpha=0.5) # ax 3 df = pd.DataFrame( replay_info.groupby(["Animal ID", "day"]) .apply(lambda df: (df["Continuous"]).mean() * 100) .rename("Percentage of Classified SWRs") ).reset_index() sns.swarmplot( data=df, x="Percentage of Classified SWRs", y="Animal ID", ax=axes[3], size=dot_size, color=dot_color, clip_on=False, ) axes[3].set_xlabel("Percentage of\nClassified SWRs") axes[3].set_ylabel("") axes[3].set_title("Continuous", fontsize=10) axes[3].grid(True, axis="y", linestyle="-", alpha=0.5) plt.xlim((0, 100)) sns.despine(offset=5) for ind in range(0, 4): axes[ind].spines["left"].set_visible(False) axes[ind].tick_params(left=False) n_animals = replay_info.reset_index()["Animal ID"].unique().size axes[0].set_yticklabels(np.arange(n_animals) + 1) from src.visualization import SHORT_STATE_NAMES from src.parameters import SHORT_STATE_ORDER, STATE_ORDER from upsetplot import UpSet def plot_category_counts(replay_info): df = replay_info.rename(columns=SHORT_STATE_NAMES).set_index( SHORT_STATE_ORDER[::-1] ) upset = UpSet( df, sort_sets_by=None, show_counts=False, subset_size="count", sort_by="cardinality", intersection_plot_elements=5, ) ax_dict = upset.plot() n_classified = replay_info.is_classified.sum() _, intersect_max = ax_dict["intersections"].get_ylim() ax_dict["intersections"].set_yticks(n_classified * np.arange(0, 0.6, 0.1)) ax_dict["intersections"].set_yticklabels(range(0, 60, 10)) ax_dict["intersections"].set_ylabel( "Percentage\nof Ripples", ha="center", va="center", rotation="horizontal", labelpad=30, ) ax_dict["intersections"].text( 9, n_classified * 0.45, f"N = {n_classified}", zorder=1000, fontsize=9 ) ax_dict["totals"].set_xticks([0, 0.5 * n_classified]) ax_dict["totals"].set_xticklabels([0, 50]) ax_dict["totals"].set_xlabel("Marginal Percentage\nof Ripples") ax_dict["totals"].set_ylim([-0.5, 4.4]) plt.suptitle("Most Common Combinations of Dynamics", fontsize=14, x=0.55, y=0.925) for i, color in enumerate(STATE_ORDER): rect = plt.Rectangle( xy=(0, len(STATE_ORDER) - i - 1.4), width=1, height=0.8, facecolor=STATE_COLORS[color], lw=0, zorder=0, alpha=0.25, ) ax_dict["shading"].add_patch(rect) return ax_dict ax_dict = plot_category_counts(replay_info.loc[replay_info.is_classified]) ```
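For readers unfamiliar with `upsetplot`, the `UpSet` call above expects a DataFrame whose index is a boolean MultiIndex encoding category membership. A toy sketch of that input format (all values below are made up):

```
import pandas as pd
from upsetplot import UpSet

# One row per SWR: boolean membership columns plus one ordinary data column
toy = pd.DataFrame({
    "Hover":      [True,  True,  False, False],
    "Continuous": [True,  False, True,  True],
    "Fragmented": [False, False, False, True],
    "duration":   [0.05,  0.08,  0.12,  0.07],
})
# Moving the boolean columns into the index gives UpSet the membership structure it needs
UpSet(toy.set_index(["Hover", "Continuous", "Fragmented"]), subset_size="count").plot()
```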
# Use a Gradient Boosting Regressor (GBR) method as model prediction Compared to the kNN method, GBR has more hyperparameters that need to be tuned to find the optimal bias-variance balance, that is between model complexity and generalization error. We will tune the hyperparameters by varying them in several stages. While this does not guarantee that we find the optimum value, it is probably a reasonable approach in most cases. It has the big advantage of reducing the dimensionality of the hyperparameter space and thus having a faster and computationally cheaper method for hyperparameter tunning. ``` import os import sys nb_dir = "./include_files" if nb_dir not in sys.path: sys.path.append(nb_dir) import numpy as np import matplotlib.pyplot as plt from matplotlib import gridspec from matplotlib.ticker import MultipleLocator, FormatStrFormatter, AutoMinorLocator %matplotlib inline import h5py as h5 from matplotlib.backends.backend_pdf import PdfPages import matplotlib.style as style style.use('fivethirtyeight') # plt.style.use("./include_files/marius.mplstyle") # fontSize = 15 # lineWidth = 1.5 colors = [u'#1f77b4', u'#ff7f0e', u'#2ca02c', u'#d62728', u'#9467bd', u'#8c564b', u'#e377c2', u'#7f7f7f', u'#bcbd22', u'#17becf'] ``` ## Load the data ``` data = np.load( "data/NN_feature_data_N=5e4.npz" ) for key in data.files: code = key + ' = data["' + key + '"]' print(code) exec( code ) print( "\nSize of the input feature vector: ", data_input.shape, len(name_input) ) print( "Size of the output vector: ", data_output.shape ) num_features = data_input.shape[1] ``` ## 1. Split the data into train/test and scale it to min-max values ``` from sklearn.preprocessing import MinMaxScaler from sklearn.model_selection import train_test_split raw_x_train, raw_x_test, raw_y_train, raw_y_test = train_test_split( data_input, data_output, \ test_size=.4, random_state=0 ) print( "Size of training set: %i" % raw_x_train.shape[0] ) print( "Size of testing set: %i" % raw_x_test.shape[0] ) x_scaler = MinMaxScaler() y_scaler = MinMaxScaler() x_train = x_scaler.fit_transform( raw_x_train ) x_test = x_scaler.transform( raw_x_test ) y_train = y_scaler.fit_transform( raw_y_train ) y_test = y_scaler.transform( raw_y_test ) # check the PDF of the training and test samples for the output variable bins = np.linspace(0.,1.,101) plt.xlabel( "Output variable" ) plt.ylabel( "PDF" ) discard = plt.hist( y_train, bins=bins, density=True, histtype='step', lw=3, label="train" ) discard = plt.hist( y_test, bins=bins, density=True, histtype='step', lw=3, label="test" ) plt.legend() ``` ## 3. Systematic hyperparameter tuning The hyperparameters we need to set include: - `loss`: a loss function to be minimized. We will use 'ls', which is basically MSE. - `max_depth`: the maximum depth limits the number of nodes in the trees; its best value depends on the interaction of the input variables; we will start with 10 and can tune it later. - `learning_rate`: learning rate shrinks the contribution of each tree; there is a trade-off between learning rate and boosting steps; we will use 0.02 and, for simplicity, keep it constant. When searching for the optimal hyperparameter values, we will use a 5 times higher learning rate to speed up the search. - `min_samples_split`: the minimum number of samples required to split an internal node; we will start with 50 and can tune it later. - `max_features`: the number of features to consider when looking for the best split; we will use the number of features in the data. 
- `subsample`: the fraction of samples to be used for fitting the individual trees; if smaller than 1.0, this results in Stochastic Gradient Boosting. We will use 0.9. - `n_estimators`: the number of boosting steps or decision trees. ### Step 1: Optimize `n_estimators` using high learning rate, `learning_rate=0.1` ``` from sklearn.ensemble import GradientBoostingRegressor from sklearn.model_selection import cross_val_score, GridSearchCV # candidates param_test_n_est = {'n_estimators': range(40, 100, 10)} # create the regressor gbr_n_est = GradientBoostingRegressor(loss='ls', learning_rate=0.1, max_features=num_features, max_depth=10, min_samples_split=50, subsample=0.9, random_state=0) # define hyperparameter search gsearch = GridSearchCV(estimator= gbr_n_est, param_grid = param_test_n_est, scoring='neg_mean_squared_error', cv=5, verbose=3) # perform search gsearch.fit( x_train, y_train.flatten() ) # print best n_estimators print( gsearch.best_params_ ) best_n_estimators = gsearch.best_params_['n_estimators'] print( "Best 'n_estimators' parameter:", best_n_estimators ) ``` ### Step 2: Optimize tree parameters, `max_depth` and `min_samples_split`, with best `n_estimators` ``` # candidates param_test_tree = {'max_depth': range(5, 16, 2), 'min_samples_split': range(10, 100, 20) } # create the regressor gbr_tree = GradientBoostingRegressor(loss='ls', learning_rate=0.1, max_features=num_features, subsample=0.9, n_estimators=best_n_estimators, random_state=0) # define hyperparameter search gsearch = GridSearchCV(estimator= gbr_tree, param_grid = param_test_tree, scoring='neg_mean_squared_error', cv=5, verbose=3) # perform search gsearch.fit( x_train, y_train.flatten() ) print( gsearch.best_params_ ) best_max_depth = gsearch.best_params_['max_depth'] best_min_samples_split = gsearch.best_params_['min_samples_split'] ``` ### Step 3: Lower `learning_rate` and increase `n_estimators` Here we use a factor of 5, so `learning_rate` is lowered to 0.02 and `n_estimators` is increased to 250. Also, cross-validation (CV) means using a smaller training set than the full one, so will underpredict the number of estimators that minimize the generalization error. To account for this effect, we further multiply `n_estimators` by 2. ``` factor = 5 learning_rate = 0.1 / factor n_estimators = int(best_n_estimators * factor * 2) # create the "optimised" regressor gbr = GradientBoostingRegressor(loss='ls', learning_rate=learning_rate, max_features=num_features, subsample=0.9, n_estimators=n_estimators, random_state=0, max_depth=best_max_depth, min_samples_split=best_min_samples_split, verbose=1 ) # fit the training data gbr.fit( x_train, y_train.flatten() ) ``` ## Plot the learning curve of the model ``` from sklearn.metrics import mean_squared_error test_score = np.zeros( (n_estimators,), dtype=np.float64 ) staged_predict = gbr.staged_predict(x_test) for i in range(n_estimators): test_score[i] = mean_squared_error( y_test, next(staged_predict) ) # plot the scores fig1 = plt.figure( figsize=(1*7.5,1*6.5) ) plt.xlabel( "number of estimators" ) plt.ylabel( "loss = MSE" ) train_curve, = plt.plot(gbr.train_score_, label='Loss on training set') test_curve, = plt.plot(test_score, label='Loss on test set') plt.legend() plt.show() ``` The loss of the test set approaches a constant value and seems to change little for number of estimators above 200. Is this really the case? Or is it an artifact of the large range covered by the y-axis due to large losses at the start of the training. 
We can further inspect this by zooming on the test set loss function for number of estimators > 200. ``` fig1 = plt.figure( figsize=(1*7.5,1*6.5) ) plt.xlabel( "number of estimators" ) plt.ylabel( "loss = MSE" ) sel = np.arange(n_estimators)>200 # select only points for number estimators > 200 plt.plot( np.arange(n_estimators)[sel], test_score[sel], c=test_curve.get_color(), label='Loss on test set' ) plt.legend() ``` So the loss for the test set keeps decreasing for up to around 470 estimators, after which increases slowly indicating mild overfitting. The decrease after 200 estimators is very low indicating only a marginal increase in prediction accuracy. ``` # calculate the predictions pred_y_test = gbr.predict(x_test) # save the test set predictions to a file outfile = "data_output/GBR/pred_y_test_full_model.npz" np.savez_compressed( outfile, pred_y_test=pred_y_test ) ``` ## Load the kNN predictions and compare against the GBR predictions ``` with np.load( "data_output/kNN/pred_y_test_full_model.npz" ) as data: pred_y_test_kNN = data["pred_y_test"][:,0] MSE = test_score[-1] # MSE of the GBR predcition MSE_kNN = mean_squared_error( y_test, pred_y_test_kNN ) print("MSE for GBR prediction: %.5f" % MSE) print("MSE for kNN prediction: %.5f" % MSE_kNN) print("Difference MSE kNN - GBR: %.5f (%.1f %%)" % (MSE_kNN-MSE, (MSE_kNN-MSE)*100./MSE) ) ``` The more complex GBR method leads only to a modest increase in prediction accuracy of only 4%. ## Compare the PDF of the true output and predicted values ``` bins = np.linspace(0.,1.,51) fig1 = plt.figure( figsize=(1*7.5,1*6.5) ) plt.xlabel( "output" ) plt.ylabel( "PDF" ) plt.hist( y_test, bins=bins, density=True, alpha=0.5, label="truth") plt.hist( pred_y_test, bins=bins, density=True, alpha=0.5, label="GBR prediction") plt.hist( pred_y_test_kNN, bins=bins, density=True, histtype='step', lw=3, color=colors[2], label="kNN prediction") plt.legend() ``` As expected, we find that the ML cannot fully reproduce the output PDF, failing to obtain the values in the tails of the distribution. This is to be expected since **it is the tendency of ML to make predictions towards the mean value.** Compared to kNN, the PDF of the GBR prediction is slightly wider and a bit closer to the true PDF **indicating that GBR indeed leads to better predictions.** However the differences between GBR and kNN the PDF are minor, in line with the observation that the GBR loss is only 4% lower than the kNN one. ``` def running_mean(x, y, x_bins): """Calculates the mean y in bins of x.""" mean = [] for i in range( len(x_bins)-1 ): sel = (x>x_bins[i]) * (x<=x_bins[i+1]) if sel.sum()>10: mean.append( y[sel].mean() ) # when enough point inside the bin else: mean.append( 0 ) return np.array(mean) SE = (pred_y_test - y_test.flatten())**2 SE_kNN = (pred_y_test_kNN - y_test.flatten())**2 fig1 = plt.figure( figsize=(1*7.5,1*6.5) ) plt.xlabel( "output" ) plt.ylabel( "MSE in output bins" ) # calculate the error as a function of target value target_bins = np.linspace( 0., 1., 21 ) target_vals = 0.5 * (target_bins[1:] + target_bins[:-1]) MSE_bins = running_mean( pred_y_test, SE, target_bins ) MSE_bins_kNN= running_mean( pred_y_test_kNN, SE_kNN, target_bins ) plt.xlim( [0.,1.]) valid = MSE_bins > 0. line, = plt.plot( target_vals[valid], MSE_bins[valid], label="GBR MSE in bins" ) plt.hlines( MSE, 0., 1., ls='--', color=line.get_color(), label="GBR MSE overall" ) valid = MSE_bins_kNN > 0. 
line, = plt.plot( target_vals[valid], MSE_bins_kNN[valid], label="kNN MSE in bins" ) plt.hlines( MSE_kNN, 0., 1., ls='--', color=line.get_color(), label="kNN MSE overall" ) plt.legend() ``` ## Calculate which features dominate the prediction ``` from sklearn.inspection import permutation_importance Nmax = 5000 # use a subset of the test set to speed up the calculation perm_imp = permutation_importance( gbr, x_test[:Nmax], y_test[:Nmax], n_repeats=5, random_state=0, \ scoring='neg_mean_squared_error' ) # save the permutation importance to a file outfile = "data_output/GBR/permutation_importance_test_full_model.npz" np.savez_compressed( outfile, importances_mean=perm_imp.importances_mean, importances_std=perm_imp.importances_std, \ importances=perm_imp.importances) # load the kNN permutation importance with np.load("data_output/kNN/permutation_importance_test_full_model.npz") as data: importances_mean_kNN = data["importances_mean"] # order of feature importance perm_sorted_idx = perm_imp.importances_mean.argsort() print( "List of feature indexes sorted by importance: ", perm_sorted_idx[::-1] ) print( "List of feature names sorted by importance: ", name_input[perm_sorted_idx][::-1] ) fig1 = plt.figure( figsize=(1*7.5,1*6.5) ) plt.title( "Permutation importance of features" ) plt.ylabel( "Features" ) plt.xlabel( "Increase in MSE [percentage]" ) plt.xlim( [0,60] ) plt.yticks( np.arange(10), name_input[perm_sorted_idx] ) plt.plot( perm_imp.importances_mean[perm_sorted_idx].T*100./MSE, np.arange(10), 's', ms=10, label="GBR" ) plt.plot( importances_mean_kNN[perm_sorted_idx].T*100./MSE_kNN, np.arange(10), 'D', ms=10, label="kNN" ) plt.legend( loc=4, title="Method" ) ``` We find differences in the permutation importance of features in GBR vs. kNN. These differences are somewhat larger than the 4% improvement in MSE between GBR and kNN indicating that even a small improvement in model accuracy can lead to considerable changes in the importances of features. ## Compare the feature importance when removing the high multicollinearity features In this case, there is only one high multicollinearity feature: feature 9 = 'mean L'. Normally we would need to calculate the optimal values of the hyperparameters again, but for simplicity and since we only remove 1 out of 10 features, we can as well use the previous optimal values. 
```
sel = np.ones( num_features, bool )
sel[8] = False
print( "Rerunning the GBR pipeline after removing the features:", name_input[~sel] )

# define the new features after removing the high multicollinearity features
# noMC = no multicollinearity
name_input_noMC = name_input[sel]
x_train_noMC = x_train[:,sel]
x_test_noMC = x_test[:,sel]

gbr_noMC = GradientBoostingRegressor(loss='ls', learning_rate=learning_rate, max_features=num_features-1, subsample=0.9, n_estimators=n_estimators,
                                     random_state=0, max_depth=best_max_depth, min_samples_split=best_min_samples_split, verbose=1 )

gbr_noMC.fit( x_train_noMC, y_train.flatten() )

MSE_noMC = mean_squared_error( gbr_noMC.predict( x_test_noMC ), y_test.flatten() )
print( "MSE no high MC features: \t%.5f" % MSE_noMC )
print( "MSE full features: \t%.5f" % MSE )
print( "Increase in MSE: \t%.5f (%.1f %%)" % (MSE_noMC-MSE, (MSE_noMC-MSE)*100./MSE) )

perm_imp_noMC = permutation_importance( gbr_noMC, x_test_noMC[:Nmax], y_test[:Nmax], n_repeats=5, random_state=0, \
                                        scoring='neg_mean_squared_error' )

# order of feature importance for the no multicollinearity case
perm_sorted_idx_noMC = perm_imp_noMC.importances_mean.argsort()
print( "List of feature indexes sorted by importance: ", perm_sorted_idx_noMC[::-1] )
print( "List of feature names sorted by importance: ", name_input_noMC[perm_sorted_idx_noMC][::-1] )

fig1 = plt.figure( figsize=(1*7.5,1*6.5) )
plt.title( "Permutation importance of GBR features" )
plt.ylabel( "Features" )
plt.xlabel( "Increase in MSE [percentage]" )
plt.xlim( [0,60] )
plt.yticks( np.arange(9), name_input_noMC[perm_sorted_idx_noMC] )

plt.plot( perm_imp.importances_mean[sel][perm_sorted_idx_noMC].T*100./MSE, np.arange(9), 's', ms=10, label="initial" )
plt.plot( perm_imp_noMC.importances_mean[perm_sorted_idx_noMC].T*100./MSE_noMC, np.arange(9), 'D', ms=10, \
          label="after high MC removal" )
plt.legend(loc=4)
```
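Coming back to the test-loss curve discussed above (minimum around 470 estimators): instead of reading the optimum off the staged-prediction curve by eye, one can take the argmin of `test_score` directly, or let scikit-learn stop the boosting early on an internal validation split. The sketch below assumes the arrays and hyperparameters defined earlier; the `validation_fraction`/`n_iter_no_change` arguments are standard `GradientBoostingRegressor` options, not something the original analysis used.
```
# 1) number of estimators at the minimum of the staged test-set MSE curve
best_n = int(np.argmin(test_score)) + 1
print( "Estimators at minimum test MSE: %i (MSE = %.5f)" % (best_n, test_score.min()) )

# 2) alternatively, stop early based on an internal validation split
gbr_es = GradientBoostingRegressor(loss='ls', learning_rate=learning_rate, max_features=num_features,
                                   subsample=0.9, n_estimators=n_estimators, random_state=0,
                                   max_depth=best_max_depth, min_samples_split=best_min_samples_split,
                                   validation_fraction=0.1, n_iter_no_change=20, tol=1e-5)
gbr_es.fit( x_train, y_train.flatten() )
print( "Early stopping used %i estimators" % gbr_es.n_estimators_ )
```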
<a href="https://colab.research.google.com/github/wizardcalidad/Machine_Learning_Artificial_Intelligence_Course/blob/main/basic_text_classification_tensorflow.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` import matplotlib.pyplot as plt import os import re import shutil import string import tensorflow as tf from tensorflow.keras import layers from tensorflow.keras import losses from tensorflow.keras import preprocessing from tensorflow.keras.layers.experimental.preprocessing import TextVectorization tf.__version__ url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz" dataset = tf.keras.utils.get_file("aclImdb_v1.tar.gz", url, untar=True, cache_dir='.', cache_subdir='') dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb') os.listdir(dataset_dir) train_dir = os.path.join(dataset_dir, 'train') os.listdir(train_dir) sample_file = os.path.join(train_dir, 'pos/1181_9.txt') with open(sample_file) as f: print(f.read()) ``` #### Load the Database we have ro remove all the additional folders in the IMDB folder before using the utility function text_dataset_from_directory ``` remove_dir = os.path.join(train_dir, 'unsup') shutil.rmtree(remove_dir) batch_size = 32 seed = 42 raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory( 'aclImdb/train', batch_size=batch_size, validation_split=0.2, subset='training', seed=seed) for text_batch, label_batch in raw_train_ds.take(1): for i in range(3): print("Review", text_batch.numpy()[i]) print("Label", label_batch.numpy()[i]) print("Label 0 corresponds to", raw_train_ds.class_names[0]) print("Label 1 corresponds to", raw_train_ds.class_names[1]) raw_val_ds = tf.keras.preprocessing.text_dataset_from_directory( 'aclImdb/train', batch_size=batch_size, validation_split=0.2, subset='validation', seed=seed) raw_test_ds = tf.keras.preprocessing.text_dataset_from_directory( 'aclImdb/test', batch_size=batch_size) ``` ### Prepare the dataset for training ##### here, we will standardize, tokenize and vectorize the data using preprocessing.TextVectorization layer ``` def custom_standardization(input_data): lowercase = tf.strings.lower(input_data) stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ') return tf.strings.regex_replace(stripped_html, '[%s]' % re.escape(string.punctuation), '') ``` i write a custom standardizer because TextVectorization doesnt remove html tags during standardization process. ``` max_features = 10000 sequence_length = 250 vectorize_layer = TextVectorization( standardize=custom_standardization, max_tokens=max_features, output_mode='int', output_sequence_length=sequence_length) # Make a text-only dataset (without labels), then call adapt train_text = raw_train_ds.map(lambda x, y: x) vectorize_layer.adapt(train_text) ``` ###### Adapt function is called to help fit the state of preprocessing layer to the dataset. The model will be made to build an index of strings to integer. 
```
def vectorize_text(text, label):
  text = tf.expand_dims(text, -1)
  return vectorize_layer(text), label

# retrieve a batch (of 32 reviews and labels) from the dataset
text_batch, label_batch = next(iter(raw_train_ds))
first_review, first_label = text_batch[0], label_batch[0]
print("Review", first_review)
print("Label", raw_train_ds.class_names[first_label])
print("Vectorized review", vectorize_text(first_review, first_label))

print("1287 ---> ",vectorize_layer.get_vocabulary()[1287])
print(" 313 ---> ",vectorize_layer.get_vocabulary()[313])
print('Vocabulary size: {}'.format(len(vectorize_layer.get_vocabulary())))

train_ds = raw_train_ds.map(vectorize_text)
val_ds = raw_val_ds.map(vectorize_text)
test_ds = raw_test_ds.map(vectorize_text)
```
### Configure the Dataset for Performance
```
AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
test_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE)
```
###### `.cache()` keeps data in memory after it's loaded off disk.
###### `.prefetch()` overlaps data preprocessing and model execution while training.
## Create the model
```
embedding_dim = 16

model = tf.keras.Sequential([
  layers.Embedding(max_features + 1, embedding_dim),
  layers.Dropout(0.2),
  layers.GlobalAveragePooling1D(),
  layers.Dropout(0.2),
  layers.Dense(1)])

model.summary()
```
## Loss function and optimizer
```
model.compile(loss=losses.BinaryCrossentropy(from_logits=True),
              optimizer='adam',
              metrics=tf.metrics.BinaryAccuracy(threshold=0.0))
```
## Train the model
```
epochs = 10
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs)
```
## Evaluate The Model
```
loss, accuracy = model.evaluate(test_ds)

print("Loss: ", loss)
print("Accuracy: ", accuracy)
```
### Create a plot of accuracy and loss over time
```
history_dict = history.history
history_dict.keys()

acc = history_dict['binary_accuracy']
val_acc = history_dict['val_binary_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']

epochs = range(1, len(acc) + 1)

# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

plt.show()

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')

plt.show()
```
###### Notice the training loss decreases with each epoch and the training accuracy increases with each epoch. This is expected when using gradient descent optimization: it should minimize the desired quantity on every iteration.
## Export the model
###### To deploy the model, or to let it process raw strings directly, you have to include the TextVectorization layer inside the model. We will create a new model using the weights of the one we just trained.
```
export_model = tf.keras.Sequential([
  vectorize_layer,
  model,
  layers.Activation('sigmoid')
])

export_model.compile(
    loss=losses.BinaryCrossentropy(from_logits=False), optimizer="adam", metrics=['accuracy']
)

# Test it with `raw_test_ds`, which yields raw strings
loss, accuracy = export_model.evaluate(raw_test_ds)
print(accuracy)
```
## Inference on new data
```
examples = [
  "Fresho is a stubborn boy!",
  "Francis is Okay.",
  "Francis is a terrible guy..."
] export_model.predict(examples) ```
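The exported model outputs sigmoid scores between 0 and 1. As a small follow-up sketch (the 0.5 cut-off is a common default rather than something chosen in this notebook), the scores can be mapped back to the `neg`/`pos` class names of the dataset:
```
scores = export_model.predict(examples)

# scores close to 1 read as positive reviews, scores close to 0 as negative
for text, score in zip(examples, scores.flatten()):
    label = raw_train_ds.class_names[int(score >= 0.5)]
    print("{:.3f}  {:>4}  {}".format(score, label, text))
```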
# DomainCAT: Domain Connectivity Analysis Tool
### Analyzing the domain-to-domain connectivity of an Iris API search
```
# Run This First: imports all the helper functions and sets stuff up
%run domain_cat_module.py

print("DomainCAT is ready to go")
```
## Iris REST API Credentials
```
api_username_ui = widgets.Text(placeholder='Iris API Username', description='Username:', layout={'width': '500px'}, value="")
api_pw_ui = widgets.Password(placeholder='Iris API Password', description='Password:', layout={'width': '500px'}, value="")
widgets.VBox([api_username_ui, api_pw_ui])
```
## Query Domain Data From Iris Investigate API

Enter either a list of newline-delimited domains into the Domains text box, _OR_ an Iris search hash into the hash text box.

Note: if both a list of domains _AND_ a search hash are entered, the list of domains will be queried and the search hash will be ignored.
```
domain_list_ui = widgets.Textarea(placeholder='Enter list of domains', description='Domains:', layout={'height': '300px', 'width': '700px'})
search_hash_ui = widgets.Text(placeholder='Enter list of domains', description='Hash:', layout={'width': '700px'})
show_iris_query_ui(domain_list_ui, search_hash_ui)

# Data Loading Config
query_api = True
save_search_to_disk = False
json_file_path = "data/dash_gov_dot_us.json"

if query_api:
    iris_results = query_iris_rest_api(api_username_ui, api_pw_ui, domain_list_ui, search_hash_ui)
    print(f'Iris API returned {len(iris_results)} domains')
    # save search results to disk to be used later
    if save_search_to_disk:
        with open(json_file_path, 'w') as f:
            json.dump(iris_results, f)
else:
    with open(json_file_path) as json_data:
        iris_results = json.loads(json_data.read())
    print(f'Loaded {len(iris_results)} domains from {json_file_path}')
```
## DomainCAT Configuration

Please refer to the DomainCAT documentation for details about these configuration options
```
config = Config()

# only analyze domains that are active (currently registered)
config.active_domains_only = True

# config for pivoting on matching substrings. Only matching substrings this long or longer will be used to create a pivot
config.longest_common_substring = 6

# List of substrings to ignore when creating pivots by matching substrings
config.ignore_substrings = []

# use the pivot count to scale how important the pivot is during graph layout. Smaller pivot counts have more influence, and vice versa
config.scale_edge_strength_by_pivot_count = True

# Global pivot count threshold. Any pivot with more than this value is discarded. sys.maxsize effectively keeps all pivots
config.global_count_threshold = sys.maxsize

# The smallest pivot count size to use. Default of 2 means no pivots are filtered out because its count is too low
config.min_pivot_size = 2

# theoretical max pivot size for calculating edge strengths
config.max_domains = 100000000

# If True DomainCAT will print out some debug info while building the connected graph of domains
config.print_debug_output = False
```
## Choose Which Pivots To Use & Build Domain Graph
```
pivot_category_config = {
    "adsense",
    "google_analytics",
    "create_date",
    "redirect_domain",
    "registrar",
    "ip_address",
    "ip_country_code",
    "ip_isp",
    "ip_asn",
    "ssl_hash",
    "ssl_subject",
    "ssl_org",
    "ssl_email",

    # Note: commented out ns_host and ns_ip because they double count ns connectedness when used with ns_domain.
"ns_domain", # "ns_host", "ns_ip", # # Note: commented out mx_host and mx_ip because they double counts mx connectedness when used with mx_domain "mx_domain", # "mx_host", "mx_ip", "tld", "longest_common_substring", } # Build the domain pivot graph structure config.pivot_category_config = pivot_category_config graph, pivot_categories, trimmed_domains = build_domain_pivot_graph(iris_results, config) ``` ## Trimmed Domains ``` print_trimmed_domains = True if print_trimmed_domains: if len(trimmed_domains["unconnected"]) > 0: print("trimmed unconnected domains:") for domain in trimmed_domains["unconnected"]: print(f" {domain}") if len(trimmed_domains["create_date"]) > 0: print("\ntrimmed domains with only create date pivot:") for domain in trimmed_domains["create_date"]: print(f" {domain}") ``` ## Draw the Domain Graph in an Interactive 3D Layout ``` build_3d_graph_layout(graph) build_3d_graph_layout(graph) build_3d_graph_layout(graph) ``` ## Calculate & Show Pivot Statistics ``` # Calculate a bunch of pivot statistics to see how well connected all the domains in the search result are calc_pivot_stats(graph, pivot_categories) ``` ## Draw the Domain Graph in an Interactive 2D Layout ``` # calculate the pivots shared in commmon across all selected domains shared_pivots = {} def get_2d_shared_pivots(graph, selected_domains): global shared_pivots shared_pivots = get_shared_pivots(graph, selected_domains) build_2d_graph_layout(graph, get_2d_shared_pivots) ``` ## Heatmap of which pivots connect the most domains together: by pivot category ``` if len(shared_pivots) == 0: print("Select a set of domains in the 2D graph") else: create_pivot_heatmaps(shared_pivots) ``` ## Removing domains from the graph Sometimes you find disconnected domains in the 3D graph visualization that make pivoting the viz really annoying. To remove domains from the graph, enter the domain(s) you want removed in the text box below and run the second cell. This will remove the domains from the graph structure without having to requery the data. After you do this, re-run the 3D viz and the domains should be gone. ``` remove_domains_ui = widgets.Textarea(placeholder='Enter domains to remove from graph', description='Domains:', layout={'height': '100px', 'width': '700px'}) remove_domains_ui # Run this to remove the domains in the above text box from the graph graph = remove_domains_from_graph(graph, remove_domains_ui) ```
```
import pandas as pd
import numpy as np
from tqdm import tqdm

df = pd.read_csv('completed data.csv')

from collections import Counter

# number of pixels per field
size = df.groupby('fid')['fid'].count()
df['size'] = df['fid'].map(size)

# aggregate the pixel-level data to one row per field
field_mean = df.groupby('fid').mean()
new_df = field_mean.reset_index()
Counter(new_df['label'])

# Band-index features computed for every acquisition date.
# The loops below create exactly the same columns, in the same order,
# as writing each one out by hand.
dates = ['20190606', '20190701', '20190706', '20190711', '20190721',
         '20190805', '20190815', '20190825', '20190909', '20190919',
         '20190924', '20191004', '20191103']

# normalized-difference style indices: (a - b) / (a + b)
nd_indices = {
    'nd':    ('B08', 'B04'),   # NDVI
    'ndre':  ('B08', 'B07'),   # red-edge NDVI variants
    'ndre2': ('B08', 'B06'),
    'ndre3': ('B08', 'B05'),
}
for name, (a, b) in nd_indices.items():
    for d in dates:
        new_df[f'{name}_{d}'] = (new_df[f'{a}_{d}'] - new_df[f'{b}_{d}']) / \
                                (new_df[f'{a}_{d}'] + new_df[f'{b}_{d}'])

# MTCI: (B08 - B06) / (B06 - B04)
for d in dates:
    new_df[f'mtci_{d}'] = (new_df[f'B08_{d}'] - new_df[f'B06_{d}']) / \
                          (new_df[f'B06_{d}'] - new_df[f'B04_{d}'])

# simple red-edge ratio: B07 / B03 - 1
for d in dates:
    new_df[f're_{d}'] = (new_df[f'B07_{d}'] / new_df[f'B03_{d}']) - 1

new_df['label'] = new_df['label'].astype('int16')
new_df['tile'] = new_df['tile'].astype('int16')

# label == 0 marks the unlabeled (test) fields
train = new_df[new_df['label'] != 0]
test = new_df[new_df['label'] == 0]

X = train.drop(['fid', 'label', 'row_loc', 'col_loc', 'tile'], axis=1)
y = train['label']
X

import catboost as ctb
import lightgbm as ltb
import xgboost as xgb

ctb1 = ctb.CatBoostClassifier(iterations=1700)
model = ctb1.fit(X, y)

X_test = test.drop(['fid', 'label', 'row_loc', 'col_loc', 'tile'], axis=1)
X_test

y_test = model.predict_proba(X_test)
y_test

prediction = pd.DataFrame(y_test, columns=['Crop_ID_1', 'Crop_ID_2', 'Crop_ID_3', 'Crop_ID_4',
                                           'Crop_ID_5', 'Crop_ID_6', 'Crop_ID_7'])
test
test = test.reset_index()
submission = pd.concat([test['fid'], prediction], axis=1)

# Post-processing: sharpen very confident predictions into hard 0/1 labels
for i in tqdm(range(len(submission))):
    for j in range(1, 7):
        if submission.iloc[i][j] > 0.90:
            submission.loc[i, submission.columns.difference([submission.columns[0]])] = 0
            submission.loc[i, submission.columns[j]] = 1

for i in tqdm(range(len(submission))):
    for j in range(1, 8):
        if submission.iloc[i][j] > 0.85:
            #submission.loc[i, submission.columns.difference([submission.columns[0]])] = 0
            submission.loc[i, submission.columns[j]] = 1

for i in tqdm(range(len(submission))):
    for j in range(5, 7):
        if submission.iloc[i][j] == max(submission.loc[i, submission.columns.difference([submission.columns[0]])]):
            submission.loc[i, submission.columns.difference([submission.columns[0]])] = 0
            submission.loc[i, submission.columns[j]] = 1

submission.head()

submission.columns = ['Field_ID', 'Crop_ID_1', 'Crop_ID_2', 'Crop_ID_3', 'Crop_ID_4',
                      'Crop_ID_5', 'Crop_ID_6', 'Crop_ID_7']
submission.to_csv('Band GE 0.9to1Rto0 0.85to1 Max56to1R0.csv', index=False)
```
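The classifier above is trained on all labeled fields and is evaluated only through the competition submission. As a sanity check that is not part of the original notebook, one could hold out a stratified validation split of the labeled fields before the final fit; this sketch assumes the `X` and `y` defined above:
```
import numpy as np
import catboost as ctb
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss, accuracy_score

# hold out 20% of the labeled fields, stratified by crop label
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)

val_model = ctb.CatBoostClassifier(iterations=1700)
val_model.fit(X_tr, y_tr, eval_set=(X_val, y_val))

val_proba = val_model.predict_proba(X_val)
print('hold-out log loss:', log_loss(y_val, val_proba))
print('hold-out accuracy:', accuracy_score(y_val, np.ravel(val_model.predict(X_val))))
```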
```
import pandas as pd
housing_df = pd.read_csv("housing.csv")
housing_df

import numpy as np
import matplotlib.pyplot as plt

housing_df['monthly income'] = housing_df['Income']/12
afford_df = housing_df[['home owner afford','rent afford']]
pa = afford_df.plot(kind='bar',figsize=(20,7))

# Set a title for the chart
plt.title("Housing affordability: % median income Per City")

# PandasPlot.set_xticklabels() can be used to set the tick labels as well
pa.set_xticklabels(housing_df["City"], rotation=45)

plt.show()

housing_df['monthly income'] = housing_df['Income']/12
afford_df = housing_df[['Median Mortgage Cost','Median gross rent','monthly income']]
pa = afford_df.plot(kind='bar',figsize=(20,7))

# Set a title for the chart
plt.title("Housing affordability: monthly income Per City")

# PandasPlot.set_xticklabels() can be used to set the tick labels as well
pa.set_xticklabels(housing_df["City"], rotation=45)

plt.show()

import seaborn as sns

afford_df = housing_df[['home owner afford','rent afford']]
plt.figure(figsize=(20,3))
sns.barplot(data=afford_df)
plt.show()

housing_df['monthly income'] = housing_df['Income']/12
afford_df = housing_df[['Median Mortgage Cost','Median gross rent','monthly income']]
af=afford_df.reset_index()

ax = af.plot(kind='scatter', y = 'monthly income', x = 'index', alpha=0.25, marker='o',
             s=af['Median Mortgage Cost']*20, label='Mortgage', figsize=(20,10), )
af.plot(kind='scatter', y = 'monthly income', x = 'index',alpha=0.25, marker = '*',
        s=af['Median gross rent']*20, label='Rent', color='r',ax=ax, figsize=(20,10))

# Set a title for the chart
plt.title("Housing affordability: monthly income Per City")

ax.set_xticklabels(housing_df["City"], rotation=45)

plt.show()

afford_df = housing_df[['Homeowner Vacancy Rate','Rental Vacancy Rate']]
pa = afford_df.plot(kind="bar",figsize=(20,5))

# Set a title for the chart
plt.title("Housing affordability: vacancy rate Per City")

# PandasPlot.set_xticklabels() can be used to set the tick labels as well
pa.set_xticklabels(housing_df["City"], rotation=45)

plt.show()

import seaborn as sns
import matplotlib.pyplot as plt

housing_df_temp_own = housing_df.copy()
housing_df_temp_rent = housing_df.copy()

housing_df_temp_own['RentOwn'] = 'own'
#print(housing_df_temp_own.shape)
housing_df_temp_rent['RentOwn'] = 'rent'
#print(housing_df_temp_rent.shape)

frames= [housing_df_temp_own, housing_df_temp_rent]
# ignore_index must be a real boolean, not the string 'True'
housing_df_rentOwn = pd.concat(frames, ignore_index=True)
#print(housing_df_rentOwn.shape)
housing_df_rentOwn

x = housing_df_rentOwn['RentOwn']== 'own'
#print(x)

housing_df_rentOwn['afford']= None

#Function to add 'afford' col
def add_afford_col(df):
    for i in range(0, df.shape[0]):
        if df.loc[i, 'RentOwn'] == 'own':
            # use .loc for assignment to avoid chained-indexing warnings
            df.loc[i, 'afford'] = df.loc[i, 'home owner afford']
            #print('true')
        else:
            df.loc[i, 'afford'] = df.loc[i, 'rent afford']
            #print('false')

add_afford_col(housing_df_rentOwn)

#Create the bar plot
plt.figure(figsize=(20,3))
sns.barplot(x="City", y="afford", hue="RentOwn", data=housing_df_rentOwn)
#plt.xticks(rotation=-20)
plt.show()

housing_df_rentOwn
```
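The `add_afford_col` loop works, but the same column can be filled in one vectorized step. A small sketch of the equivalent using `np.where` on the same DataFrame and columns (relying on the `numpy` import already in the notebook):
```
# vectorized equivalent of add_afford_col: pick the owner or renter affordability per row
housing_df_rentOwn['afford'] = np.where(
    housing_df_rentOwn['RentOwn'] == 'own',
    housing_df_rentOwn['home owner afford'],
    housing_df_rentOwn['rent afford'])

housing_df_rentOwn[['City', 'RentOwn', 'afford']].head()
```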
# Classification on MNIST with CNN This code is supporting material for the book Building Machine Learning Systems with Python by Willi Richert, Luis Pedro Coelho and Matthieu Brucher published by PACKT Publishing It is made available under the MIT License Let's try to classify the MNIST database (written digits) with a convolutional network. We will start with some hyper parameters ``` import tensorflow as tf import numpy as np n_epochs = 10 learning_rate = 0.0002 batch_size = 128 image_shape = [28,28,1] step = 1000 export_dir = "data/classifier-mnist" dim_W1 = 1024 dim_W2 = 128 dim_W3 = 64 dropout_rate = 0.1 ``` It is time to load the data and shape it as we want ``` from sklearn.datasets import fetch_mldata mnist = fetch_mldata('MNIST original') mnist.data.shape = (-1, 28, 28) mnist.data = mnist.data.astype(np.float32).reshape( [-1, 28, 28, 1]) / 255. mnist.num_examples = len(mnist.data) mnist.labels = mnist.target.astype(np.int64) ``` We should split our data between training and testing data (6 to 1) ``` from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(mnist.data, mnist.labels, test_size=(1. / 7.)) ``` The convolutional network builder will be stored in a class ``` class CNN(): def __init__( self, image_shape=[28,28,1], dim_W1=1024, dim_W2=128, dim_W3=64, classes=10 ): self.image_shape = image_shape self.dim_W1 = dim_W1 self.dim_W2 = dim_W2 self.dim_W3 = dim_W3 self.classes = classes def build_model(self): image = tf.placeholder(tf.float32, [None]+self.image_shape, name="image") Y = tf.placeholder(tf.int64, [None], name="label") training = tf.placeholder(tf.bool, name="is_training") probabilities = self.discriminate(image, training) cost = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=Y, logits=probabilities)) accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(probabilities, axis=1), Y), tf.float32), name="accuracy") return image, Y, cost, accuracy, probabilities, training def create_conv2d(self, input, filters, kernel_size, name): layer = tf.layers.conv2d( inputs=input, filters=filters, kernel_size=kernel_size, activation=tf.nn.leaky_relu, name="Conv2d_" + name, padding="same") return layer def create_maxpool(self, input, name): layer = tf.layers.max_pooling2d( inputs=input, pool_size=[2,2], strides=2, name="MaxPool_" + name) return layer def create_dropout(self, input, name, is_training): layer = tf.layers.dropout( inputs=input, rate=dropout_rate, name="DropOut_" + name, training=is_training) return layer def create_dense(self, input, units, name): layer = tf.layers.dense( inputs=input, units=units, name="Dense" + name, ) layer = tf.layers.batch_normalization( inputs=layer, momentum=0, epsilon=1e-8, training=True, name="BatchNorm_" + name, ) layer = tf.nn.leaky_relu(layer, name="LeakyRELU_" + name) return layer def discriminate(self, image, training): h1 = self.create_conv2d(image, self.dim_W3, 5, "Layer1") h1 = self.create_maxpool(h1, "Layer1") h2 = self.create_conv2d(h1, self.dim_W2, 5, "Layer2") h2 = self.create_maxpool(h2, "Layer2") h2 = tf.reshape(h2, (-1, self.dim_W2 * 7 * 7)) h3 = self.create_dense(h2, self.dim_W1, "Layer3") h3 = self.create_dropout(h3, "Layer3", training) h4 = self.create_dense(h3, self.classes, "Layer4") return h4 ``` And now we can instantiate it and create our optimizer. We take the opportunity to create our two objects to save the Tensorflow graph, Saver and builder. 
``` tf.reset_default_graph() cnn_model = CNN( image_shape=image_shape, dim_W1=dim_W1, dim_W2=dim_W2, dim_W3=dim_W3, ) image_tf, Y_tf, cost_tf, accuracy_tf, output_tf, training_tf = cnn_model.build_model() saver = tf.train.Saver(max_to_keep=10) train_step = tf.train.AdamOptimizer(learning_rate, beta1=0.5).minimize(cost_tf) builder = tf.saved_model.builder.SavedModelBuilder(export_dir) ``` This is a helper function that computes the global loss for the training and the testing data. It will be used for each epoch, but in real life, you should "trust" the partial loss instead, as this value is very costly to compute. ``` accuracy_vec = [] def show_train(sess, epoch): traccuracy = [] teaccuracy = [] for j in range(0, len(X_train), batch_size): Xs = X_train[j:j+batch_size] Ys = y_train[j:j+batch_size] traccuracy.append(sess.run(accuracy_tf, feed_dict={ training_tf: False, Y_tf: Ys, image_tf: Xs })) for j in range(0, len(X_test), batch_size): Xs = X_test[j:j+batch_size] Ys = y_test[j:j+batch_size] teaccuracy.append(sess.run(accuracy_tf, feed_dict={ training_tf: False, Y_tf: Ys, image_tf: Xs, })) train_accuracy = np.mean(traccuracy) test_accuracy = np.mean(teaccuracy) accuracy_vec.append((train_accuracy, test_accuracy)) result = sess.run(output_tf, feed_dict={ training_tf: False, image_tf: X_test[:10] }) print('Epoch #%i\n train accuracy = %f\n test accuracy = %f' % (epoch, train_accuracy, test_accuracy)) print('Result for the 10 first training images: %s' % np.argmax(result, axis=1)) print('Reference for the 10 first training images: %s' % y_test[:10]) ``` Let's train our model and save it. ``` with tf.Session() as sess: sess.run(tf.global_variables_initializer()) show_train(sess, -1) for epoch in range(n_epochs): permut = np.random.permutation(len(X_train)) print("epoch: %i" % epoch) for j in range(0, len(X_train), batch_size): if j % step == 0: print(" batch: %i" % j) batch = permut[j:j+batch_size] Xs = X_train[batch] Ys = y_train[batch] sess.run(train_step, feed_dict={ training_tf: True, Y_tf: Ys, image_tf: Xs }) if j % step == 0: temp_cost, temp_prec = sess.run([cost_tf, accuracy_tf], feed_dict={ training_tf: False, Y_tf: Ys, image_tf: Xs }) print(" cost: %f\n prec: %f" % (temp_cost, temp_prec)) saver.save(sess, './classifier', global_step=epoch) show_train(sess, epoch) saver.save(sess, './classifier-final') builder.add_meta_graph_and_variables(sess, [tf.saved_model.tag_constants.TRAINING]) builder.save() ``` We can check the global training and testing cost, as we created a function to compute it. ``` from matplotlib import pyplot as plt %matplotlib inline accuracy = np.array(accuracy_vec) plt.semilogy(1 - accuracy[:,0], 'k-', label="train") plt.semilogy(1 - accuracy[:,1], 'r-', label="test") plt.title('Classification error per Epoch') plt.xlabel('Epoch') plt.ylabel('Classification error') plt.legend() ``` We now check that Saver allowed to properly save and restore the network. ``` tf.reset_default_graph() new_saver = tf.train.import_meta_graph("classifier-final.meta") with tf.Session() as sess: new_saver.restore(sess, tf.train.latest_checkpoint('./')) graph = tf.get_default_graph() training_tf = graph.get_tensor_by_name('is_training:0') Y_tf = graph.get_tensor_by_name('label:0') image_tf = graph.get_tensor_by_name('image:0') accuracy_tf = graph.get_tensor_by_name('accuracy:0') output_tf = graph.get_tensor_by_name('LeakyRELU_Layer4/Maximum:0') show_train(sess, 0) ``` And the same for builder. 
``` tf.reset_default_graph() with tf.Session() as sess: tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.TRAINING], export_dir) graph = tf.get_default_graph() training_tf = graph.get_tensor_by_name('is_training:0') Y_tf = graph.get_tensor_by_name('label:0') image_tf = graph.get_tensor_by_name('image:0') accuracy_tf = graph.get_tensor_by_name('accuracy:0') output_tf = graph.get_tensor_by_name('LeakyRELU_Layer4/Maximum:0') show_train(sess, 0) ``` # Test prediction with LTSMs LSTMs are good tools to predict new values in a sequence. Can they predict text from Aesop's fables? ``` text="""A slave named Androcles once escaped from his master and fled to the forest. As he was wandering about there he came upon a Lion lying down moaning and groaning. At first he turned to flee, but finding that the Lion did not pursue him, he turned back and went up to him. As he came near, the Lion put out his paw, which was all swollen and bleeding, and Androcles found that a huge thorn had got into it, and was causing all the pain. He pulled out the thorn and bound up the paw of the Lion, who was soon able to rise and lick the hand of Androcles like a dog. Then the Lion took Androcles to his cave, and every day used to bring him meat from which to live. But shortly afterwards both Androcles and the Lion were captured, and the slave was sentenced to be thrown to the Lion, after the latter had been kept without food for several days. The Emperor and all his Court came to see the spectacle, and Androcles was led out into the middle of the arena. Soon the Lion was let loose from his den, and rushed bounding and roaring towards his victim. But as soon as he came near to Androcles he recognised his friend, and fawned upon him, and licked his hands like a friendly dog. The Emperor, surprised at this, summoned Androcles to him, who told him the whole story. Whereupon the slave was pardoned and freed, and the Lion let loose to his native forest.""" ``` We know remove commas and points and then split the text by words. ``` training_data = text.lower().replace(",", "").replace(".", "").split() ``` Python itsef has a module to count words: ``` import collections def build_dataset(words): count = collections.Counter(words).most_common() dictionary = dict() for word, _ in count: dictionary[word] = len(dictionary) reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys())) return dictionary, reverse_dictionary dictionary, reverse_dictionary = build_dataset(training_data) training_data_args = [dictionary[word] for word in training_data] ``` Our RNN will be a simple LSTM layer and then a dense layer to specify the word it selected. The input will be split so that we get several elements each time (here 3 words). ``` import tensorflow as tf from tensorflow.contrib import rnn def RNN(x): # Generate a n_input-element sequence of inputs # (eg. [had] [a] [general] -> [20] [6] [33]) x = tf.split(x,n_input,1) # 1-layer LSTM with n_hidden units. 
rnn_cell = rnn.BasicLSTMCell(n_hidden) # generate prediction outputs, states = rnn.static_rnn(rnn_cell, x, dtype=tf.float32) # there are n_input outputs but we only want the last output return tf.layers.dense(inputs = outputs[-1], units = vocab_size) ``` Let's add our traditional hyper parameters: ``` import random import numpy as np tf.reset_default_graph() vocab_size = len(dictionary) # Parameters learning_rate = 0.001 training_iters = 50000 display_step = 1000 # number of inputs (past words that we use) n_input = 3 # number of units in the RNN cell n_hidden = 512 # tf Graph input x = tf.placeholder(tf.float32, [None, n_input]) y = tf.placeholder(tf.int64, [None]) ``` And now the functions to optimize and our prediction functions as well. As for the MNIST CNN, we use sparse_softmax_cross_entropy_with_logits because we only want one word. ``` pred = RNN(x) cost = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=pred, labels=y)) optimizer = tf.train.RMSPropOptimizer(learning_rate=learning_rate).minimize(cost) correct_pred = tf.equal(tf.argmax(pred,1), y) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) ``` This train loop is a little bit different than the previous ones, as it does one sample at a time, and then we average the loss and the accuracy before we display it. ``` with tf.Session() as session: session.run(tf.global_variables_initializer()) step = 0 offset = random.randint(0,n_input+1) end_offset = n_input + 1 acc_total = 0 loss_total = 0 while step < training_iters: # Batch with just one sample. Add some randomness on selection process. if offset > (len(training_data)-end_offset): offset = random.randint(0, n_input+1) symbols_in_keys = [ [training_data_args[i]] for i in range(offset, offset+n_input) ] symbols_in_keys = np.reshape(np.array(symbols_in_keys), [1, n_input]) symbols_out_onehot = [training_data_args[offset+n_input]] _, acc, loss, onehot_pred = session.run([optimizer, accuracy, cost, pred], \ feed_dict={x: symbols_in_keys, y: symbols_out_onehot}) loss_total += loss acc_total += acc if (step+1) % display_step == 0: print("Iter= %i , Average Loss= %.6f, Average Accuracy= %.2f%%" % (step+1, loss_total/display_step, 100*acc_total/display_step)) acc_total = 0 loss_total = 0 symbols_in = [training_data[i] for i in range(offset, offset + n_input)] symbols_out = training_data[offset + n_input] symbols_out_pred = reverse_dictionary[np.argmax(onehot_pred, axis=1)[0]] print("%s - [%s] vs [%s]" % (symbols_in, symbols_out, symbols_out_pred)) step += 1 offset += (n_input+1) ``` # Classification with LSTM We start this time with hyperparameters because the way we reshape our images depends on our network archtecture. ``` import tensorflow as tf from tensorflow.contrib import rnn tf.reset_default_graph() #rows of 28 pixels n_input=28 #unrolled through 28 time steps (our images are (28,28)) time_steps=28 #hidden LSTM units num_units=128 #learning rate for adam learning_rate=0.001 n_classes=10 batch_size=128 n_epochs = 10 step = 100 ``` Let's go back to our data: ``` import os import numpy as np from sklearn.datasets import fetch_mldata from sklearn.model_selection import train_test_split mnist = fetch_mldata('MNIST original') mnist.data = mnist.data.astype(np.float32).reshape( [-1, time_steps, n_input]) / 255. mnist.num_examples = len(mnist.data) mnist.labels = mnist.target.astype(np.int8) X_train, X_test, y_train, y_test = train_test_split(mnist.data, mnist.labels, test_size=(1. 
/ 7.)) ``` This is the network we will use (we don't store it in a class this time) ``` x=tf.placeholder(tf.float32,[None,time_steps,n_input]) y=tf.placeholder(tf.int64,[None]) #processing the input tensor from [batch_size,n_steps,n_input] to "time_steps" number of [batch_size,n_input] tensors input=tf.unstack(x ,time_steps,1) lstm_layer=rnn.BasicLSTMCell(num_units,forget_bias=True) outputs,_=rnn.static_rnn(lstm_layer,input,dtype=tf.float32) prediction=tf.layers.dense(inputs=outputs[-1], units = n_classes) loss=tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=prediction,labels=y)) opt=tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss) correct_prediction=tf.equal(tf.argmax(prediction,1),y) accuracy=tf.reduce_mean(tf.cast(correct_prediction,tf.float32)) ``` Here we go for the training: ``` with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for epoch in range(n_epochs): permut = np.random.permutation(len(X_train)) print("epoch: %i" % epoch) for j in range(0, len(X_train), batch_size): if j % step == 0: print(" batch: %i" % j) batch = permut[j:j+batch_size] Xs = X_train[batch] Ys = y_train[batch] sess.run(opt, feed_dict={x: Xs, y: Ys}) if j % step == 0: acc=sess.run(accuracy,feed_dict={x:Xs,y:Ys}) los=sess.run(loss,feed_dict={x:Xs,y:Ys}) print(" accuracy %f" % acc) print(" loss %f" % los) print("") ```
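The notebooks above target the TensorFlow 1.x API (`tf.placeholder`, sessions, `tf.contrib.rnn`) and load MNIST through `fetch_mldata`, which has been removed from recent scikit-learn releases. As a rough orientation only, not part of the book's code, here is a minimal sketch of the same LSTM-over-rows classifier written against `tf.keras`, assuming TensorFlow 2.x and a recent scikit-learn are installed:
```
import numpy as np
import tensorflow as tf
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split

# fetch_openml replaces the removed fetch_mldata helper
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X.astype(np.float32).reshape(-1, 28, 28) / 255.0   # 28 time steps of 28 pixels
y = y.astype(np.int64)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1.0 / 7.0)

# One LSTM layer over the 28 rows, then a dense logits head, mirroring the graph above
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(128, input_shape=(28, 28)),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(X_train, y_train, batch_size=128, epochs=10, validation_data=(X_test, y_test))
```
The hidden size, batch size, and learning rate mirror the hyperparameters used above; everything the manual training loop did by hand is handled here by `model.fit`.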
___
<a href='http://www.pieriandata.com'><img src='../Pierian_Data_Logo.png'/></a>
___
<center><em>Copyright Pierian Data</em></center>
<center><em>For more information, visit us at <a href='http://www.pieriandata.com'>www.pieriandata.com</a></em></center>

# Time Series with Pandas Project Exercise

For this exercise, answer the questions below given the dataset: https://fred.stlouisfed.org/series/UMTMVS

This dataset is the Value of Manufacturers' Shipments for All Manufacturing Industries.

**Import any necessary libraries.**
```
# CODE HERE
import numpy as np
import pandas as pd
%matplotlib inline
```
**Read in the data UMTMVS.csv file from the Data folder**
```
# CODE HERE
df = pd.read_csv('../Data/UMTMVS.csv')
```
**Check the head of the data**
```
# CODE HERE
df.head()
```
**Set the DATE column as the index.**
```
# CODE HERE
df = df.set_index('DATE')
df.head()
```
**Check the data type of the index.**
```
# CODE HERE
df.index
```
**Convert the index to be a datetime index. Note, there are many, many correct ways to do this!**
```
# CODE HERE
df.index = pd.to_datetime(df.index)
df.index
```
**Plot out the data, choose a reasonable figure size**
```
# CODE HERE
df.plot(figsize=(14,8))
```
**What was the percent increase in value from Jan 2009 to Jan 2019?**
```
# CODE HERE
100 * (df.loc['2019-01-01'] - df.loc['2009-01-01']) / df.loc['2009-01-01']
```
**What was the percent decrease from Jan 2008 to Jan 2009?**
```
# CODE HERE
100 * (df.loc['2009-01-01'] - df.loc['2008-01-01']) / df.loc['2008-01-01']
```
**What is the month with the least value after 2005?** [HINT](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.idxmin.html)
```
# CODE HERE
df.loc['2005-01-01':].idxmin()
```
**What 6 months have the highest value?**
```
# CODE HERE
df.sort_values(by='UMTMVS', ascending=False).head(6)
```
**How many millions of dollars in value were lost in 2008? (Another way of posing this question is what was the value difference between Jan 2008 and Jan 2009)**
```
# CODE HERE
df.loc['2008-01-01'] - df.loc['2009-01-01']
```
**Create a bar plot showing the average value in millions of dollars per year**
```
# CODE HERE
df.resample('Y').mean().plot.bar(figsize=(15,8))
```
**What year had the biggest increase in mean value from the previous year's mean value? (Lots of ways to get this answer!)** [HINT for a useful method](https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.idxmax.html)
```
# CODE HERE
yearly_data = df.resample('Y').mean()
yearly_data_shift = yearly_data.shift(1)
yearly_data.head()
change = yearly_data - yearly_data_shift
change['UMTMVS'].idxmax()
```
**Plot out the yearly rolling mean on top of the original data. Recall that this is monthly data and there are 12 months in a year!**
```
# CODE HERE
df['Yearly Mean'] = df['UMTMVS'].rolling(window=12).mean()
df[['UMTMVS','Yearly Mean']].plot(figsize=(12,5)).autoscale(axis='x', tight=True);
```
**BONUS QUESTION (HARD).**

**Some month in 2008 the value peaked for that year. How many months did it take to surpass that 2008 peak? (Since it crashed immediately after this peak.) There are many ways to get this answer. NOTE: I get 70 months as my answer; you may get 69 or 68, depending on whether or not you count the start and end months. Refer to the video solutions for a full explanation of this.**
```
# CODE HERE
df = pd.read_csv('../Data/UMTMVS.csv', index_col='DATE', parse_dates=True)
df.head()
df2008 = df.loc['2008-01-01':'2009-01-01']
df2008.idxmax()
df2008.max()
df_post_peak = df.loc['2008-06-01':]
df_post_peak[df_post_peak >= 510081].dropna()
len(df.loc['2008-06-01':'2014-03-01'])
```
# GREAT JOB!
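The bonus solution above hard-codes the peak month (`'2008-06-01'`) and peak value (`510081`) after reading them off the intermediate outputs. A fully programmatic variant, one of the "many ways" the question mentions rather than the official solution, is sketched below; it assumes the same `UMTMVS` column and datetime index set up earlier.
```
import pandas as pd

df = pd.read_csv('../Data/UMTMVS.csv', index_col='DATE', parse_dates=True)

peak_date = df.loc['2008', 'UMTMVS'].idxmax()     # month of the 2008 peak
peak_value = df.loc[peak_date, 'UMTMVS']

after_peak = df.loc[peak_date:, 'UMTMVS']
# First later month at or above the 2008 peak (position 0 is the peak month itself)
recovery_date = after_peak[after_peak >= peak_value].index[1]

months_to_recover = (recovery_date.year - peak_date.year) * 12 + (recovery_date.month - peak_date.month)
months_to_recover
```
Counting this way gives the number of whole months between the peak and the first month at or above it; whether you then report 68, 69, or 70 depends only on how you treat the endpoints.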
``` import numpy as np import pandas as pd import os from glob import glob from pprint import pprint import json import seaborn as sns import matplotlib.pyplot as plt from matplotlib.collections import LineCollection from matplotlib.colors import ListedColormap, BoundaryNorm import matplotlib.pyplot as plt import cellcycle.PlottingTools as plottingTools from cellcycle.ParameterSet import ParameterSet import cellcycle.DataStorage as dataStorage import cellcycle.DataAnalysis as dataAnalysis import cellcycle.MakeDataframe as makeDataframe from cellcycle import mainClass parameter_set = 'muntants_final_parameter_set' file_path_input_params_json = '../../input_params.json' input_param_dict = mainClass.extract_variables_from_input_params_json(file_path_input_params_json) root_path = input_param_dict["DATA_FOLDER_PATH"] simulation_location = 'SI/S16_model_validation/'+parameter_set+'/Olesen_paper' file_path = os.path.join(root_path, simulation_location) print('file_path', file_path) parameter_path = os.path.join(file_path, 'parameter_set.csv') print('parameter_path', parameter_path) pinkish_red = (247 / 255, 109 / 255, 109 / 255) green = (0 / 255, 133 / 255, 86 / 255) dark_blue = (36 / 255, 49 / 255, 94 / 255) light_blue = (168 / 255, 209 / 255, 231 / 255) blue = (55 / 255, 71 / 255, 133 / 255) yellow = (247 / 255, 233 / 255, 160 / 255) data_frame = makeDataframe.make_dataframe(file_path) data_frame = makeDataframe.add_average_values_to_df(data_frame) data_frame = makeDataframe.add_theoretical_init_reg_concentrations_to_df(data_frame) def add_label_mutation(df_row): if df_row.destruction_rate_datA == 0 and df_row.production_rate_dars2 == 0 and df_row.production_rate_dars1 == 0: return r'$\Delta D1 \Delta D2 \Delta datA$' elif df_row.production_rate_dars1 == 0 and df_row.production_rate_dars2 == 0: return r'$\Delta D1 \Delta D2$' elif df_row.destruction_rate_datA == 0 and df_row.production_rate_dars1 == 0: return r'$\Delta D1 \Delta datA$' elif df_row.destruction_rate_datA == 0 and df_row.production_rate_dars2 == 0: return r'$\Delta D2 \Delta datA$' elif df_row.destruction_rate_datA == 0: return r'$\Delta datA$' elif df_row.production_rate_dars1 == 0: return r'$\Delta D1$' elif df_row.production_rate_dars2 == 0: return r'$\Delta D2$' # elif df_row.destruction_rate_rida ==0: # return r'$\Delta Hda$' # elif df_row.production_rate_lipids ==0: # return r'$\Delta lipids$' # elif df_row.period_blocked ==0: # return r'$\Delta SeqA$' # elif df_row.n_c_max_0 ==0: # return r'no titration sites' # elif df_row.n_c_max_0 ==1200: # return r'more titration sites $n_{\rm s}=1200$' # elif df_row.n_c_max_0 ==600: # return r'more titration sites $n_{\rm s}=600$' # elif df_row.n_c_max_0 ==100: # return r'less titration sites $n_{\rm s}=100$' else: return 'WT' def return_init_volume_by_key(key, data_frame): row = data_frame.loc[data_frame['legend_mutant']==key] return row['v_init_per_n_ori'] data_frame.loc[:, 'legend_mutant'] = data_frame.apply(lambda row: add_label_mutation(row), axis = 1) data_frame['legend_mutant'] ``` # Slow growth regime ``` data_frame['doubling_rate'] sns.set(style="ticks") sns.set_context("poster") data_frame_slow = data_frame.loc[data_frame['doubling_rate'] == 0.5] y_axes_experiment = np.array([1, 0.67, 1.25, 1.25, 1.67, 0.71, 0.83, 1.25 ]) # 0.67, x_axes = np.array([0, 1, 2, 3, 4, 5, 6, 7]) x_axes_labels = [r'WT', r'$\Delta datA$', r'$\Delta D1$', r'$\Delta D2$', r'$\Delta D1 \Delta D2$', r'$\Delta D1 \Delta datA$', r'$\Delta D2 \Delta datA$', r'$\Delta D1 \Delta D2 \Delta datA$', ] 
v_init_WT_slow = return_init_volume_by_key('WT', data_frame_slow).iloc[0] print(v_init_WT_slow) y_axes_simulations_relative = np.array([return_init_volume_by_key(item, data_frame_slow).iloc[0]/ v_init_WT_slow for item in x_axes_labels]) print('y_labels', y_axes_simulations_relative) y_label = r'$\delta v^\ast = v^\ast_{\rm \Delta x} \, / \, v^\ast_{\rm WT}$' fig, ax = plt.subplots(figsize=(10,5)) ax.plot(x_axes, y_axes_simulations_relative, 'v', label='LDDR+titration model', color=blue) ax.plot(x_axes, y_axes_experiment, 'o', label=r'Frimodt-M$\o$ller et al. 2015', color=pinkish_red) print(x_axes, y_axes_experiment) ax.set_xticks(x_axes) ax.set_xticklabels(x_axes_labels) ax.set_ylabel(y_label) ax.axhline(1, color='grey') ax.set_ylim([0.5,2.5]) ax.legend() plt.savefig(file_path + '/mutations_slow_growth.pdf', format='pdf', bbox_inches='tight') ``` # Fast growth regime ``` data_frame_fast = data_frame.loc[data_frame['doubling_rate'] == 2] data_frame_fast sns.set(style="ticks") sns.set_context("poster") data_frame_fast = data_frame.loc[data_frame['doubling_rate'] == 2] data_frame_fast y_axes_experiment = np.array([1, 0.83, 1.05, 1.05, 1.11, 0.91, 0.91, 1.11 ]) # 0.67, x_axes = np.array([0, 1, 2, 3, 4, 5, 6, 7]) x_axes_labels = [r'WT', r'$\Delta datA$', r'$\Delta D1$', r'$\Delta D2$', r'$\Delta D1 \Delta D2$', r'$\Delta D1 \Delta datA$', r'$\Delta D2 \Delta datA$', r'$\Delta D1 \Delta D2 \Delta datA$', ] v_init_WT_fast = return_init_volume_by_key('WT', data_frame_fast).iloc[0] print(v_init_WT_fast) y_axes_simulations_relative = np.array([return_init_volume_by_key(item, data_frame_fast).iloc[0]/ v_init_WT_fast for item in x_axes_labels]) print('y_labels', y_axes_simulations_relative) y_label = r'$\delta v^\ast = v^\ast_{\rm \Delta x} \, / \, v^\ast_{\rm WT}$' fig, ax = plt.subplots(figsize=(10,5)) ax.plot(x_axes, y_axes_simulations_relative, 'v', label='LDDR+titration model', color=blue) ax.plot(x_axes, y_axes_experiment, 'o', label=r'Frimodt-M$\o$ller et al. 2015', color=pinkish_red) print(x_axes, y_axes_experiment) ax.set_xticks(x_axes) ax.set_xticklabels(x_axes_labels) ax.set_ylabel(y_label) ax.axhline(1, color='grey') ax.set_ylim([0.5,2.5]) ax.legend() plt.savefig(file_path + '/mutations_fast_growth.pdf', format='pdf', bbox_inches='tight') ```
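The two figures above compare the simulated and experimental relative initiation volumes by eye. As an optional addition, assuming the `x_axes_labels`, `y_axes_simulations_relative`, and `y_axes_experiment` arrays defined in the cells above (for whichever growth condition was evaluated last), the agreement can also be printed per mutant:
```
import numpy as np

# Relative deviation of the model from the experimental value, per mutant
rel_dev = (y_axes_simulations_relative - y_axes_experiment) / y_axes_experiment

for label, sim, exp, dev in zip(x_axes_labels, y_axes_simulations_relative,
                                y_axes_experiment, rel_dev):
    print(f"{label:35s} model={sim:.2f} experiment={exp:.2f} deviation={100 * dev:+.1f}%")

print(f"mean absolute deviation: {100 * np.mean(np.abs(rel_dev)):.1f}%")
```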
``` from Bio import SeqIO import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import tqdm import glob import re import requests import io import torch from argparse import Namespace from esm.constants import proteinseq_toks import math import torch.nn as nn import torch.nn.functional as F from esm.modules import TransformerLayer, PositionalEmbedding # noqa from esm.model import ProteinBertModel import esm import time import tape from tape import ProteinBertModel, TAPETokenizer,UniRepModel pdt_embed = np.load("../../out/201120/pdt_motor_t34.npy") pdt_motor = pd.read_csv("../../data/thermo/pdt_motor.csv") print(pdt_embed.shape) print(pdt_motor.shape) pfamA_target_name = ["PF00349","PF00022","PF03727","PF06723",\ "PF14450","PF03953","PF12327","PF00091","PF10644",\ "PF13809","PF14881","PF00063","PF00225","PF03028"] pdt_motor_target = pdt_motor.loc[pdt_motor["pfam_id"].isin(pfamA_target_name),:] pdt_embed_target = pdt_embed[pdt_motor["pfam_id"].isin(pfamA_target_name),:] print(pdt_embed_target.shape) print(pdt_motor_target.shape) print(sum(pdt_motor_target["is_thermophilic"])) pdt_motor.groupby(["clan","is_thermophilic"]).count() pdt_motor_target.groupby(["clan","is_thermophilic"]).count() pdt_motor.loc[pdt_motor["clan"]=="p_loop_gtpase",:].groupby(["pfam_id","is_thermophilic"]).count() ``` ## Try create a balanced training set by sampling the same number of min(thermophilic, non-thermophilic) of a family. For now do no sample from a family is it does not contain one of the classes ``` thermo_sampled = pd.DataFrame() for pfam_id in pdt_motor["pfam_id"].unique(): curr_dat = pdt_motor.loc[pdt_motor["pfam_id"] == pfam_id,:] is_thermo = curr_dat.loc[curr_dat["is_thermophilic"]==1,:] not_thermo = curr_dat.loc[curr_dat["is_thermophilic"]==0,:] if (not_thermo.shape[0]>=is_thermo.shape[0]): print(is_thermo.shape[0]) #sample #is_thermo.shape[0] entries from not_thermo uniformly thermo_sampled = thermo_sampled.append(is_thermo) tmp = not_thermo.sample(n = is_thermo.shape[0]) else: #sample #not_thermo.shape[0] entries from is_thermo uniformly print(not_thermo.shape[0]) thermo_sampled = thermo_sampled.append(not_thermo) tmp = is_thermo.sample(n = not_thermo.shape[0]) thermo_sampled = thermo_sampled.append(tmp) thermo_sampled.groupby(["clan","is_thermophilic"]).count() thermo_sampled_embed = pdt_embed[thermo_sampled.index,:] ``` # Normalize the hidden dimensions ``` from sklearn.preprocessing import StandardScaler scaler = StandardScaler() scaler.fit(thermo_sampled_embed) thermo_sampled_embed_scaled = scaler.transform(thermo_sampled_embed) u, s, v = np.linalg.svd(thermo_sampled_embed_scaled.T@thermo_sampled_embed_scaled) s[0:10] s_ratio = np.cumsum(s)/sum(s) s_ratio[270] a = thermo_sampled_embed_scaled.T@thermo_sampled_embed_scaled a.shape sigma = np.cov(thermo_sampled_embed_scaled.T) sigma.shape u, s, v = np.linalg.svd(sigma) s[0:10] s_ratio = np.cumsum(s)/sum(s) s_ratio[125] from sklearn.decomposition import PCA pca = PCA(n_components=125) thermo_sampled_embed_scaled_reduced = pca.fit_transform(thermo_sampled_embed_scaled) np.cumsum(pca.explained_variance_ratio_) X = thermo_sampled_embed_scaled_reduced y = thermo_sampled["is_thermophilic"] print(X.shape) print(y.shape) ``` ## Classifying thermophilic using logistic regression with cross validation ``` from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) from sklearn.linear_model import LogisticRegression from 
sklearn.linear_model import LogisticRegressionCV clf = LogisticRegressionCV(cv=5, random_state=0).fit(X_train, y_train) clf.score(X_test, y_test) clf.score(X_train, y_train) ``` ## Classifying thermophilic using softSVM ``` from sklearn.svm import LinearSVC clf = LinearSVC(random_state=0) clf.fit(X_train, y_train) clf.score(X_train, y_train) clf.score(X_test, y_test) ``` ## Classifying thermophilic using kNN classifier ``` from sklearn.neighbors import KNeighborsClassifier neigh = KNeighborsClassifier(n_neighbors=5,weights = "uniform") neigh.fit(X_train, y_train) neigh.score(X_train, y_train) neigh.score(X_test, y_test) neigh = KNeighborsClassifier(n_neighbors=5,weights = "distance") neigh.fit(X_train, y_train) print(neigh.score(X_train, y_train)) print(neigh.score(X_test, y_test)) neigh = KNeighborsClassifier(n_neighbors=9,weights = "distance") neigh.fit(X_train, y_train) print(neigh.score(X_train, y_train)) print(neigh.score(X_test, y_test)) from torch.utils.data import Dataset, DataLoader class ThermoDataset(Dataset): """Face Landmarks dataset.""" def __init__(self, dat,label): """ Args: dat (ndarray): ndarray with the X data label: an pdSeries with the 0/1 label of the X data """ self.X = dat self.y = label def __len__(self): return self.X.shape[0] def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() embed = self.X[idx,:] is_thermo = self.y.iloc[idx] sample = {'X': embed, 'y': is_thermo} return sample X = thermo_sampled_embed_scaled_reduced y = thermo_sampled["is_thermophilic"] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) thermo_dataset_train = ThermoDataset(X_train,y_train) train_loader = DataLoader(thermo_dataset_train, batch_size=100, shuffle=True, num_workers=0) for i_batch, sample_batched in enumerate(train_loader): print(i_batch, sample_batched['X'].size(), sample_batched['y'].size()) if i_batch > 3: break import torch.nn as nn import torch.nn.functional as F class ThermoClassifier_75(nn.Module): def __init__(self): super(ThermoClassifier_75, self).__init__() self.fc1 = nn.Linear(125, 80) self.fc2 = nn.Linear(80, 50) self.fc3 = nn.Linear(50, 30) self.fc4 = nn.Linear(30, 10) self.fc5 = nn.Linear(10, 2) def forward(self, x): x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) x = F.relu(self.fc4(x)) x = self.fc5(x) return x import torch.optim as optim learning_rate = 0.001 criterion = nn.CrossEntropyLoss() device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model = ThermoClassifier_75().to(device) optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) device # Train the model num_epochs = 200 total_step = len(train_loader) for epoch in range(num_epochs): for i_batch, sample_batched in enumerate(train_loader): X = sample_batched['X'] y = sample_batched['y'] # Move tensors to the configured device # print(X) embed = X.to(device) labels = y.to(device) # Forward pass outputs = model(embed) # print(outputs.shape) loss = criterion(outputs, labels) # Backprpagation and optimization optimizer.zero_grad() loss.backward() optimizer.step() if (i_batch+1) % 200 == 0: print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}' .format(epoch+1, num_epochs, i_batch+1, total_step, loss.item())) thermo_dataset_test = ThermoDataset(X_test,y_test) test_loader = DataLoader(thermo_dataset_test, batch_size=100, shuffle=True, num_workers=0) # Test the model # In the test phase, don't need to compute gradients (for memory efficiency) with torch.no_grad(): correct = 0 total = 0 for i_batch, 
sample_batched in enumerate(test_loader): X = sample_batched['X'].to(device) y = sample_batched['y'].to(device) outputs = model(X) _, predicted = torch.max(outputs.data, 1) # print(predicted) # print(y.size(0)) total += y.size(0) correct += (predicted == y).sum().item() print('Accuracy of the network on the test for model_75 : {} %'.format(100 * correct / total)) # Save the model checkpoint torch.save(model.state_dict(), 'model_75.ckpt') ``` ## model not using reduced data ``` X = thermo_sampled_embed_scaled y = thermo_sampled["is_thermophilic"] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) thermo_dataset_train = ThermoDataset(X_train,y_train) train_loader = DataLoader(thermo_dataset_train, batch_size=100, shuffle=True, num_workers=0) for i_batch, sample_batched in enumerate(train_loader): print(i_batch, sample_batched['X'].size(), sample_batched['y'].size()) if i_batch > 3: break import torch.nn as nn import torch.nn.functional as F class ThermoClassifier(nn.Module): def __init__(self): super(ThermoClassifier, self).__init__() self.fc1 = nn.Linear(1280, 600) self.fc2 = nn.Linear(600, 300) self.fc3 = nn.Linear(300, 150) self.fc4 = nn.Linear(150, 75) self.fc5 = nn.Linear(75, 20) self.fc6 = nn.Linear(20, 2) def forward(self, x): x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) x = F.relu(self.fc4(x)) x = F.relu(self.fc5(x)) x = self.fc6(x) return x import torch.optim as optim learning_rate = 0.001 criterion = nn.CrossEntropyLoss() device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model = ThermoClassifier().to(device) optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) device # Train the model num_epochs = 200 total_step = len(train_loader) for epoch in range(num_epochs): for i_batch, sample_batched in enumerate(train_loader): X = sample_batched['X'] y = sample_batched['y'] # Move tensors to the configured device # print(X) embed = X.to(device) labels = y.to(device) # Forward pass outputs = model(embed) # print(outputs.shape) loss = criterion(outputs, labels) # Backprpagation and optimization optimizer.zero_grad() loss.backward() optimizer.step() if (i_batch+1) % 200 == 0: print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}' .format(epoch+1, num_epochs, i_batch+1, total_step, loss.item())) thermo_dataset_test = ThermoDataset(X_test,y_test) test_loader = DataLoader(thermo_dataset_test, batch_size=100, shuffle=True, num_workers=0) # Test the model # In the test phase, don't need to compute gradients (for memory efficiency) with torch.no_grad(): correct = 0 total = 0 for i_batch, sample_batched in enumerate(test_loader): X = sample_batched['X'].to(device) y = sample_batched['y'].to(device) outputs = model(X) _, predicted = torch.max(outputs.data, 1) # print(predicted) # print(y.size(0)) total += y.size(0) correct += (predicted == y).sum().item() print('Accuracy of the network on the test for model_768 : {} %'.format(100 * correct / total)) # Save the model checkpoint torch.save(model.state_dict(), 'model_768.ckpt') ``` ## Generate Labelled prediction for model_768.ckpt for both training and testing dataset ``` model.load_state_dict(torch.load('model_768.ckpt')) model.eval() X = thermo_sampled_embed_scaled y = thermo_sampled["is_thermophilic"] thermo_sampled["is_thermophilic"].iloc[1] # Test the model # In the test phase, don't need to compute gradients (for memory efficiency) with torch.no_grad(): correct = 0 total = 0 pred_y = [] for i in range(thermo_sampled_embed_scaled.shape[0]): X = 
torch.tensor(thermo_sampled_embed_scaled[i,:]).reshape(1,-1).to(device) y = thermo_sampled["is_thermophilic"].iloc[i] outputs = model(X) _, predicted = torch.max(outputs.data, 1) # print(predicted.item()) total += 1 correct += int(predicted.item() == y) pred_y.append(predicted.item()) # break print('Accuracy of the network on the full dataset for model_768 : {} %'.format(100 * correct / total)) len(pred_y) thermo_sampled_embed_scaled.shape thermo_sampled.shape thermo_sampled.head() thermo_sampled["pred_y"] = pred_y thermo_sampled_embed_scaled_reduced.shape thermo_sampled.to_csv("../../data/thermo/for_vis/thermo_sampled.csv") np.save("../../data/thermo/for_vis/thermo_sampled_embed_scaled.npy", thermo_sampled_embed_scaled) np.save("../../data/thermo/for_vis/thermo_sampled_embed_scaled_reduced.npy", thermo_sampled_embed_scaled_reduced) ```
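Accuracy alone hides how the errors split across the two classes. As a quick follow-up, here is a minimal sketch (assuming `thermo_sampled`, with its `is_thermophilic` ground truth and the `pred_y` column added above, is still in memory) that prints a confusion matrix and per-class metrics. Note that these predictions cover both the training and test split, so the numbers are optimistic.

```
# Sketch: per-class metrics for the saved predictions.
# Caveat: pred_y was generated over the full dataset, so train and test
# samples are mixed together -- treat these numbers as optimistic.
from sklearn.metrics import classification_report, confusion_matrix

y_true = thermo_sampled["is_thermophilic"]
y_hat = thermo_sampled["pred_y"]

print(confusion_matrix(y_true, y_hat))  # rows = true class, columns = predicted class
print(classification_report(y_true, y_hat, target_names=["non-thermophilic", "thermophilic"]))
```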
``` import random import pandas as pd mappings = { 'San Francisco 49ers': '49ers', 'Chicago Bears': 'Bears', 'Cincinnati Bengals': 'Bengals', 'Buffalo Bills': 'Bills', 'Denver Broncos': 'Broncos', 'Cleveland Browns': 'Browns', 'Tampa Bay Buccaneers': 'Buccaneers', 'Arizona Cardinals': 'Cardinals', 'Los Angeles Chargers': 'Chargers', 'Kansas City Chiefs': 'Chiefs', 'Indianapolis Colts': 'Colts', 'Washington Football Team': 'Commanders', 'Washington Commanders': 'Commanders', 'Dallas Cowboys': 'Cowboys', 'Miami Dolphins': 'Dolphins', 'Philadelphia Eagles': 'Eagles', 'Atlanta Falcons': 'Falcons', 'New York Giants': 'Giants', 'Jacksonville Jaguars': 'Jaguars', 'New York Jets': 'Jets', 'Detroit Lions': 'Lions', 'Green Bay Packers': 'Packers', 'Carolina Panthers': 'Panthers', 'New England Patriots': 'Patriots', 'Las Vegas Raiders': 'Raiders', 'Los Angeles Rams': 'Rams', 'Baltimore Ravens': 'Ravens', 'New Orleans Saints': 'Saints', 'Seattle Seahawks': 'Seahawks', 'Pittsburgh Steelers': 'Steelers', 'Houston Texans': 'Texans', 'Tennessee Titans': 'Titans', 'Minnesota Vikings': 'Vikings', } teams = list(set(mappings.values())) def generate_team_power_ratings(random_range = (-15, 15)): power_rating = random.randint(random_range[0], random_range[1]) power_rating += random.random() return { team: power_rating for team in teams } home_power_ratings = generate_team_power_ratings() away_power_ratings = generate_team_power_ratings() df = pd.read_csv('../../data/nfl/pfr-2021-games-combo.csv', index_col=0) df = df[df['week'].str.isdigit()] df['week'] = df['week'].astype(str) df['home'] = df['home'].astype(str).map(lambda a: mappings[a]) df['home_pts'] = df['home_pts'].astype(int) df['away'] = df['away'].astype(str).map(lambda a: mappings[a]) df['away_pts'] = df['away_pts'].astype(int) df['margin'] = df.apply(lambda a: a['home_pts'] - a['away_pts'], axis=1) def tweak_power_ratings(power_ratings, random_range = (-1, 1), k_range = None): delta_power_ratings = power_ratings.copy() teams = list(delta_power_ratings.keys()) if k_range is not None: k = random.randint(k_range[0], k_range[1]) else: k = random.randint(2, len(teams)) for team in random.sample(teams, k): delta_power_ratings[team] += random.randint(random_range[0], random_range[1]) delta_power_ratings[team] += random.random() return delta_power_ratings tweak_power_ratings(home_power_ratings) def evaluate(df, home_power_ratings, away_power_ratings, edge = 3): def prediction(row): return row['home_rating_adj'] - row['away_rating_adj'] df_pred = df.copy() home_edge = .5 * edge away_edge = -home_edge df_pred['home_rating'] = df_pred['home'].map(home_power_ratings) df_pred['home_rating_adj'] = home_edge + df_pred['home_rating'] df_pred['away_rating'] = df_pred['away'].map(away_power_ratings) df_pred['away_rating_adj'] = -away_edge + df_pred['away_rating'] df_pred['prediction'] = df_pred.apply(prediction, axis=1) df_pred['error'] = df_pred.apply(lambda a: a['margin'] - a['prediction'], axis=1) df_pred['sq_error'] = df_pred['error'].map(lambda a: a * a) df_pred['abs_error'] = df_pred['error'].map(lambda a: abs(a)) columns = [ 'week', 'home', 'home_pts', 'home_rating', 'home_rating_adj', 'away', 'away_pts', 'away_rating', 'away_rating_adj', 'margin', 'prediction', 'error', 'sq_error', 'abs_error', ] return df_pred[columns] df_pred = evaluate(df, home_power_ratings, away_power_ratings) print('sq error:', df_pred['sq_error'].sum()) print('abs error:', df_pred['abs_error'].sum()) print() df_pred.head() home_power_ratings = generate_team_power_ratings() 
away_power_ratings = generate_team_power_ratings() error_col = 'abs_error' df_eval = evaluate(df, home_power_ratings, away_power_ratings) global_error = df_eval[error_col].sum() passes = 100 total_iterations_allowed_in_pass = 250 random_range = (-3, 3) k_range = (2, 4) print(f'global error @ 0:', global_error) for i in range(1, passes): tse = global_error current_iteration = 1 while current_iteration <= total_iterations_allowed_in_pass: delta_home_power_ratings = tweak_power_ratings(home_power_ratings, random_range, k_range) df_eval = evaluate(df, delta_home_power_ratings, away_power_ratings) error = df_eval[error_col].sum() if global_error > error: global_error = error home_power_ratings = delta_home_power_ratings delta_away_power_ratings = tweak_power_ratings(away_power_ratings, random_range, k_range) df_eval = evaluate(df, home_power_ratings, delta_away_power_ratings) error = df_eval[error_col].sum() if global_error > error: global_error = error away_power_ratings = delta_away_power_ratings current_iteration += 1 if tse != global_error: print(f'global error @ {i}:', global_error) print(f'final {error_col}:', global_error) evaluate(df, home_power_ratings, away_power_ratings).head() sorted(home_power_ratings.items(), key=lambda a: a[1], reverse=True) sorted(away_power_ratings.items(), key=lambda a: a[1], reverse=True) ```
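With the ratings fitted, a single matchup can be priced directly. Below is a small sketch; note that under the sign convention used in `evaluate` (where `away_edge = -home_edge` and the away rating has `-away_edge` added to it), the home-field adjustment cancels out of the prediction, so the predicted margin reduces to the raw rating difference. The team names are just examples taken from the `mappings` table above.

```
# Sketch: predicted home margin for one hypothetical matchup using the fitted
# ratings. Under evaluate()'s sign convention the home-edge terms cancel, so
# the prediction is simply the rating difference.
def predict_margin(home, away):
    return home_power_ratings[home] - away_power_ratings[away]

print(predict_margin('Packers', 'Bears'))  # positive => home team favored by that many points
```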
<img src="img/heading_title.jpg" style="width: 100%; margin:0;" /> <br /><br /> ## Align Zoom and Notebook Side By Side ![split_screenshot.png](img/split_screenshot.png) <img src="img/heading_tutorial.jpg" style="width: 100%; margin:0;" /> <img src="img/tab_intro.png" style="width: 664px; margin:0;" /> ``` # run this cell ! echo "Hello from the Linux Shell" ! echo "\n✅ Step Complete\n" ``` <br /><br /> <br /><br /> <img src="img/heading_overview.jpg" style="width: 100%; margin:0;" /> <br /><br /> <img src="img/heading_domain.jpg" style="width: 100%; margin:0;" /> <br /><br /> <img src="img/tab_install_pip.png" alt="tab" style="width: 664px; margin:0;" /> ``` # run this cell ! sudo apt update && sudo apt install python3-pip ! echo "\n✅ Step Complete\n" ``` <br /><br /> <img src="img/tab_install_hagrid.png" alt="tab" style="width: 664px; margin:0;" /> ``` # run this cell ! pip install -U hagrid ! echo "\n✅ Step Complete\n" ``` <br /><br /> <img src="img/tab_install_syft.png" style="width: 664px; margin:0;" /> ``` # run this cell ! pip install --pre syft ! echo "\n✅ Step Complete\n" ``` <br /><br /> <img src="img/tab_launch_domain.png" alt="tab" style="width: 664px; margin:0;" /> ``` # edit DOMAIN_NAME and run this cell DOMAIN_NAME = "My Institution Name" ! hagrid launch {DOMAIN_NAME} to docker:80 --tag=latest --tail=false ! echo "\n✅ Step Complete\n" ``` <br /><br /> <img src="img/tab_check_domain.png" alt="tab" style="width: 664px; margin:0;" /> ``` # run this cell ! hagrid check --wait --silent ! echo "\n✅ Step Complete\n" ``` <br /><br /> <br /><br /> <img src="img/heading_data.jpg" style="width: 100%; margin:0;" /> <br /><br /> <img src="img/tab_import_syft.png" style="width: 664px; margin:0;" /> ``` # run this cell import syft as sy from utils import * print("Syft is imported") ``` <br /><br /> <img src="img/tab_python_client_login.png" style="width: 664px; margin:0;" /> ``` domain_client = sy.login( url=auto_detect_domain_host_ip(), email="info@openmined.org", password="changethis" ) ``` <br /><br /> <img src="img/tab_get_dataset.png" style="width: 664px; margin:0;" /> ``` # edit MY_DATASET_URL then run this cell MY_DATASET_URL = "" dataset = download_dataset(MY_DATASET_URL) # see footnotes for information about the dataset ``` <br /><br /> <img src="img/tab_preview_dataset.png" style="width: 664px; margin:0;" /> ``` dataset.head() ``` <br /><br /> <img src="img/tab_preprocess_data.png" style="width: 664px; margin:0;" /> ``` # run this cell train, val, test = split_and_preprocess_dataset(data=dataset) ``` <br /><br /> <img src="img/tab_annotate_train.png" style="width: 664px; margin:0;" /> ``` # run this cell data_subjects = DataSubjectList.from_series(train["patient_ids"]) train_image_data = sy.Tensor(train["images"]).annotated_with_dp_metadata( min_val=0, max_val=255, data_subjects=data_subjects ) train_label_data = sy.Tensor(train["labels"]).annotated_with_dp_metadata( min_val=0, max_val=1, data_subjects=data_subjects ) ``` <br /><br /> <img src="img/tab_annotate_val.png" style="width: 664px; margin:0;" /> ``` data_subjects = DataSubjectList.from_series(val["patient_ids"]) val_image_data = sy.Tensor(val["images"]).annotated_with_dp_metadata( min_val=0, max_val=255, data_subjects=data_subjects ) val_label_data = sy.Tensor(val["labels"]).annotated_with_dp_metadata( min_val=0, max_val=1, data_subjects=data_subjects ) ``` <br /><br /> <img src="img/tab_annotate_test.png" style="width: 664px; margin:0;" /> ``` data_subjects = DataSubjectList.from_series(test["patient_ids"]) 
test_image_data = sy.Tensor(test["images"]).annotated_with_dp_metadata( min_val=0, max_val=255, data_subjects=data_subjects ) test_label_data = sy.Tensor(test["labels"]).annotated_with_dp_metadata( min_val=0, max_val=1, data_subjects=data_subjects ) ``` <br /><br /> <img src="img/tab_upload_dataset.png" style="width: 664px; margin:0;" /> ``` # run this cell domain_client.load_dataset( name="BreastCancerDataset", assets={ "train_images": train_image_data, "train_labels": train_label_data, "val_images": val_image_data, "val_labels": val_label_data, "test_images": test_image_data, "test_labels": test_label_data, }, description="Invasive Ductal Carcinoma (IDC) is the most common subtype of all breast cancers. \ The modified dataset consisted of 162 whole mount slide images of Breast Cancer (BCa) specimens scanned at 40x. \ Patches of size 50 x 50 were extracted from the original image. The labels 0 is non-IDC and 1 is IDC." ) ``` <br /><br /> <img src="img/tab_check_dataset.png" style="width: 664px; margin:0;" /> ``` # run this cell domain_client.datasets ``` <br /><br /> <br /><br /> <img src="img/heading_network.jpg" style="width: 100%; margin:0;" /> <br /><br /> <img src="img/tab_browse_networks.png" style="width: 664px; margin:0;" /> ``` # run this cell sy.networks ``` <br /><br /> <img src="img/tab_join_network.png" style="width: 664px; margin:0;" /> ``` # run this cell NETWORK_NAME = "" network_client = sy.networks[NETWORK_NAME] domain_client.apply_to_network(network_client) ``` <br /><br /> <img src="img/tab_see_domains.png" style="width: 664px; margin:0;" /> ``` # run this cell network_client.domains ``` <br /><br /> <br /><br /> <img src="img/heading_account.jpg" style="width: 100%; margin:0;" /> <br /><br /> <img src="img/tab_create_user.png" style="width: 664px; margin:0;" /> ``` # run this cell data_scientist_details = domain_client.create_user( name="Sam Carter", email="sam@stargate.net", password="changethis", budget=9999 ) ``` <br /><br /> <img src="img/tab_copy_details.png" style="width: 664px; margin:0;" /> ``` # run this cell then copy the output submit_credentials(data_scientist_details) print("Please give these details to the Data Scientist 👇🏽") print(data_scientist_details) ``` <br /><br /> ### 🖐 Raise your hand in Video Call and wait <br /><br /> <img src="img/heading_recap.jpg" style="width: 100%; margin:0;" /> <br /><br /> # Thank You If you have any questions for our team please don't hesitate to reach out via email or the slack link below. # Links 🌍 Web:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; https://blog.openmined.org/ 💬 Slack:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; https://openmined.slack.com/ 🎥 Course:&nbsp;&nbsp; https://courses.openmined.org/ 📰 Blog:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; https://blog.openmined.org/ 🐙 Code:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; https://github.com/OpenMined/PySyft # Footnotes ### Breast Cancer Dataset Credit https://www.kaggle.com/datasets/paultimothymooney/breast-histopathology-images?datasetId=7415&sortBy=voteCount **Citations:** - https://www.ncbi.nlm.nih.gov/pubmed/27563488 - http://spie.org/Publications/Proceedings/Paper/10.1117/12.2043872
``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt ``` # scikit-image: a tour There are many tools and utilities in the package, far too many to cover in a tutorial. This notebook is designed as a road map, to guide you as you explore or search for additional tools for your applications. *This is intended as a guide, not an exhaustive list*. Each submodule of scikit-image has its own section, which you can navigate to below in the table of contents. ## Table of Contents * [skimage.color](#color) * [skimage.data](#data) * [skimage.draw](#draw) * [skimage.exposure](#exposure) * [skimage.feature](#feature) * [skimage.filters](#filters) * [skiamge.future](#future) * [skimage.graph](#graph) * [skimage.io](#io) * [skimage.measure](#measure) * [skimage.morphology](#morphology) * [skimage.restoration](#restoration) * [skimage.segmentation](#segmentation) * [skimage.transform](#transform) * [skimage.util](#util) ## [skimage.color](https://scikit-image.org/docs/stable/api/skimage.color.html) - color conversion<a id='color'></a> The `color` submmodule includes routines to convert to and from common color representations. For example, RGB (Red, Green, and Blue) can be converted into many other representations. ``` import skimage.color as color # Tab complete to see available functions in the color submodule color.rgb2 color. ``` ### Example: conversion to grayscale ``` from skimage import data from skimage import color original = data.astronaut() grayscale = color.rgb2gray(original) # Plot the results fig, axes = plt.subplots(1, 2, figsize=(8, 4)) ax = axes.ravel() ax[0].imshow(original) ax[0].set_title("Original") ax[0].axis('off') ax[1].imshow(grayscale, cmap='gray') ax[1].set_title("Grayscale") ax[1].axis('off') fig.tight_layout() plt.show(); ``` ### Example: conversion to HSV Usually, objects in images have distinct colors (hues) and luminosities, so that these features can be used to separate different areas of the image. In the RGB representation the hue and the luminosity are expressed as a linear combination of the R,G,B channels, whereas they correspond to single channels of the HSV image (the Hue and the Value channels). A simple segmentation of the image can then be effectively performed by a mere thresholding of the HSV channels. See below link for additional details. 
https://en.wikipedia.org/wiki/HSL_and_HSV We first load the RGB image and extract the Hue and Value channels: ``` from skimage import data from skimage.color import rgb2hsv rgb_img = data.coffee() hsv_img = rgb2hsv(rgb_img) hue_img = hsv_img[:, :, 0] value_img = hsv_img[:, :, 2] fig, (ax0, ax1, ax2) = plt.subplots(ncols=3, figsize=(8, 2)) ax0.imshow(rgb_img) ax0.set_title("RGB image") ax0.axis('off') ax1.imshow(hue_img, cmap='hsv') ax1.set_title("Hue channel") ax1.axis('off') ax2.imshow(value_img) ax2.set_title("Value channel") ax2.axis('off') fig.tight_layout(); ``` The cup and saucer have a Hue distinct from the remainder of the image, which can be isolated by thresholding ``` hue_threshold = 0.04 binary_img = hue_img > hue_threshold fig, (ax0, ax1) = plt.subplots(ncols=2, figsize=(8, 3)) ax0.hist(hue_img.ravel(), 512) ax0.set_title("Histogram of the Hue channel with threshold") ax0.axvline(x=hue_threshold, color='r', linestyle='dashed', linewidth=2) ax0.set_xbound(0, 0.12) ax1.imshow(binary_img) ax1.set_title("Hue-thresholded image") ax1.axis('off') fig.tight_layout(); ``` An additional threshold in the value channel can remote most of the shadow ``` fig, ax0 = plt.subplots(figsize=(4, 3)) value_threshold = 0.10 binary_img = (hue_img > hue_threshold) | (value_img < value_threshold) ax0.imshow(binary_img) ax0.set_title("Hue and value thresholded image") ax0.axis('off') fig.tight_layout() plt.show(); ``` #### Additional color conversion examples available in the [online gallery](https://scikit-image.org/docs/stable/auto_examples/#manipulating-exposure-and-color-channels). #### [Back to the Table of Contents](#Table-of-Contents) ## [skimage.data](https://scikit-image.org/docs/stable/api/skimage.data.html) - test images<a id='data'></a> The `data` submodule includes standard test images useful for examples and testing the package. These images are shipped with the package. There are scientific images, general test images, and a stereoscopic image. ``` from skimage import data # Explore with tab completion example_image = data.camera() fig, ax = plt.subplots(figsize=(6, 6)) ax.imshow(example_image) ax.axis('off'); # Room for experimentation ``` ----------------------- ## [skimage.draw](https://scikit-image.org/docs/stable/api/skimage.draw.html) - drawing primitives on an image<a id='draw'></a> The majority of functions in this submodule return the *coordinates* of the specified shape/object in the image, rather than drawing it on the image directly. The coordinates can then be used as a mask to draw on the image, or you pass the image as well as those coordinates into the convenience function `draw.set_color`. Lines and circles can be drawn with antialiasing (these functions end in the suffix *_aa). At the current time text is not supported; other libraries including matplotlib have robust support for overlaying text. ``` from skimage import draw # Tab complete to see available options draw. 
# Room for experimentation ``` ## Example: drawing shapes ``` fig, (ax1, ax2) = plt.subplots(ncols=2, nrows=1, figsize=(10, 6)) img = np.zeros((500, 500, 3), dtype=np.float64) # draw line rr, cc = draw.line(120, 123, 20, 400) img[rr, cc, 0] = 255 # fill polygon poly = np.array(( (300, 300), (480, 320), (380, 430), (220, 590), (300, 300), )) rr, cc = draw.polygon(poly[:, 0], poly[:, 1], img.shape) img[rr, cc, 1] = 1 # fill circle rr, cc = draw.circle(200, 200, 100, img.shape) img[rr, cc, :] = (1, 1, 0) # fill ellipse rr, cc = draw.ellipse(300, 300, 100, 200, img.shape) img[rr, cc, 2] = 1 # circle rr, cc = draw.circle_perimeter(120, 400, 15) img[rr, cc, :] = (1, 0, 0) # Bezier curve rr, cc = draw.bezier_curve(70, 100, 10, 10, 150, 100, 1) img[rr, cc, :] = (1, 0, 0) # ellipses rr, cc = draw.ellipse_perimeter(120, 400, 60, 20, orientation=np.pi / 4.) img[rr, cc, :] = (1, 0, 1) rr, cc = draw.ellipse_perimeter(120, 400, 60, 20, orientation=-np.pi / 4.) img[rr, cc, :] = (0, 0, 1) rr, cc = draw.ellipse_perimeter(120, 400, 60, 20, orientation=np.pi / 2.) img[rr, cc, :] = (1, 1, 1) ax1.imshow(img) ax1.set_title('No anti-aliasing') ax1.axis('off') img = np.zeros((100, 100), dtype=np.double) # anti-aliased line rr, cc, val = draw.line_aa(12, 12, 20, 50) img[rr, cc] = val # anti-aliased circle rr, cc, val = draw.circle_perimeter_aa(60, 40, 30) img[rr, cc] = val ax2.imshow(img, cmap=plt.cm.gray, interpolation='nearest') ax2.set_title('Anti-aliasing') ax2.axis('off'); ``` #### [Back to the Table of Contents](#Table-of-Contents) ----------------------------------------- ## [skimage.exposure](https://scikit-image.org/docs/stable/api/skimage.exposure.html) - evaluating or changing the exposure of an image<a id='exposure'></a> One of the most common tools to evaluate exposure is the *histogram*, which plots the number of points which have a certain value against the values in order from lowest (dark) to highest (light). The function `exposure.histogram` differs from `numpy.histogram` in that there is no rebinnning; each value along the x-axis is preserved. ### Example: Histogram equalization ``` from skimage import data, img_as_float from skimage import exposure def plot_img_and_hist(image, axes, bins=256): """Plot an image along with its histogram and cumulative histogram. 
""" image = img_as_float(image) ax_img, ax_hist = axes ax_cdf = ax_hist.twinx() # Display image ax_img.imshow(image, cmap=plt.cm.gray) ax_img.set_axis_off() # Display histogram ax_hist.hist(image.ravel(), bins=bins, histtype='step', color='black') ax_hist.ticklabel_format(axis='y', style='scientific', scilimits=(0, 0)) ax_hist.set_xlabel('Pixel intensity') ax_hist.set_xlim(0, 1) ax_hist.set_yticks([]) # Display cumulative distribution img_cdf, bins = exposure.cumulative_distribution(image, bins) ax_cdf.plot(bins, img_cdf, 'r') ax_cdf.set_yticks([]) return ax_img, ax_hist, ax_cdf # Load an example image img = data.moon() # Contrast stretching p2, p98 = np.percentile(img, (2, 98)) img_rescale = exposure.rescale_intensity(img, in_range=(p2, p98)) # Equalization img_eq = exposure.equalize_hist(img) # Adaptive Equalization img_adapteq = exposure.equalize_adapthist(img, clip_limit=0.03) # Display results fig = plt.figure(figsize=(8, 5)) axes = np.zeros((2, 4), dtype=np.object) axes[0, 0] = fig.add_subplot(2, 4, 1) for i in range(1, 4): axes[0, i] = fig.add_subplot(2, 4, 1+i, sharex=axes[0,0], sharey=axes[0,0]) for i in range(0, 4): axes[1, i] = fig.add_subplot(2, 4, 5+i) ax_img, ax_hist, ax_cdf = plot_img_and_hist(img, axes[:, 0]) ax_img.set_title('Low contrast image') y_min, y_max = ax_hist.get_ylim() ax_hist.set_ylabel('Number of pixels') ax_hist.set_yticks(np.linspace(0, y_max, 5)) ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_rescale, axes[:, 1]) ax_img.set_title('Contrast stretch') ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_eq, axes[:, 2]) ax_img.set_title('Histogram eq') ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_adapteq, axes[:, 3]) ax_img.set_title('Adaptive eq') ax_cdf.set_ylabel('Fraction of total intensity') ax_cdf.set_yticks(np.linspace(0, 1, 5)) # prevent overlap of y-axis labels fig.tight_layout(); # Explore with tab completion exposure. # Room for experimentation ``` #### Additional examples available in the [example gallery](https://scikit-image.org/docs/stable/auto_examples/#manipulating-exposure-and-color-channels) #### [Back to the Table of Contents](#Table-of-Contents) ---------------------- ## [skimage.feature](https://scikit-image.org/docs/stable/api/skimage.feature.html) - extract features from an image<a id='feature'></a> This submodule presents a diverse set of tools to identify or extract certain features from images, including tools for * Edge detection * `feature.canny` * Corner detection * `feature.corner_kitchen_rosenfeld` * `feature.corner_harris` * `feature.corner_shi_tomasi` * `feature.corner_foerstner` * `feature.subpix` * `feature.corner_moravec` * `feature.corner_fast` * `feature.corner_orientations` * Blob detection * `feature.blob_dog` * `feature.blob_doh` * `feature.blob_log` * Texture * `feature.greycomatrix` * `feature.greycoprops` * `feature.local_binary_pattern` * `feature.multiblock_lbp` * Peak finding * `feature.peak_local_max` * Object detction * `feature.hog` * `feature.match_template` * Stereoscopic depth estimation * `feature.daisy` * Feature matching * `feature.ORB` * `feature.BRIEF` * `feature.CENSURE` * `feature.match_descriptors` * `feature.plot_matches` ``` from skimage import feature # Explore with tab completion feature. # Room for experimentation ``` This is a large submodule. For brevity here is a short example illustrating ORB feature matching, and additional examples can be explored in the [online gallery](https://scikit-image.org/docs/stable/auto_examples/index.html#detection-of-features-and-objects). 
``` from skimage import data from skimage import transform as tf from skimage import feature from skimage.color import rgb2gray # Import the astronaut then warp/rotate the image img1 = rgb2gray(data.astronaut()) img2 = tf.rotate(img1, 180) tform = tf.AffineTransform(scale=(1.3, 1.1), rotation=0.5, translation=(0, -200)) img3 = tf.warp(img1, tform) # Build ORB extractor and extract features descriptor_extractor = feature.ORB(n_keypoints=200) descriptor_extractor.detect_and_extract(img1) keypoints1 = descriptor_extractor.keypoints descriptors1 = descriptor_extractor.descriptors descriptor_extractor.detect_and_extract(img2) keypoints2 = descriptor_extractor.keypoints descriptors2 = descriptor_extractor.descriptors descriptor_extractor.detect_and_extract(img3) keypoints3 = descriptor_extractor.keypoints descriptors3 = descriptor_extractor.descriptors # Find matches between the extracted features matches12 = feature.match_descriptors(descriptors1, descriptors2, cross_check=True) matches13 = feature.match_descriptors(descriptors1, descriptors3, cross_check=True) # Plot the results fig, ax = plt.subplots(nrows=2, ncols=1, figsize=(10, 10)) plt.gray() feature.plot_matches(ax[0], img1, img2, keypoints1, keypoints2, matches12) ax[0].axis('off') ax[0].set_title("Original Image vs. Transformed Image") feature.plot_matches(ax[1], img1, img3, keypoints1, keypoints3, matches13) ax[1].axis('off') ax[1].set_title("Original Image vs. Transformed Image"); ``` #### Additional feature detection and extraction examples available in the [online gallery](https://scikit-image.org/docs/stable/auto_examples/index.html#detection-of-features-and-objects). ``` # Room for experimentation ``` #### [Back to the Table of Contents](#Table-of-Contents) --------------------------- ## [skimage.filters](https://scikit-image.org/docs/stable/api/skimage.filters.html) - apply filters to an image<a id='filters'></a> Filtering applies whole-image modifications such as sharpening or blurring. Thresholding methods also live in this submodule. Notable functions include (links to relevant gallery examples) * [Thresholding](https://scikit-image.org/docs/stable/auto_examples/applications/plot_thresholding.html) * filters.threshold_* (multiple different functions with this prefix) * skimage.filters.try_all_threshold to compare various methods * [Edge finding/enhancement](https://scikit-image.org/docs/stable/auto_examples/edges/plot_edge_filter.html) * filters.sobel * filters.prewitt * filters.scharr * filters.roberts * filters.laplace * filters.hessian * [Ridge filters](https://scikit-image.org/docs/stable/auto_examples/edges/plot_ridge_filter.html) * filters.meijering * filters.sato * filters.frangi * Inverse filtering (see also [skimage.restoration](#restoration)) * filters.weiner * filters.inverse * [Directional](https://scikit-image.org/docs/stable/auto_examples/features_detection/plot_gabor.html) * filters.gabor * Blurring/denoising * filters.gaussian * filters.median * [Sharpening](https://scikit-image.org/docs/stable/auto_examples/filters/plot_unsharp_mask.html) * filters.unsharp_mask * Define your own * LPIFilter2D ``` from skimage import filters # Explore with tab completion filters. ``` ### Rank filters There is a sub-submodule, `skimage.filters.rank`, which contains rank filters. These filters are nonlinear and operate on the local histogram. To learn more about the rank filters, see the comprehensive [gallery example for rank filters](https://scikit-image.org/docs/stable/auto_examples/applications/plot_rank_filters.html). 
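For a feel of the calling convention, here is a minimal sketch (not taken from that gallery example) that applies a local median rank filter over a disk-shaped neighborhood:

```
# Sketch: local median rank filter over a radius-5 disk neighborhood.
# Rank filters work on the local histogram, so they expect integer images.
from skimage import data
from skimage.filters import rank
from skimage.morphology import disk

img = data.camera()                   # uint8 test image
smoothed = rank.median(img, disk(5))  # median of each pixel's neighborhood
```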
#### Additional feature detection and extraction examples available in the [online gallery](https://scikit-image.org/docs/stable/auto_examples/index.html#detection-of-features-and-objects). #### [Back to the Table of Contents](#Table-of-Contents) --------------------------- ## [skimage.future](https://scikit-image.org/docs/stable/api/skimage.future.html) - stable code with unstable API<a id='future'></a> Bleeding edge features which work well, and will be moved from here into the main package in future releases. However, on the way their API may change. #### [Back to the Table of Contents](#Table-of-Contents) ------------------------------ ## [skimage.graph](https://scikit-image.org/docs/stable/api/skimage.graph.html) - graph theory, minimum cost paths<a id='graph'></a> Graph theory. Currently this submodule primarily deals with a constructed "cost" image, and how to find the minimum cost path through it, with constraints if desired. [The panorama tutorial lecture illustrates a real-world example.](./solutions/adv3_panorama-stitching-solution.ipynb) #### [Back to the Table of Contents](#Table-of-Contents) ------------------------ ## [skimage.io](https://scikit-image.org/docs/stable/api/skimage.io.html) - utilities to read and write images in various formats<a id='io'></a> Reading your image and writing the results back out. There are multiple plugins available, which support multiple formats. The most commonly used functions include * io.imread - Read an image to a numpy array. * io.imsave - Write an image to disk. * io.imread_collection - Read multiple images which match a common prefix #### [Back to the Table of Contents](#Table-of-Contents) ------------------------------ ## <a id='measure'></a>[skimage.measure](https://scikit-image.org/docs/stable/api/skimage.measure.html) - measuring image or region properties Multiple algorithms to label images, or obtain information about discrete regions of an image. * Label an image * measure.label * In a labeled image (image with discrete regions identified by unique integers, as returned by `label`), find various properties of the labeled regions. [**`regionprops` is extremely useful**](https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_regionprops.html) * measure.regionprops * Finding paths from a 2D image, or isosurfaces from a 3D image * measure.find_contours * measure.marching_cubes_lewiner * measure.marching_cubes_classic * measure.mesh_surface_area (surface area of 3D mesh from marching cubes) * Quantify the difference between two whole images (often used in denoising or restoration) * measure.compare_* **RANDom Sample Consensus fitting (RANSAC)** - a powerful, robust approach to fitting a model to data. It exists here because its initial use was for fitting shapes, but it can also fit transforms. * measure.ransac * measure.CircleModel * measure.EllipseModel * measure.LineModelND ``` from skimage import measure # Explore with tab completion measure. # Room to explore ``` #### [Back to the Table of Contents](#Table-of-Contents) --------------------- ## <a id='morphology'></a>[skimage.morphology](https://scikit-image.org/docs/stable/api/skimage.morphology.html) - binary and grayscale morphology Morphological image processing is a collection of non-linear operations related to the shape or morphology of features in an image, such as boundaries, skeletons, etc. 
In any given technique, we probe an image with a small shape or template called a structuring element, which defines the region of interest or neighborhood around a pixel. ``` from skimage import morphology as morph # Explore with tab completion morph. ``` ### Example: Flood filling Flood fill is an algorithm to iteratively identify and/or change adjacent values in an image based on their similarity to an initial seed point. The conceptual analogy is the ‘paint bucket’ tool in many graphic editors. The `flood` function returns the binary mask of the flooded area. `flood_fill` returns a modified image. Both of these can be set with a `tolerance` keyword argument, within which the adjacent region will be filled. Here we will experiment a bit on the cameraman, turning his coat from dark to light. ``` from skimage import data from skimage import morphology as morph cameraman = data.camera() # Change the cameraman's coat from dark to light (255). The seed point is # chosen as (200, 100), light_coat = morph.flood_fill(cameraman, (200, 100), 255, tolerance=10) fig, ax = plt.subplots(ncols=2, figsize=(10, 5)) ax[0].imshow(cameraman, cmap=plt.cm.gray) ax[0].set_title('Original') ax[0].axis('off') ax[1].imshow(light_coat, cmap=plt.cm.gray) ax[1].plot(100, 200, 'ro') # seed point ax[1].set_title('After flood fill') ax[1].axis('off'); ``` ### Example: Binary and grayscale morphology Here we outline the following basic morphological operations: 1. Erosion 2. Dilation 3. Opening 4. Closing 5. White Tophat 6. Black Tophat 7. Skeletonize 8. Convex Hull To get started, let’s load an image using `io.imread`. Note that morphology functions only work on gray-scale or binary images, so we set `as_gray=True`. ``` import os from skimage.data import data_dir from skimage.util import img_as_ubyte from skimage import io orig_phantom = img_as_ubyte(io.imread(os.path.join(data_dir, "phantom.png"), as_gray=True)) fig, ax = plt.subplots(figsize=(5, 5)) ax.imshow(orig_phantom, cmap=plt.cm.gray) ax.axis('off'); def plot_comparison(original, filtered, filter_name): fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(8, 4), sharex=True, sharey=True) ax1.imshow(original, cmap=plt.cm.gray) ax1.set_title('original') ax1.axis('off') ax2.imshow(filtered, cmap=plt.cm.gray) ax2.set_title(filter_name) ax2.axis('off') ``` ### Erosion Morphological `erosion` sets a pixel at (i, j) to the minimum over all pixels in the neighborhood centered at (i, j). *Erosion shrinks bright regions and enlarges dark regions.* The structuring element, `selem`, passed to erosion is a boolean array that describes this neighborhood. Below, we use `disk` to create a circular structuring element, which we use for most of the following examples. ``` from skimage import morphology as morph selem = morph.disk(6) eroded = morph.erosion(orig_phantom, selem) plot_comparison(orig_phantom, eroded, 'erosion') ``` ### Dilation Morphological `dilation` sets a pixel at (i, j) to the maximum over all pixels in the neighborhood centered at (i, j). *Dilation enlarges bright regions and shrinks dark regions.* ``` dilated = morph.dilation(orig_phantom, selem) plot_comparison(orig_phantom, dilated, 'dilation') ``` Notice how the white boundary of the image thickens, or gets dilated, as we increase the size of the disk. Also notice the decrease in size of the two black ellipses in the centre, and the thickening of the light grey circle in the center and the 3 patches in the lower part of the image. 
### Opening Morphological `opening` on an image is defined as an erosion followed by a dilation. *Opening can remove small bright spots (i.e. “salt”) and connect small dark cracks.* ``` opened = morph.opening(orig_phantom, selem) plot_comparison(orig_phantom, opened, 'opening') ``` Since opening an image starts with an erosion operation, light regions that are smaller than the structuring element are removed. The dilation operation that follows ensures that light regions that are larger than the structuring element retain their original size. Notice how the light and dark shapes in the center their original thickness but the 3 lighter patches in the bottom get completely eroded. The size dependence is highlighted by the outer white ring: The parts of the ring thinner than the structuring element were completely erased, while the thicker region at the top retains its original thickness. ### Closing Morphological `closing` on an image is defined as a dilation followed by an erosion. *Closing can remove small dark spots (i.e. “pepper”) and connect small bright cracks.* To illustrate this more clearly, let’s add a small crack to the white border: ``` phantom = orig_phantom.copy() phantom[10:30, 200:210] = 0 closed = morph.closing(phantom, selem) plot_comparison(phantom, closed, 'closing') ``` Since closing an image starts with an dilation operation, dark regions that are smaller than the structuring element are removed. The dilation operation that follows ensures that dark regions that are larger than the structuring element retain their original size. Notice how the white ellipses at the bottom get connected because of dilation, but other dark region retain their original sizes. Also notice how the crack we added is mostly removed. ### White tophat The `white_tophat` of an image is defined as the image minus its morphological opening. *This operation returns the bright spots of the image that are smaller than the structuring element.* To make things interesting, we’ll add bright and dark spots to the image: ``` phantom = orig_phantom.copy() phantom[340:350, 200:210] = 255 phantom[100:110, 200:210] = 0 w_tophat = morph.white_tophat(phantom, selem) plot_comparison(phantom, w_tophat, 'white tophat') ``` As you can see, the 10-pixel wide white square is highlighted since it is smaller than the structuring element. Also, the thin, white edges around most of the ellipse are retained because they’re smaller than the structuring element, but the thicker region at the top disappears. ### Black tophat The `black_tophat` of an image is defined as its morphological closing minus the original image. *This operation returns the dark spots of the image that are smaller than the structuring element.* ``` b_tophat = morph.black_tophat(phantom, selem) plot_comparison(phantom, b_tophat, 'black tophat') ``` As you can see, the 10-pixel wide black square is highlighted since it is smaller than the structuring element. #### Duality As you should have noticed, many of these operations are simply the reverse of another operation. This duality can be summarized as follows: * Erosion <-> Dilation * Opening <-> Closing * White tophat <-> Black tophat ### Skeletonize Thinning is used to reduce each connected component in a binary image to a single-pixel wide skeleton. It is important to note that this is performed on binary images only. 
``` horse = io.imread(os.path.join(data_dir, "horse.png"), as_gray=True) sk = morph.skeletonize(horse == 0) plot_comparison(horse, sk, 'skeletonize') ``` As the name suggests, this technique is used to thin the image to 1-pixel wide skeleton by applying thinning successively. ### Convex hull The convex_hull_image is the set of pixels included in the smallest convex polygon that surround all white pixels in the input image. Again note that this is also performed on binary images. ``` hull1 = morph.convex_hull_image(horse == 0) plot_comparison(horse, hull1, 'convex hull') ``` #### [Back to the Table of Contents](#Table-of-Contents) ----------------------------------- ## [skimage.restoration](https://scikit-image.org/docs/stable/api/skimage.restoration.html) - restoration of an image<a id='restoration'></a> This submodule includes routines to restore images. Currently these routines fall into four major categories. Links lead to topical gallery examples. * [Reducing noise](https://scikit-image.org/docs/stable/auto_examples/filters/plot_denoise.html) * restoration.denoise_* * [Deconvolution](https://scikit-image.org/docs/stable/auto_examples/filters/plot_deconvolution.html), or reversing a convolutional effect which applies to the entire image. For example, lens correction. This can be done [unsupervised](https://scikit-image.org/docs/stable/auto_examples/filters/plot_restoration.html). * restoration.weiner * restoration.unsupervised_weiner * restoration.richardson_lucy * [Inpainting](https://scikit-image.org/docs/stable/auto_examples/filters/plot_inpaint.html), or filling in missing areas of an image * restoration.inpaint_biharmonic * [Phase unwrapping](https://scikit-image.org/docs/stable/auto_examples/filters/plot_phase_unwrap.html) * restoration.unwrap_phase ``` from skimage import restoration # Explore with tab completion restoration. # Space to experiment with restoration techniques ``` #### [Back to the Table of Contents](#Table-of-Contents) --------------------------------- ## <a id='segmentation'></a>[skimage.segmentation](https://scikit-image.org/docs/stable/api/skimage.segmentation.html) - identification of regions of interest One of the key image analysis tasks is identifying regions of interest. These could be a person, an object, certain features of an animal, microscopic image, or stars. Segmenting an image is the process of determining where these things you want are in your images. Segmentation has two overarching categories: Supervised and Unsupervised. **Supervised** - must provide some guidance (seed points or initial conditions) * segmentation.random_walker * segmentation.active_contour * segmentation.watershed * segmentation.flood_fill * segmentation.flood * some thresholding algorithms in `filters` **Unsupervised** - no human input * segmentation.slic * segmentation.felzenszwalb * segmentation.chan_vese * some thresholding algorithms in `filters` There is a [segmentation lecture](./4_segmentation.ipynb) ([and solution](./solutions/4_segmentation.ipynb)) you may peruse, as well as many [gallery examples](https://scikit-image.org/docs/stable/auto_examples/index.html#segmentation-of-objects) which illustrate all of these segmentation methods. ``` from skimage import segmentation # Explore with tab completion segmentation. 
#### [Back to the Table of Contents](#Table-of-Contents)

---------------------------

## [skimage.transform](https://scikit-image.org/docs/stable/api/skimage.transform.html) - transforms & warping<a id='transform'></a>

This submodule has multiple features which fall under the umbrella of transformations.

Forward (`radon`) and inverse (`iradon`) Radon transforms, as well as some variants (`iradon_sart`) and the finite versions of these transforms (`frt2` and `ifrt2`). These are used for [reconstructing medical computed tomography (CT) images](https://scikit-image.org/docs/stable/auto_examples/transform/plot_radon_transform.html).

Hough transforms for identifying lines, circles, and ellipses.

Changing image size, shape, or resolution with `resize`, `rescale`, or `downscale_local_mean`.

`warp` and `warp_coords`, which take an image or a set of coordinates and translate them through one of the defined `*Transforms` in this submodule. `estimate_transform` may assist in estimating the parameters.

[Numerous gallery examples are available](https://scikit-image.org/docs/stable/auto_examples/index.html#geometrical-transformations-and-registration) illustrating these functions. [The panorama tutorial also includes warping](./solutions/adv3_panorama-stitching-solution.ipynb) via `SimilarityTransform` with parameter estimation via `measure.ransac`.

```
from skimage import transform

# Explore with tab completion
transform.

# Room for experimentation
```

#### [Back to the Table of Contents](#Table-of-Contents)

--------------------------

## [skimage.util](https://scikit-image.org/docs/stable/api/skimage.util.html) - utility functions<a id='util'></a>

These are generally useful functions which have no definite other place in the package.

`util.img_as_*` are convenience functions for datatype conversion.

`util.invert` is a convenient way to invert any image, accounting for its datatype.

`util.random_noise` is a comprehensive function to apply any amount of many different types of noise to images. The seed may be set, resulting in pseudo-random noise for testing.

`util.view_as_*` allows for overlapping views into the same memory array, which is useful for elegant local computations with minimal memory impact.

`util.apply_parallel` uses Dask to apply a function across subsections of an image. This can result in dramatic performance or memory improvements, but, depending on the algorithm, edge effects or the lack of knowledge of the remainder of the image may lead to unexpected results.

`util.pad` and `util.crop` pad or crop the edges of images. `util.pad` is now a direct wrapper for `numpy.pad`.

A short example combining `util.random_noise` with `transform.rescale` is sketched at the end of this tour.

```
from skimage import util

# Explore with tab completion
util.

# Room to experiment
```

#### [Back to the Table of Contents](#Table-of-Contents)

----------------------------
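To round off the tour, here is a small sketch that ties two of the submodules above together: it adds salt-and-pepper noise to the camera image with `util.random_noise` and then downscales the result with `transform.rescale`. The noise amount and scale factor are arbitrary choices for demonstration.

```
from skimage import data, transform, util
import matplotlib.pyplot as plt

camera = data.camera()

# Add salt & pepper noise; random_noise returns a float image in [0, 1]
noisy = util.random_noise(camera, mode='s&p', amount=0.05)

# Downscale the noisy image to half its size in each dimension
smaller = transform.rescale(noisy, 0.5, anti_aliasing=True)

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, img, title in zip(axes, (camera, noisy, smaller),
                          ('original', 'noisy', 'rescaled 0.5x')):
    ax.imshow(img, cmap='gray')
    ax.set_title(title)
    ax.axis('off')
fig.tight_layout();
```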
# SRGAN

This notebook will contain demonstrations on how to train and use the SRGAN network.

```
import imageio
import numpy as np
from skimage.transform import resize
from sklearn.datasets import fetch_olivetti_faces
import matplotlib.pyplot as plt

from libs.srgan import SRGAN
from libs.util import plot_test_images
```

# 1. Loading Data

First we'll load some data to use.

```
dataset = fetch_olivetti_faces("./data/olivetti_faces")
for i, img in enumerate(dataset.images):
    imageio.imwrite(f"./data/olivetti_faces/{i}.png", (img*255).astype(np.uint8))
```

# 2. Training

To train the SRGAN, we first instantiate the model:

```
gan = SRGAN()
```

And then perform training, which has options for regularly outputting the result on a few test images, and which regularly saves the model weights (in the data/weights/ directory). You should change:

* datapath: to the directory containing all your training images
* test_images: to a list of image paths for testing during training

During training, check out the ./images/samples/ directory for test sample results. Use the rest of the parameters to play with batch_size, how often to save the weights and perform testing, and how often to print progress.

Here we only train the model on the limited Olivetti faces dataset, and only for a small number of epochs.

```
gan.train(
    epochs=1000,
    dataname='olivetti',
    datapath='./data/olivetti_faces/',
    batch_size=1,
    test_images=[
        './data/olivetti_faces/0.png'
    ],
    test_frequency=100,
    test_path='./images/samples/',
    weight_path='./data/weights/',
    weight_frequency=100,
    print_frequency=100
)
```

# 3. Testing

We have trained the network on ImageNet for 100,000 iterations with a batch size of 1. Below we show how to load these weights, and use them to create an SR version of a given image.

```
gan.load_weights('./data/weights/imagenet_generator.h5', './data/weights/imagenet_discriminator.h5')
```

And then we can use the following utility function to take a test image, super-resolve it, and then show the results.

```
# Load image & scale it to [-1, 1]
img_hr = imageio.imread("./data/sample.jpg").astype(np.float64) / 127.5 - 1

# Create a low-resolution version of it
lr_shape = (int(img_hr.shape[0]/4), int(img_hr.shape[1]/4))
img_lr = resize(img_hr, lr_shape, mode='constant')

# Predict high-resolution version (add batch dimension to image)
img_sr = gan.generator.predict(np.expand_dims(img_lr, 0))

# Remove batch dimension
img_sr = np.squeeze(img_sr, axis=0)

# Images and titles
images = {
    'Low Resolution': img_lr,
    'SRGAN': img_sr,
    'Original': img_hr
}

# Plot the images. Note: rescaling and using squeeze since we are getting batches of size 1
fig, axes = plt.subplots(1, 3, figsize=(15, 5))
for i, (title, img) in enumerate(images.items()):
    axes[i].imshow(0.5 * img + 0.5)
    axes[i].set_title(title)
    axes[i].axis('off')
plt.show()
```
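To put a number on the visual improvement, one option is to compare the super-resolved output against the original with standard image-quality metrics. The cell below is only a sketch: it assumes the cells above have been run (so `img_hr`, `img_lr`, and `img_sr` exist and share the same spatial size), uses `skimage.metrics` (scikit-image ≥ 0.19 for the `channel_axis` argument), and introduces a hypothetical helper `to_unit_range`. A plain resize-based upscale of the low-resolution input serves as the baseline.

```
import numpy as np
from skimage.transform import resize
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# The notebook works in [-1, 1]; map back to [0, 1] before computing metrics
def to_unit_range(img):
    return np.clip(0.5 * img + 0.5, 0.0, 1.0)

hr = to_unit_range(img_hr)
sr = to_unit_range(img_sr)                                            # assumes same size as img_hr
baseline = resize(to_unit_range(img_lr), hr.shape, mode='constant')   # naive upscale as baseline

for name, candidate in [('Resized LR (baseline)', baseline), ('SRGAN', sr)]:
    psnr = peak_signal_noise_ratio(hr, candidate, data_range=1.0)
    ssim = structural_similarity(hr, candidate, data_range=1.0, channel_axis=-1)
    print(f'{name}: PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}')
```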
<a href="https://colab.research.google.com/github/mamonraab/working-with-GANs/blob/main/1-StyleGAN%20-%20faces-Pre_trained_model_exploration.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Pre-trained Model Exploration

### Goal
In this notebook, you will begin playing with generative models. Do not worry if you are not sure about anything you see here - you will learn all about these models and parameters in this course! Simply run a couple of generative models and check out their cool outputs.

### Learning Objectives
1. See some generative models in action!

## StyleGAN - faces

#### Run the generative model called StyleGAN to generate fake faces.
You are going to use the original paper's implementation of StyleGAN. Be sure to scroll all the way down and look at a face it generates.

```
!git clone https://github.com/NVlabs/stylegan.git
%tensorflow_version 1.x

# Import needed Python libraries
import os
import pickle
import warnings
import numpy as np
import PIL
from tensorflow.python.util import module_wrapper
module_wrapper._PER_MODULE_WARNING_LIMIT = 0

# Import the official StyleGAN repo
import stylegan
from stylegan.dnnlib import tflib
from stylegan import config

# Initialize TensorFlow
tflib.init_tf()

# Move into the StyleGAN directory, if you're not in it already
path = 'stylegan/'
if "stylegan" not in os.getcwd():
    os.chdir(path)

# Load pre-trained StyleGAN network
url = 'https://bitbucket.org/ezelikman/gans/downloads/karras2019stylegan-ffhq-1024x1024.pkl' # karras2019stylegan-ffhq-1024x1024.pkl

with stylegan.dnnlib.util.open_url(url, cache_dir=stylegan.config.cache_dir) as f:
    # You'll load 3 components, and use the last one Gs for sampling images.
    # _G = Instantaneous snapshot of the generator. Mainly useful for resuming a previous training run.
    # _D = Instantaneous snapshot of the discriminator. Mainly useful for resuming a previous training run.
    # Gs = Long-term average of the generator. Yields higher-quality results than the instantaneous snapshot.
    _G, _D, Gs = pickle.load(f)

print('StyleGAN package loaded successfully!')

#@title Generate faces with StyleGAN
#@markdown Double click here to see the code. After setting truncation, run the cells below to generate images. This adjusts the truncation, you will learn more about this soon! Truncation trades off fidelity (quality) and diversity of the generated images - play with it!

Truncation = 0.7 #@param {type:"slider", min:0.1, max:1, step:0.1}
print(f'Truncation set to {Truncation}. \nNow run the cells below to generate images with this truncation value.')

# Set the random state. Nothing special about 42,
# except that it's the meaning of life.
rnd = np.random.RandomState(42)
print(f'Random state is set.')
```

You'll default to 4 images for the run, which is called a batch. Feel free to generate more by changing this parameter, but note that very large batch sizes will cause the model to run out of memory.

```
batch_size = 4 #@param {type:"slider", min:1, max:10, step:1}
print(f'Batch size is {batch_size}...')
```

Noise vectors make sure the generated images are randomly (stochastically) different, not all the same. Notice that there is a noise vector for each image in the batch. You can run this next cell as many times as you want to get new noise vectors, and as a result, new images!
``` input_shape = Gs.input_shape[1] noise_vectors = rnd.randn(batch_size, input_shape) print(f'There are {noise_vectors.shape[0]} noise vectors, each with {noise_vectors.shape[1]} random values between -{Truncation} and {Truncation}.') ``` Run the model to generate the images. Notice that truncation and noise vectors are passed in. Don't worry too much about the other stuff - it's about output formats and adding additional randomness/diversity to the output. ``` fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True) images = Gs.run(noise_vectors, None, truncation_psi=Truncation, randomize_noise=False, output_transform=fmt) print(f'Successfully sampled {batch_size} images from the model.') ``` Now you save and visualize the images. Feel free to regenerate the noise vectors above and run the cells afterwards to see new images. ``` # Save the images os.makedirs(config.result_dir, exist_ok=True) png_filename = os.path.join(config.result_dir, 'stylegan-example.png') if batch_size > 1: img = np.concatenate(images, axis=1) else: img = images[0] PIL.Image.fromarray(img, 'RGB').save(png_filename) # Check the images out! from IPython.display import Image Image(png_filename, width=256*batch_size, height=256) ``` ## BigGAN - objects #### Here are some objects generated by a different GAN (called BigGAN) of a dog, mountain, butterfly, and hamburger. These different objects are called classes. #### You can play with the different classes that BigGAN can generate below. ![alt text](https://3qeqpr26caki16dnhd19sv6by6v-wpengine.netdna-ssl.com/wp-content/uploads/2019/06/Examples-of-High-Quality-Class-Conditional-Images-Generated-by-BigGAN.png) ``` # Import Python packages import numpy as np import os from io import StringIO from tqdm import tqdm from random import random from PIL import ImageFont, ImageDraw, ImageEnhance from scipy.stats import truncnorm from google.colab import files import IPython.display import tensorflow as tf import tensorflow_hub as hub print(f'Successfully imported packages.') # Load BigGAN from the official repo (Coursera: remove and load pkl file) # tf.reset_default_graph() module_path = 'https://tfhub.dev/deepmind/biggan-deep-256/1' print('Loading BigGAN module from:', module_path) module = hub.Module(module_path) inputs = {k: tf.placeholder(v.dtype, v.get_shape().as_list(), k) for k, v in module.get_input_info_dict().items()} output = module(inputs) print('Loaded the BigGAN module. Here are its input and outputs sizes:') print('Inputs:\n', '\n'.join( ' {}: {}'.format(*kv) for kv in inputs.items())) print('\nOutput:', output) # Get the different components of the input noise_vector = input_z = inputs['z'] label = input_y = inputs['y'] input_trunc = inputs['truncation'] # Get the sizes of the noise vector and the label noise_vector_size = input_z.shape.as_list()[1] label_size = input_y.shape.as_list()[1] print(f'Components of input are set.') print(f'Noise vector is size {noise_vector_size}. Label is size {label_size}.') # Function to truncate the noise vector def truncated_noise_vector(batch_size, truncation=1., seed=42): state = None if seed is None else np.random.RandomState(seed) values = truncnorm.rvs(-2, 2, size=(batch_size, noise_vector_size), random_state=state) return truncation * values print(f'Function declared.') def one_hot(label, label_size=label_size): ''' Function to turn label into a one-hot vector. This means that all values in the vector are 0, except one value that is 1, which represents the class label, e.g. [0 0 0 0 1 0 0]. 
''' label = np.asarray(label) if len(label.shape) <= 1: index = label index = np.asarray(index) if len(index.shape) == 0: index = np.asarray([index]) assert len(index.shape) == 1 num = index.shape[0] label = np.zeros((num, label_size), dtype=np.float32) label[np.arange(num), index] = 1 assert len(label.shape) == 2 return label print(f'Function declared.') def sample(sess, noise, label, truncation=1., batch_size=8, label_size=label_size): ''' Function to sample images from the model. Inputs include the noise vector, label, truncation, and batch size (number of images to generate). ''' noise = np.asarray(noise) label = np.asarray(label) num = noise.shape[0] if len(label.shape) == 0: label = np.asarray([label] * num) if label.shape[0] != num: raise ValueError('Got # noise samples ({}) != # label samples ({})' .format(noise.shape[0], label.shape[0])) label = one_hot(label, label_size) ims = [] print(f"Generating images...") for batch_start in tqdm(range(0, num, batch_size)): s = slice(batch_start, min(num, batch_start + batch_size)) feed_dict = {input_z: noise[s], input_y: label[s], input_trunc: truncation} ims.append(sess.run(output, feed_dict=feed_dict)) ims = np.concatenate(ims, axis=0) assert ims.shape[0] == num ims = np.clip(((ims + 1) / 2.0) * 256, 0, 255) ims = np.uint8(ims) return ims print(f'Function declared.') ''' Functions for saving and visualizing images in a grid. ''' def imgrid(imarray, cols=5, pad=1): if imarray.dtype != np.uint8: raise ValueError('imgrid input imarray must be uint8') pad = int(pad) assert pad >= 0 cols = int(cols) assert cols >= 1 N, H, W, C = imarray.shape rows = int(np.ceil(N / float(cols))) batch_pad = rows * cols - N assert batch_pad >= 0 post_pad = [batch_pad, pad, pad, 0] pad_arg = [[0, p] for p in post_pad] imarray = np.pad(imarray, pad_arg, 'constant', constant_values=255) H += pad W += pad grid = (imarray .reshape(rows, cols, H, W, C) .transpose(0, 2, 1, 3, 4) .reshape(rows*H, cols*W, C)) if pad: grid = grid[:-pad, :-pad] return grid def imshow(a, format='png', jpeg_fallback=True): a = np.asarray(a, dtype=np.uint8) path = 'results/biggan-example.png' img = PIL.Image.fromarray(a) img.save(path, format) try: disp = IPython.display.display(IPython.display.Image(path)) except IOError: if jpeg_fallback and format != 'jpeg': print ('Warning: image was too large to display in format "{}"; ' 'trying jpeg instead.').format(format) return imshow(a, format='jpeg') else: raise return disp print(f'Functions declared.') # Initialize TensorFlow initializer = tf.global_variables_initializer() sess = tf.Session() sess.run(initializer) print('TensorFlow initialized.') #@title Select the class and truncation { display-mode: "form", run: "auto" } #@markdown ##### The id next to each class is taken from ImageNet, a 1000-class dataset of that BigGAN was trained on. #@markdown ##### Double click to see all values in a code format. 
Class = "247) Saint Bernard, St Bernard" #@param ["0) tench, Tinca tinca", "1) goldfish, Carassius auratus", "2) great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias", "3) tiger shark, Galeocerdo cuvieri", "4) hammerhead, hammerhead shark", "5) electric ray, crampfish, numbfish, torpedo", "6) stingray", "7) cock", "8) hen", "9) ostrich, Struthio camelus", "10) brambling, Fringilla montifringilla", "11) goldfinch, Carduelis carduelis", "12) house finch, linnet, Carpodacus mexicanus", "13) junco, snowbird", "14) indigo bunting, indigo finch, indigo bird, Passerina cyanea", "15) robin, American robin, Turdus migratorius", "16) bulbul", "17) jay", "18) magpie", "19) chickadee", "20) water ouzel, dipper", "21) kite", "22) bald eagle, American eagle, Haliaeetus leucocephalus", "23) vulture", "24) great grey owl, great gray owl, Strix nebulosa", "25) European fire salamander, Salamandra salamandra", "26) common newt, Triturus vulgaris", "27) eft", "28) spotted salamander, Ambystoma maculatum", "29) axolotl, mud puppy, Ambystoma mexicanum", "30) bullfrog, Rana catesbeiana", "31) tree frog, tree-frog", "32) tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui", "33) loggerhead, loggerhead turtle, Caretta caretta", "34) leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea", "35) mud turtle", "36) terrapin", "37) box turtle, box tortoise", "38) banded gecko", "39) common iguana, iguana, Iguana iguana", "40) American chameleon, anole, Anolis carolinensis", "41) whiptail, whiptail lizard", "42) agama", "43) frilled lizard, Chlamydosaurus kingi", "44) alligator lizard", "45) Gila monster, Heloderma suspectum", "46) green lizard, Lacerta viridis", "47) African chameleon, Chamaeleo chamaeleon", "48) Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis", "49) African crocodile, Nile crocodile, Crocodylus niloticus", "50) American alligator, Alligator mississipiensis", "51) triceratops", "52) thunder snake, worm snake, Carphophis amoenus", "53) ringneck snake, ring-necked snake, ring snake", "54) hognose snake, puff adder, sand viper", "55) green snake, grass snake", "56) king snake, kingsnake", "57) garter snake, grass snake", "58) water snake", "59) vine snake", "60) night snake, Hypsiglena torquata", "61) boa constrictor, Constrictor constrictor", "62) rock python, rock snake, Python sebae", "63) Indian cobra, Naja naja", "64) green mamba", "65) sea snake", "66) horned viper, cerastes, sand viper, horned asp, Cerastes cornutus", "67) diamondback, diamondback rattlesnake, Crotalus adamanteus", "68) sidewinder, horned rattlesnake, Crotalus cerastes", "69) trilobite", "70) harvestman, daddy longlegs, Phalangium opilio", "71) scorpion", "72) black and gold garden spider, Argiope aurantia", "73) barn spider, Araneus cavaticus", "74) garden spider, Aranea diademata", "75) black widow, Latrodectus mactans", "76) tarantula", "77) wolf spider, hunting spider", "78) tick", "79) centipede", "80) black grouse", "81) ptarmigan", "82) ruffed grouse, partridge, Bonasa umbellus", "83) prairie chicken, prairie grouse, prairie fowl", "84) peacock", "85) quail", "86) partridge", "87) African grey, African gray, Psittacus erithacus", "88) macaw", "89) sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita", "90) lorikeet", "91) coucal", "92) bee eater", "93) hornbill", "94) hummingbird", "95) jacamar", "96) toucan", "97) drake", "98) red-breasted merganser, Mergus serrator", "99) goose", "100) black swan, Cygnus atratus", "101) 
tusker", "102) echidna, spiny anteater, anteater", "103) platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus", "104) wallaby, brush kangaroo", "105) koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus", "106) wombat", "107) jellyfish", "108) sea anemone, anemone", "109) brain coral", "110) flatworm, platyhelminth", "111) nematode, nematode worm, roundworm", "112) conch", "113) snail", "114) slug", "115) sea slug, nudibranch", "116) chiton, coat-of-mail shell, sea cradle, polyplacophore", "117) chambered nautilus, pearly nautilus, nautilus", "118) Dungeness crab, Cancer magister", "119) rock crab, Cancer irroratus", "120) fiddler crab", "121) king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica", "122) American lobster, Northern lobster, Maine lobster, Homarus americanus", "123) spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish", "124) crayfish, crawfish, crawdad, crawdaddy", "125) hermit crab", "126) isopod", "127) white stork, Ciconia ciconia", "128) black stork, Ciconia nigra", "129) spoonbill", "130) flamingo", "131) little blue heron, Egretta caerulea", "132) American egret, great white heron, Egretta albus", "133) bittern", "134) crane", "135) limpkin, Aramus pictus", "136) European gallinule, Porphyrio porphyrio", "137) American coot, marsh hen, mud hen, water hen, Fulica americana", "138) bustard", "139) ruddy turnstone, Arenaria interpres", "140) red-backed sandpiper, dunlin, Erolia alpina", "141) redshank, Tringa totanus", "142) dowitcher", "143) oystercatcher, oyster catcher", "144) pelican", "145) king penguin, Aptenodytes patagonica", "146) albatross, mollymawk", "147) grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus", "148) killer whale, killer, orca, grampus, sea wolf, Orcinus orca", "149) dugong, Dugong dugon", "150) sea lion", "151) Chihuahua", "152) Japanese spaniel", "153) Maltese dog, Maltese terrier, Maltese", "154) Pekinese, Pekingese, Peke", "155) Shih-Tzu", "156) Blenheim spaniel", "157) papillon", "158) toy terrier", "159) Rhodesian ridgeback", "160) Afghan hound, Afghan", "161) basset, basset hound", "162) beagle", "163) bloodhound, sleuthhound", "164) bluetick", "165) black-and-tan coonhound", "166) Walker hound, Walker foxhound", "167) English foxhound", "168) redbone", "169) borzoi, Russian wolfhound", "170) Irish wolfhound", "171) Italian greyhound", "172) whippet", "173) Ibizan hound, Ibizan Podenco", "174) Norwegian elkhound, elkhound", "175) otterhound, otter hound", "176) Saluki, gazelle hound", "177) Scottish deerhound, deerhound", "178) Weimaraner", "179) Staffordshire bullterrier, Staffordshire bull terrier", "180) American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier", "181) Bedlington terrier", "182) Border terrier", "183) Kerry blue terrier", "184) Irish terrier", "185) Norfolk terrier", "186) Norwich terrier", "187) Yorkshire terrier", "188) wire-haired fox terrier", "189) Lakeland terrier", "190) Sealyham terrier, Sealyham", "191) Airedale, Airedale terrier", "192) cairn, cairn terrier", "193) Australian terrier", "194) Dandie Dinmont, Dandie Dinmont terrier", "195) Boston bull, Boston terrier", "196) miniature schnauzer", "197) giant schnauzer", "198) standard schnauzer", "199) Scotch terrier, Scottish terrier, Scottie", "200) Tibetan terrier, chrysanthemum dog", "201) silky terrier, Sydney silky", "202) soft-coated wheaten terrier", "203) West Highland white 
terrier", "204) Lhasa, Lhasa apso", "205) flat-coated retriever", "206) curly-coated retriever", "207) golden retriever", "208) Labrador retriever", "209) Chesapeake Bay retriever", "210) German short-haired pointer", "211) vizsla, Hungarian pointer", "212) English setter", "213) Irish setter, red setter", "214) Gordon setter", "215) Brittany spaniel", "216) clumber, clumber spaniel", "217) English springer, English springer spaniel", "218) Welsh springer spaniel", "219) cocker spaniel, English cocker spaniel, cocker", "220) Sussex spaniel", "221) Irish water spaniel", "222) kuvasz", "223) schipperke", "224) groenendael", "225) malinois", "226) briard", "227) kelpie", "228) komondor", "229) Old English sheepdog, bobtail", "230) Shetland sheepdog, Shetland sheep dog, Shetland", "231) collie", "232) Border collie", "233) Bouvier des Flandres, Bouviers des Flandres", "234) Rottweiler", "235) German shepherd, German shepherd dog, German police dog, alsatian", "236) Doberman, Doberman pinscher", "237) miniature pinscher", "238) Greater Swiss Mountain dog", "239) Bernese mountain dog", "240) Appenzeller", "241) EntleBucher", "242) boxer", "243) bull mastiff", "244) Tibetan mastiff", "245) French bulldog", "246) Great Dane", "247) Saint Bernard, St Bernard", "248) Eskimo dog, husky", "249) malamute, malemute, Alaskan malamute", "250) Siberian husky", "251) dalmatian, coach dog, carriage dog", "252) affenpinscher, monkey pinscher, monkey dog", "253) basenji", "254) pug, pug-dog", "255) Leonberg", "256) Newfoundland, Newfoundland dog", "257) Great Pyrenees", "258) Samoyed, Samoyede", "259) Pomeranian", "260) chow, chow chow", "261) keeshond", "262) Brabancon griffon", "263) Pembroke, Pembroke Welsh corgi", "264) Cardigan, Cardigan Welsh corgi", "265) toy poodle", "266) miniature poodle", "267) standard poodle", "268) Mexican hairless", "269) timber wolf, grey wolf, gray wolf, Canis lupus", "270) white wolf, Arctic wolf, Canis lupus tundrarum", "271) red wolf, maned wolf, Canis rufus, Canis niger", "272) coyote, prairie wolf, brush wolf, Canis latrans", "273) dingo, warrigal, warragal, Canis dingo", "274) dhole, Cuon alpinus", "275) African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus", "276) hyena, hyaena", "277) red fox, Vulpes vulpes", "278) kit fox, Vulpes macrotis", "279) Arctic fox, white fox, Alopex lagopus", "280) grey fox, gray fox, Urocyon cinereoargenteus", "281) tabby, tabby cat", "282) tiger cat", "283) Persian cat", "284) Siamese cat, Siamese", "285) Egyptian cat", "286) cougar, puma, catamount, mountain lion, painter, panther, Felis concolor", "287) lynx, catamount", "288) leopard, Panthera pardus", "289) snow leopard, ounce, Panthera uncia", "290) jaguar, panther, Panthera onca, Felis onca", "291) lion, king of beasts, Panthera leo", "292) tiger, Panthera tigris", "293) cheetah, chetah, Acinonyx jubatus", "294) brown bear, bruin, Ursus arctos", "295) American black bear, black bear, Ursus americanus, Euarctos americanus", "296) ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus", "297) sloth bear, Melursus ursinus, Ursus ursinus", "298) mongoose", "299) meerkat, mierkat", "300) tiger beetle", "301) ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle", "302) ground beetle, carabid beetle", "303) long-horned beetle, longicorn, longicorn beetle", "304) leaf beetle, chrysomelid", "305) dung beetle", "306) rhinoceros beetle", "307) weevil", "308) fly", "309) bee", "310) ant, emmet, pismire", "311) grasshopper, hopper", "312) cricket", "313) walking stick, 
walkingstick, stick insect", "314) cockroach, roach", "315) mantis, mantid", "316) cicada, cicala", "317) leafhopper", "318) lacewing, lacewing fly", "319) dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk", "320) damselfly", "321) admiral", "322) ringlet, ringlet butterfly", "323) monarch, monarch butterfly, milkweed butterfly, Danaus plexippus", "324) cabbage butterfly", "325) sulphur butterfly, sulfur butterfly", "326) lycaenid, lycaenid butterfly", "327) starfish, sea star", "328) sea urchin", "329) sea cucumber, holothurian", "330) wood rabbit, cottontail, cottontail rabbit", "331) hare", "332) Angora, Angora rabbit", "333) hamster", "334) porcupine, hedgehog", "335) fox squirrel, eastern fox squirrel, Sciurus niger", "336) marmot", "337) beaver", "338) guinea pig, Cavia cobaya", "339) sorrel", "340) zebra", "341) hog, pig, grunter, squealer, Sus scrofa", "342) wild boar, boar, Sus scrofa", "343) warthog", "344) hippopotamus, hippo, river horse, Hippopotamus amphibius", "345) ox", "346) water buffalo, water ox, Asiatic buffalo, Bubalus bubalis", "347) bison", "348) ram, tup", "349) bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis", "350) ibex, Capra ibex", "351) hartebeest", "352) impala, Aepyceros melampus", "353) gazelle", "354) Arabian camel, dromedary, Camelus dromedarius", "355) llama", "356) weasel", "357) mink", "358) polecat, fitch, foulmart, foumart, Mustela putorius", "359) black-footed ferret, ferret, Mustela nigripes", "360) otter", "361) skunk, polecat, wood pussy", "362) badger", "363) armadillo", "364) three-toed sloth, ai, Bradypus tridactylus", "365) orangutan, orang, orangutang, Pongo pygmaeus", "366) gorilla, Gorilla gorilla", "367) chimpanzee, chimp, Pan troglodytes", "368) gibbon, Hylobates lar", "369) siamang, Hylobates syndactylus, Symphalangus syndactylus", "370) guenon, guenon monkey", "371) patas, hussar monkey, Erythrocebus patas", "372) baboon", "373) macaque", "374) langur", "375) colobus, colobus monkey", "376) proboscis monkey, Nasalis larvatus", "377) marmoset", "378) capuchin, ringtail, Cebus capucinus", "379) howler monkey, howler", "380) titi, titi monkey", "381) spider monkey, Ateles geoffroyi", "382) squirrel monkey, Saimiri sciureus", "383) Madagascar cat, ring-tailed lemur, Lemur catta", "384) indri, indris, Indri indri, Indri brevicaudatus", "385) Indian elephant, Elephas maximus", "386) African elephant, Loxodonta africana", "387) lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens", "388) giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca", "389) barracouta, snoek", "390) eel", "391) coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch", "392) rock beauty, Holocanthus tricolor", "393) anemone fish", "394) sturgeon", "395) gar, garfish, garpike, billfish, Lepisosteus osseus", "396) lionfish", "397) puffer, pufferfish, blowfish, globefish", "398) abacus", "399) abaya", "400) academic gown, academic robe, judge's robe", "401) accordion, piano accordion, squeeze box", "402) acoustic guitar", "403) aircraft carrier, carrier, flattop, attack aircraft carrier", "404) airliner", "405) airship, dirigible", "406) altar", "407) ambulance", "408) amphibian, amphibious vehicle", "409) analog clock", "410) apiary, bee house", "411) apron", "412) ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin", "413) assault rifle, assault gun", "414) 
backpack, back pack, knapsack, packsack, rucksack, haversack", "415) bakery, bakeshop, bakehouse", "416) balance beam, beam", "417) balloon", "418) ballpoint, ballpoint pen, ballpen, Biro", "419) Band Aid", "420) banjo", "421) bannister, banister, balustrade, balusters, handrail", "422) barbell", "423) barber chair", "424) barbershop", "425) barn", "426) barometer", "427) barrel, cask", "428) barrow, garden cart, lawn cart, wheelbarrow", "429) baseball", "430) basketball", "431) bassinet", "432) bassoon", "433) bathing cap, swimming cap", "434) bath towel", "435) bathtub, bathing tub, bath, tub", "436) beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon", "437) beacon, lighthouse, beacon light, pharos", "438) beaker", "439) bearskin, busby, shako", "440) beer bottle", "441) beer glass", "442) bell cote, bell cot", "443) bib", "444) bicycle-built-for-two, tandem bicycle, tandem", "445) bikini, two-piece", "446) binder, ring-binder", "447) binoculars, field glasses, opera glasses", "448) birdhouse", "449) boathouse", "450) bobsled, bobsleigh, bob", "451) bolo tie, bolo, bola tie, bola", "452) bonnet, poke bonnet", "453) bookcase", "454) bookshop, bookstore, bookstall", "455) bottlecap", "456) bow", "457) bow tie, bow-tie, bowtie", "458) brass, memorial tablet, plaque", "459) brassiere, bra, bandeau", "460) breakwater, groin, groyne, mole, bulwark, seawall, jetty", "461) breastplate, aegis, egis", "462) broom", "463) bucket, pail", "464) buckle", "465) bulletproof vest", "466) bullet train, bullet", "467) butcher shop, meat market", "468) cab, hack, taxi, taxicab", "469) caldron, cauldron", "470) candle, taper, wax light", "471) cannon", "472) canoe", "473) can opener, tin opener", "474) cardigan", "475) car mirror", "476) carousel, carrousel, merry-go-round, roundabout, whirligig", "477) carpenter's kit, tool kit", "478) carton", "479) car wheel", "480) cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM", "481) cassette", "482) cassette player", "483) castle", "484) catamaran", "485) CD player", "486) cello, violoncello", "487) cellular telephone, cellular phone, cellphone, cell, mobile phone", "488) chain", "489) chainlink fence", "490) chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour", "491) chain saw, chainsaw", "492) chest", "493) chiffonier, commode", "494) chime, bell, gong", "495) china cabinet, china closet", "496) Christmas stocking", "497) church, church building", "498) cinema, movie theater, movie theatre, movie house, picture palace", "499) cleaver, meat cleaver, chopper", "500) cliff dwelling", "501) cloak", "502) clog, geta, patten, sabot", "503) cocktail shaker", "504) coffee mug", "505) coffeepot", "506) coil, spiral, volute, whorl, helix", "507) combination lock", "508) computer keyboard, keypad", "509) confectionery, confectionary, candy store", "510) container ship, containership, container vessel", "511) convertible", "512) corkscrew, bottle screw", "513) cornet, horn, trumpet, trump", "514) cowboy boot", "515) cowboy hat, ten-gallon hat", "516) cradle", "517) crane", "518) crash helmet", "519) crate", "520) crib, cot", "521) Crock Pot", "522) croquet ball", "523) crutch", "524) cuirass", "525) dam, dike, dyke", "526) desk", "527) desktop computer", "528) dial telephone, dial phone", "529) diaper, nappy, napkin", "530) digital clock", "531) digital watch", "532) dining table, board", "533) dishrag, dishcloth", "534) dishwasher, dish 
washer, dishwashing machine", "535) disk brake, disc brake", "536) dock, dockage, docking facility", "537) dogsled, dog sled, dog sleigh", "538) dome", "539) doormat, welcome mat", "540) drilling platform, offshore rig", "541) drum, membranophone, tympan", "542) drumstick", "543) dumbbell", "544) Dutch oven", "545) electric fan, blower", "546) electric guitar", "547) electric locomotive", "548) entertainment center", "549) envelope", "550) espresso maker", "551) face powder", "552) feather boa, boa", "553) file, file cabinet, filing cabinet", "554) fireboat", "555) fire engine, fire truck", "556) fire screen, fireguard", "557) flagpole, flagstaff", "558) flute, transverse flute", "559) folding chair", "560) football helmet", "561) forklift", "562) fountain", "563) fountain pen", "564) four-poster", "565) freight car", "566) French horn, horn", "567) frying pan, frypan, skillet", "568) fur coat", "569) garbage truck, dustcart", "570) gasmask, respirator, gas helmet", "571) gas pump, gasoline pump, petrol pump, island dispenser", "572) goblet", "573) go-kart", "574) golf ball", "575) golfcart, golf cart", "576) gondola", "577) gong, tam-tam", "578) gown", "579) grand piano, grand", "580) greenhouse, nursery, glasshouse", "581) grille, radiator grille", "582) grocery store, grocery, food market, market", "583) guillotine", "584) hair slide", "585) hair spray", "586) half track", "587) hammer", "588) hamper", "589) hand blower, blow dryer, blow drier, hair dryer, hair drier", "590) hand-held computer, hand-held microcomputer", "591) handkerchief, hankie, hanky, hankey", "592) hard disc, hard disk, fixed disk", "593) harmonica, mouth organ, harp, mouth harp", "594) harp", "595) harvester, reaper", "596) hatchet", "597) holster", "598) home theater, home theatre", "599) honeycomb", "600) hook, claw", "601) hoopskirt, crinoline", "602) horizontal bar, high bar", "603) horse cart, horse-cart", "604) hourglass", "605) iPod", "606) iron, smoothing iron", "607) jack-o'-lantern", "608) jean, blue jean, denim", "609) jeep, landrover", "610) jersey, T-shirt, tee shirt", "611) jigsaw puzzle", "612) jinrikisha, ricksha, rickshaw", "613) joystick", "614) kimono", "615) knee pad", "616) knot", "617) lab coat, laboratory coat", "618) ladle", "619) lampshade, lamp shade", "620) laptop, laptop computer", "621) lawn mower, mower", "622) lens cap, lens cover", "623) letter opener, paper knife, paperknife", "624) library", "625) lifeboat", "626) lighter, light, igniter, ignitor", "627) limousine, limo", "628) liner, ocean liner", "629) lipstick, lip rouge", "630) Loafer", "631) lotion", "632) loudspeaker, speaker, speaker unit, loudspeaker system, speaker system", "633) loupe, jeweler's loupe", "634) lumbermill, sawmill", "635) magnetic compass", "636) mailbag, postbag", "637) mailbox, letter box", "638) maillot", "639) maillot, tank suit", "640) manhole cover", "641) maraca", "642) marimba, xylophone", "643) mask", "644) matchstick", "645) maypole", "646) maze, labyrinth", "647) measuring cup", "648) medicine chest, medicine cabinet", "649) megalith, megalithic structure", "650) microphone, mike", "651) microwave, microwave oven", "652) military uniform", "653) milk can", "654) minibus", "655) miniskirt, mini", "656) minivan", "657) missile", "658) mitten", "659) mixing bowl", "660) mobile home, manufactured home", "661) Model T", "662) modem", "663) monastery", "664) monitor", "665) moped", "666) mortar", "667) mortarboard", "668) mosque", "669) mosquito net", "670) motor scooter, scooter", "671) mountain bike, 
all-terrain bike, off-roader", "672) mountain tent", "673) mouse, computer mouse", "674) mousetrap", "675) moving van", "676) muzzle", "677) nail", "678) neck brace", "679) necklace", "680) nipple", "681) notebook, notebook computer", "682) obelisk", "683) oboe, hautboy, hautbois", "684) ocarina, sweet potato", "685) odometer, hodometer, mileometer, milometer", "686) oil filter", "687) organ, pipe organ", "688) oscilloscope, scope, cathode-ray oscilloscope, CRO", "689) overskirt", "690) oxcart", "691) oxygen mask", "692) packet", "693) paddle, boat paddle", "694) paddlewheel, paddle wheel", "695) padlock", "696) paintbrush", "697) pajama, pyjama, pj's, jammies", "698) palace", "699) panpipe, pandean pipe, syrinx", "700) paper towel", "701) parachute, chute", "702) parallel bars, bars", "703) park bench", "704) parking meter", "705) passenger car, coach, carriage", "706) patio, terrace", "707) pay-phone, pay-station", "708) pedestal, plinth, footstall", "709) pencil box, pencil case", "710) pencil sharpener", "711) perfume, essence", "712) Petri dish", "713) photocopier", "714) pick, plectrum, plectron", "715) pickelhaube", "716) picket fence, paling", "717) pickup, pickup truck", "718) pier", "719) piggy bank, penny bank", "720) pill bottle", "721) pillow", "722) ping-pong ball", "723) pinwheel", "724) pirate, pirate ship", "725) pitcher, ewer", "726) plane, carpenter's plane, woodworking plane", "727) planetarium", "728) plastic bag", "729) plate rack", "730) plow, plough", "731) plunger, plumber's helper", "732) Polaroid camera, Polaroid Land camera", "733) pole", "734) police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria", "735) poncho", "736) pool table, billiard table, snooker table", "737) pop bottle, soda bottle", "738) pot, flowerpot", "739) potter's wheel", "740) power drill", "741) prayer rug, prayer mat", "742) printer", "743) prison, prison house", "744) projectile, missile", "745) projector", "746) puck, hockey puck", "747) punching bag, punch bag, punching ball, punchball", "748) purse", "749) quill, quill pen", "750) quilt, comforter, comfort, puff", "751) racer, race car, racing car", "752) racket, racquet", "753) radiator", "754) radio, wireless", "755) radio telescope, radio reflector", "756) rain barrel", "757) recreational vehicle, RV, R.V.", "758) reel", "759) reflex camera", "760) refrigerator, icebox", "761) remote control, remote", "762) restaurant, eating house, eating place, eatery", "763) revolver, six-gun, six-shooter", "764) rifle", "765) rocking chair, rocker", "766) rotisserie", "767) rubber eraser, rubber, pencil eraser", "768) rugby ball", "769) rule, ruler", "770) running shoe", "771) safe", "772) safety pin", "773) saltshaker, salt shaker", "774) sandal", "775) sarong", "776) sax, saxophone", "777) scabbard", "778) scale, weighing machine", "779) school bus", "780) schooner", "781) scoreboard", "782) screen, CRT screen", "783) screw", "784) screwdriver", "785) seat belt, seatbelt", "786) sewing machine", "787) shield, buckler", "788) shoe shop, shoe-shop, shoe store", "789) shoji", "790) shopping basket", "791) shopping cart", "792) shovel", "793) shower cap", "794) shower curtain", "795) ski", "796) ski mask", "797) sleeping bag", "798) slide rule, slipstick", "799) sliding door", "800) slot, one-armed bandit", "801) snorkel", "802) snowmobile", "803) snowplow, snowplough", "804) soap dispenser", "805) soccer ball", "806) sock", "807) solar dish, solar collector, solar furnace", "808) sombrero", "809) soup bowl", "810) space bar", 
"811) space heater", "812) space shuttle", "813) spatula", "814) speedboat", "815) spider web, spider's web", "816) spindle", "817) sports car, sport car", "818) spotlight, spot", "819) stage", "820) steam locomotive", "821) steel arch bridge", "822) steel drum", "823) stethoscope", "824) stole", "825) stone wall", "826) stopwatch, stop watch", "827) stove", "828) strainer", "829) streetcar, tram, tramcar, trolley, trolley car", "830) stretcher", "831) studio couch, day bed", "832) stupa, tope", "833) submarine, pigboat, sub, U-boat", "834) suit, suit of clothes", "835) sundial", "836) sunglass", "837) sunglasses, dark glasses, shades", "838) sunscreen, sunblock, sun blocker", "839) suspension bridge", "840) swab, swob, mop", "841) sweatshirt", "842) swimming trunks, bathing trunks", "843) swing", "844) switch, electric switch, electrical switch", "845) syringe", "846) table lamp", "847) tank, army tank, armored combat vehicle, armoured combat vehicle", "848) tape player", "849) teapot", "850) teddy, teddy bear", "851) television, television system", "852) tennis ball", "853) thatch, thatched roof", "854) theater curtain, theatre curtain", "855) thimble", "856) thresher, thrasher, threshing machine", "857) throne", "858) tile roof", "859) toaster", "860) tobacco shop, tobacconist shop, tobacconist", "861) toilet seat", "862) torch", "863) totem pole", "864) tow truck, tow car, wrecker", "865) toyshop", "866) tractor", "867) trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi", "868) tray", "869) trench coat", "870) tricycle, trike, velocipede", "871) trimaran", "872) tripod", "873) triumphal arch", "874) trolleybus, trolley coach, trackless trolley", "875) trombone", "876) tub, vat", "877) turnstile", "878) typewriter keyboard", "879) umbrella", "880) unicycle, monocycle", "881) upright, upright piano", "882) vacuum, vacuum cleaner", "883) vase", "884) vault", "885) velvet", "886) vending machine", "887) vestment", "888) viaduct", "889) violin, fiddle", "890) volleyball", "891) waffle iron", "892) wall clock", "893) wallet, billfold, notecase, pocketbook", "894) wardrobe, closet, press", "895) warplane, military plane", "896) washbasin, handbasin, washbowl, lavabo, wash-hand basin", "897) washer, automatic washer, washing machine", "898) water bottle", "899) water jug", "900) water tower", "901) whiskey jug", "902) whistle", "903) wig", "904) window screen", "905) window shade", "906) Windsor tie", "907) wine bottle", "908) wing", "909) wok", "910) wooden spoon", "911) wool, woolen, woollen", "912) worm fence, snake fence, snake-rail fence, Virginia fence", "913) wreck", "914) yawl", "915) yurt", "916) web site, website, internet site, site", "917) comic book", "918) crossword puzzle, crossword", "919) street sign", "920) traffic light, traffic signal, stoplight", "921) book jacket, dust cover, dust jacket, dust wrapper", "922) menu", "923) plate", "924) guacamole", "925) consomme", "926) hot pot, hotpot", "927) trifle", "928) ice cream, icecream", "929) ice lolly, lolly, lollipop, popsicle", "930) French loaf", "931) bagel, beigel", "932) pretzel", "933) cheeseburger", "934) hotdog, hot dog, red hot", "935) mashed potato", "936) head cabbage", "937) broccoli", "938) cauliflower", "939) zucchini, courgette", "940) spaghetti squash", "941) acorn squash", "942) butternut squash", "943) cucumber, cuke", "944) artichoke, globe artichoke", "945) bell pepper", "946) cardoon", "947) mushroom", "948) Granny Smith", "949) strawberry", "950) orange", "951) lemon", "952) fig", 
"953) pineapple, ananas", "954) banana", "955) jackfruit, jak, jack", "956) custard apple", "957) pomegranate", "958) hay", "959) carbonara", "960) chocolate sauce, chocolate syrup", "961) dough", "962) meat loaf, meatloaf", "963) pizza, pizza pie", "964) potpie", "965) burrito", "966) red wine", "967) espresso", "968) cup", "969) eggnog", "970) alp", "971) bubble", "972) cliff, drop, drop-off", "973) coral reef", "974) geyser", "975) lakeside, lakeshore", "976) promontory, headland, head, foreland", "977) sandbar, sand bar", "978) seashore, coast, seacoast, sea-coast", "979) valley, vale", "980) volcano", "981) ballplayer, baseball player", "982) groom, bridegroom", "983) scuba diver", "984) rapeseed", "985) daisy", "986) yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum", "987) corn", "988) acorn", "989) hip, rose hip, rosehip", "990) buckeye, horse chestnut, conker", "991) coral fungus", "992) agaric", "993) gyromitra", "994) stinkhorn, carrion fungus", "995) earthstar", "996) hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa", "997) bolete", "998) ear, spike, capitulum", "999) toilet tissue, toilet paper, bathroom tissue"] Truncation = 1 #@param {type:"slider", min:0.02, max:1, step:0.02} # Set number of samples num_samples = 4 # Create the noise vector with truncation (you'll learn about this later!) noise_vector = truncated_noise_vector(num_samples, Truncation) # Select the class to generate label = int(Class.split(')')[0]) # Sample the images with the noise vector and label as inputs ims = sample(sess, noise_vector, label, truncation=Truncation) # Display generated images imshow(imgrid(ims, cols=min(num_samples, 5))) #@title Beyond images #@markdown Generative models can also generate non-images, like this melody. This melody comes from a generative model type called a variational autoencoder, or VAE for short. VAEs are different from GANs, which you don't have to worry about now. Just note that there are different types of generative models, not just GANs! from IPython.display import YouTubeVideo YouTubeVideo('G5JT16flZwM') ```
```
!git clone https://github.com/NVlabs/stylegan.git %tensorflow_version 1.x # Import needed Python libraries import os import pickle import warnings import numpy as np import PIL from tensorflow.python.util import module_wrapper module_wrapper._PER_MODULE_WARNING_LIMIT = 0 # Import the official StyleGAN repo import stylegan from stylegan.dnnlib import tflib from stylegan import config # Initialize TensorFlow tflib.init_tf() # Move into the StyleGAN directory, if you're not in it already path = 'stylegan/' if "stylegan" not in os.getcwd(): os.chdir(path) # Load pre-trained StyleGAN network url = 'https://bitbucket.org/ezelikman/gans/downloads/karras2019stylegan-ffhq-1024x1024.pkl' # karras2019stylegan-ffhq-1024x1024.pkl with stylegan.dnnlib.util.open_url(url, cache_dir=stylegan.config.cache_dir) as f: # You'll load 3 components, and use the last one Gs for sampling images. # _G = Instantaneous snapshot of the generator. Mainly useful for resuming a previous training run. # _D = Instantaneous snapshot of the discriminator. Mainly useful for resuming a previous training run. # Gs = Long-term average of the generator. Yields higher-quality results than the instantaneous snapshot. _G, _D, Gs = pickle.load(f) print('StyleGAN package loaded successfully!') #@title Generate faces with StyleGAN #@markdown Double click here to see the code. After setting truncation, run the cells below to generate images. This adjusts the truncation, you will learn more about this soon! Truncation trades off fidelity (quality) and diversity of the generated images - play with it! Truncation = 0.7 #@param {type:"slider", min:0.1, max:1, step:0.1} print(f'Truncation set to {Truncation}. \nNow run the cells below to generate images with this truncation value.') # Set the random state. Nothing special about 42, # except that it's the meaning of life. rnd = np.random.RandomState(42) print(f'Random state is set.') batch_size = 4 #@param {type:"slider", min:1, max:10, step:1} print(f'Batch size is {batch_size}...') input_shape = Gs.input_shape[1] noise_vectors = rnd.randn(batch_size, input_shape) print(f'There are {noise_vectors.shape[0]} noise vectors, each with {noise_vectors.shape[1]} random values between -{Truncation} and {Truncation}.') fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True) images = Gs.run(noise_vectors, None, truncation_psi=Truncation, randomize_noise=False, output_transform=fmt) print(f'Successfully sampled {batch_size} images from the model.') # Save the images os.makedirs(config.result_dir, exist_ok=True) png_filename = os.path.join(config.result_dir, 'stylegan-example.png') if batch_size > 1: img = np.concatenate(images, axis=1) else: img = images[0] PIL.Image.fromarray(img, 'RGB').save(png_filename) # Check the images out! 
from IPython.display import Image Image(png_filename, width=256*batch_size, height=256) # Import Python packages import numpy as np import os from io import StringIO from tqdm import tqdm from random import random from PIL import ImageFont, ImageDraw, ImageEnhance from scipy.stats import truncnorm from google.colab import files import IPython.display import tensorflow as tf import tensorflow_hub as hub print(f'Successfully imported packages.') # Load BigGAN from the official repo (Coursera: remove and load pkl file) # tf.reset_default_graph() module_path = 'https://tfhub.dev/deepmind/biggan-deep-256/1' print('Loading BigGAN module from:', module_path) module = hub.Module(module_path) inputs = {k: tf.placeholder(v.dtype, v.get_shape().as_list(), k) for k, v in module.get_input_info_dict().items()} output = module(inputs) print('Loaded the BigGAN module. Here are its input and outputs sizes:') print('Inputs:\n', '\n'.join( ' {}: {}'.format(*kv) for kv in inputs.items())) print('\nOutput:', output) # Get the different components of the input noise_vector = input_z = inputs['z'] label = input_y = inputs['y'] input_trunc = inputs['truncation'] # Get the sizes of the noise vector and the label noise_vector_size = input_z.shape.as_list()[1] label_size = input_y.shape.as_list()[1] print(f'Components of input are set.') print(f'Noise vector is size {noise_vector_size}. Label is size {label_size}.') # Function to truncate the noise vector def truncated_noise_vector(batch_size, truncation=1., seed=42): state = None if seed is None else np.random.RandomState(seed) values = truncnorm.rvs(-2, 2, size=(batch_size, noise_vector_size), random_state=state) return truncation * values print(f'Function declared.') def one_hot(label, label_size=label_size): ''' Function to turn label into a one-hot vector. This means that all values in the vector are 0, except one value that is 1, which represents the class label, e.g. [0 0 0 0 1 0 0]. ''' label = np.asarray(label) if len(label.shape) <= 1: index = label index = np.asarray(index) if len(index.shape) == 0: index = np.asarray([index]) assert len(index.shape) == 1 num = index.shape[0] label = np.zeros((num, label_size), dtype=np.float32) label[np.arange(num), index] = 1 assert len(label.shape) == 2 return label print(f'Function declared.') def sample(sess, noise, label, truncation=1., batch_size=8, label_size=label_size): ''' Function to sample images from the model. Inputs include the noise vector, label, truncation, and batch size (number of images to generate). ''' noise = np.asarray(noise) label = np.asarray(label) num = noise.shape[0] if len(label.shape) == 0: label = np.asarray([label] * num) if label.shape[0] != num: raise ValueError('Got # noise samples ({}) != # label samples ({})' .format(noise.shape[0], label.shape[0])) label = one_hot(label, label_size) ims = [] print(f"Generating images...") for batch_start in tqdm(range(0, num, batch_size)): s = slice(batch_start, min(num, batch_start + batch_size)) feed_dict = {input_z: noise[s], input_y: label[s], input_trunc: truncation} ims.append(sess.run(output, feed_dict=feed_dict)) ims = np.concatenate(ims, axis=0) assert ims.shape[0] == num ims = np.clip(((ims + 1) / 2.0) * 256, 0, 255) ims = np.uint8(ims) return ims print(f'Function declared.') ''' Functions for saving and visualizing images in a grid. 
''' def imgrid(imarray, cols=5, pad=1): if imarray.dtype != np.uint8: raise ValueError('imgrid input imarray must be uint8') pad = int(pad) assert pad >= 0 cols = int(cols) assert cols >= 1 N, H, W, C = imarray.shape rows = int(np.ceil(N / float(cols))) batch_pad = rows * cols - N assert batch_pad >= 0 post_pad = [batch_pad, pad, pad, 0] pad_arg = [[0, p] for p in post_pad] imarray = np.pad(imarray, pad_arg, 'constant', constant_values=255) H += pad W += pad grid = (imarray .reshape(rows, cols, H, W, C) .transpose(0, 2, 1, 3, 4) .reshape(rows*H, cols*W, C)) if pad: grid = grid[:-pad, :-pad] return grid def imshow(a, format='png', jpeg_fallback=True): a = np.asarray(a, dtype=np.uint8) path = 'results/biggan-example.png' img = PIL.Image.fromarray(a) img.save(path, format) try: disp = IPython.display.display(IPython.display.Image(path)) except IOError: if jpeg_fallback and format != 'jpeg': print ('Warning: image was too large to display in format "{}"; ' 'trying jpeg instead.').format(format) return imshow(a, format='jpeg') else: raise return disp print(f'Functions declared.') # Initialize TensorFlow initializer = tf.global_variables_initializer() sess = tf.Session() sess.run(initializer) print('TensorFlow initialized.') #@title Select the class and truncation { display-mode: "form", run: "auto" } #@markdown ##### The id next to each class is taken from ImageNet, a 1000-class dataset of that BigGAN was trained on. #@markdown ##### Double click to see all values in a code format. Class = "247) Saint Bernard, St Bernard" #@param ["0) tench, Tinca tinca", "1) goldfish, Carassius auratus", "2) great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias", "3) tiger shark, Galeocerdo cuvieri", "4) hammerhead, hammerhead shark", "5) electric ray, crampfish, numbfish, torpedo", "6) stingray", "7) cock", "8) hen", "9) ostrich, Struthio camelus", "10) brambling, Fringilla montifringilla", "11) goldfinch, Carduelis carduelis", "12) house finch, linnet, Carpodacus mexicanus", "13) junco, snowbird", "14) indigo bunting, indigo finch, indigo bird, Passerina cyanea", "15) robin, American robin, Turdus migratorius", "16) bulbul", "17) jay", "18) magpie", "19) chickadee", "20) water ouzel, dipper", "21) kite", "22) bald eagle, American eagle, Haliaeetus leucocephalus", "23) vulture", "24) great grey owl, great gray owl, Strix nebulosa", "25) European fire salamander, Salamandra salamandra", "26) common newt, Triturus vulgaris", "27) eft", "28) spotted salamander, Ambystoma maculatum", "29) axolotl, mud puppy, Ambystoma mexicanum", "30) bullfrog, Rana catesbeiana", "31) tree frog, tree-frog", "32) tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui", "33) loggerhead, loggerhead turtle, Caretta caretta", "34) leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea", "35) mud turtle", "36) terrapin", "37) box turtle, box tortoise", "38) banded gecko", "39) common iguana, iguana, Iguana iguana", "40) American chameleon, anole, Anolis carolinensis", "41) whiptail, whiptail lizard", "42) agama", "43) frilled lizard, Chlamydosaurus kingi", "44) alligator lizard", "45) Gila monster, Heloderma suspectum", "46) green lizard, Lacerta viridis", "47) African chameleon, Chamaeleo chamaeleon", "48) Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis", "49) African crocodile, Nile crocodile, Crocodylus niloticus", "50) American alligator, Alligator mississipiensis", "51) triceratops", "52) thunder snake, worm snake, Carphophis amoenus", "53) 
ringneck snake, ring-necked snake, ring snake", "54) hognose snake, puff adder, sand viper", "55) green snake, grass snake", "56) king snake, kingsnake", "57) garter snake, grass snake", "58) water snake", "59) vine snake", "60) night snake, Hypsiglena torquata", "61) boa constrictor, Constrictor constrictor", "62) rock python, rock snake, Python sebae", "63) Indian cobra, Naja naja", "64) green mamba", "65) sea snake", "66) horned viper, cerastes, sand viper, horned asp, Cerastes cornutus", "67) diamondback, diamondback rattlesnake, Crotalus adamanteus", "68) sidewinder, horned rattlesnake, Crotalus cerastes", "69) trilobite", "70) harvestman, daddy longlegs, Phalangium opilio", "71) scorpion", "72) black and gold garden spider, Argiope aurantia", "73) barn spider, Araneus cavaticus", "74) garden spider, Aranea diademata", "75) black widow, Latrodectus mactans", "76) tarantula", "77) wolf spider, hunting spider", "78) tick", "79) centipede", "80) black grouse", "81) ptarmigan", "82) ruffed grouse, partridge, Bonasa umbellus", "83) prairie chicken, prairie grouse, prairie fowl", "84) peacock", "85) quail", "86) partridge", "87) African grey, African gray, Psittacus erithacus", "88) macaw", "89) sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita", "90) lorikeet", "91) coucal", "92) bee eater", "93) hornbill", "94) hummingbird", "95) jacamar", "96) toucan", "97) drake", "98) red-breasted merganser, Mergus serrator", "99) goose", "100) black swan, Cygnus atratus", "101) tusker", "102) echidna, spiny anteater, anteater", "103) platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus", "104) wallaby, brush kangaroo", "105) koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus", "106) wombat", "107) jellyfish", "108) sea anemone, anemone", "109) brain coral", "110) flatworm, platyhelminth", "111) nematode, nematode worm, roundworm", "112) conch", "113) snail", "114) slug", "115) sea slug, nudibranch", "116) chiton, coat-of-mail shell, sea cradle, polyplacophore", "117) chambered nautilus, pearly nautilus, nautilus", "118) Dungeness crab, Cancer magister", "119) rock crab, Cancer irroratus", "120) fiddler crab", "121) king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica", "122) American lobster, Northern lobster, Maine lobster, Homarus americanus", "123) spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish", "124) crayfish, crawfish, crawdad, crawdaddy", "125) hermit crab", "126) isopod", "127) white stork, Ciconia ciconia", "128) black stork, Ciconia nigra", "129) spoonbill", "130) flamingo", "131) little blue heron, Egretta caerulea", "132) American egret, great white heron, Egretta albus", "133) bittern", "134) crane", "135) limpkin, Aramus pictus", "136) European gallinule, Porphyrio porphyrio", "137) American coot, marsh hen, mud hen, water hen, Fulica americana", "138) bustard", "139) ruddy turnstone, Arenaria interpres", "140) red-backed sandpiper, dunlin, Erolia alpina", "141) redshank, Tringa totanus", "142) dowitcher", "143) oystercatcher, oyster catcher", "144) pelican", "145) king penguin, Aptenodytes patagonica", "146) albatross, mollymawk", "147) grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus", "148) killer whale, killer, orca, grampus, sea wolf, Orcinus orca", "149) dugong, Dugong dugon", "150) sea lion", "151) Chihuahua", "152) Japanese spaniel", "153) Maltese dog, Maltese terrier, Maltese", "154) Pekinese, Pekingese, Peke", "155) 
Shih-Tzu", "156) Blenheim spaniel", "157) papillon", "158) toy terrier", "159) Rhodesian ridgeback", "160) Afghan hound, Afghan", "161) basset, basset hound", "162) beagle", "163) bloodhound, sleuthhound", "164) bluetick", "165) black-and-tan coonhound", "166) Walker hound, Walker foxhound", "167) English foxhound", "168) redbone", "169) borzoi, Russian wolfhound", "170) Irish wolfhound", "171) Italian greyhound", "172) whippet", "173) Ibizan hound, Ibizan Podenco", "174) Norwegian elkhound, elkhound", "175) otterhound, otter hound", "176) Saluki, gazelle hound", "177) Scottish deerhound, deerhound", "178) Weimaraner", "179) Staffordshire bullterrier, Staffordshire bull terrier", "180) American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier", "181) Bedlington terrier", "182) Border terrier", "183) Kerry blue terrier", "184) Irish terrier", "185) Norfolk terrier", "186) Norwich terrier", "187) Yorkshire terrier", "188) wire-haired fox terrier", "189) Lakeland terrier", "190) Sealyham terrier, Sealyham", "191) Airedale, Airedale terrier", "192) cairn, cairn terrier", "193) Australian terrier", "194) Dandie Dinmont, Dandie Dinmont terrier", "195) Boston bull, Boston terrier", "196) miniature schnauzer", "197) giant schnauzer", "198) standard schnauzer", "199) Scotch terrier, Scottish terrier, Scottie", "200) Tibetan terrier, chrysanthemum dog", "201) silky terrier, Sydney silky", "202) soft-coated wheaten terrier", "203) West Highland white terrier", "204) Lhasa, Lhasa apso", "205) flat-coated retriever", "206) curly-coated retriever", "207) golden retriever", "208) Labrador retriever", "209) Chesapeake Bay retriever", "210) German short-haired pointer", "211) vizsla, Hungarian pointer", "212) English setter", "213) Irish setter, red setter", "214) Gordon setter", "215) Brittany spaniel", "216) clumber, clumber spaniel", "217) English springer, English springer spaniel", "218) Welsh springer spaniel", "219) cocker spaniel, English cocker spaniel, cocker", "220) Sussex spaniel", "221) Irish water spaniel", "222) kuvasz", "223) schipperke", "224) groenendael", "225) malinois", "226) briard", "227) kelpie", "228) komondor", "229) Old English sheepdog, bobtail", "230) Shetland sheepdog, Shetland sheep dog, Shetland", "231) collie", "232) Border collie", "233) Bouvier des Flandres, Bouviers des Flandres", "234) Rottweiler", "235) German shepherd, German shepherd dog, German police dog, alsatian", "236) Doberman, Doberman pinscher", "237) miniature pinscher", "238) Greater Swiss Mountain dog", "239) Bernese mountain dog", "240) Appenzeller", "241) EntleBucher", "242) boxer", "243) bull mastiff", "244) Tibetan mastiff", "245) French bulldog", "246) Great Dane", "247) Saint Bernard, St Bernard", "248) Eskimo dog, husky", "249) malamute, malemute, Alaskan malamute", "250) Siberian husky", "251) dalmatian, coach dog, carriage dog", "252) affenpinscher, monkey pinscher, monkey dog", "253) basenji", "254) pug, pug-dog", "255) Leonberg", "256) Newfoundland, Newfoundland dog", "257) Great Pyrenees", "258) Samoyed, Samoyede", "259) Pomeranian", "260) chow, chow chow", "261) keeshond", "262) Brabancon griffon", "263) Pembroke, Pembroke Welsh corgi", "264) Cardigan, Cardigan Welsh corgi", "265) toy poodle", "266) miniature poodle", "267) standard poodle", "268) Mexican hairless", "269) timber wolf, grey wolf, gray wolf, Canis lupus", "270) white wolf, Arctic wolf, Canis lupus tundrarum", "271) red wolf, maned wolf, Canis rufus, Canis niger", "272) coyote, prairie 
wolf, brush wolf, Canis latrans", "273) dingo, warrigal, warragal, Canis dingo", "274) dhole, Cuon alpinus", "275) African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus", "276) hyena, hyaena", "277) red fox, Vulpes vulpes", "278) kit fox, Vulpes macrotis", "279) Arctic fox, white fox, Alopex lagopus", "280) grey fox, gray fox, Urocyon cinereoargenteus", "281) tabby, tabby cat", "282) tiger cat", "283) Persian cat", "284) Siamese cat, Siamese", "285) Egyptian cat", "286) cougar, puma, catamount, mountain lion, painter, panther, Felis concolor", "287) lynx, catamount", "288) leopard, Panthera pardus", "289) snow leopard, ounce, Panthera uncia", "290) jaguar, panther, Panthera onca, Felis onca", "291) lion, king of beasts, Panthera leo", "292) tiger, Panthera tigris", "293) cheetah, chetah, Acinonyx jubatus", "294) brown bear, bruin, Ursus arctos", "295) American black bear, black bear, Ursus americanus, Euarctos americanus", "296) ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus", "297) sloth bear, Melursus ursinus, Ursus ursinus", "298) mongoose", "299) meerkat, mierkat", "300) tiger beetle", "301) ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle", "302) ground beetle, carabid beetle", "303) long-horned beetle, longicorn, longicorn beetle", "304) leaf beetle, chrysomelid", "305) dung beetle", "306) rhinoceros beetle", "307) weevil", "308) fly", "309) bee", "310) ant, emmet, pismire", "311) grasshopper, hopper", "312) cricket", "313) walking stick, walkingstick, stick insect", "314) cockroach, roach", "315) mantis, mantid", "316) cicada, cicala", "317) leafhopper", "318) lacewing, lacewing fly", "319) dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk", "320) damselfly", "321) admiral", "322) ringlet, ringlet butterfly", "323) monarch, monarch butterfly, milkweed butterfly, Danaus plexippus", "324) cabbage butterfly", "325) sulphur butterfly, sulfur butterfly", "326) lycaenid, lycaenid butterfly", "327) starfish, sea star", "328) sea urchin", "329) sea cucumber, holothurian", "330) wood rabbit, cottontail, cottontail rabbit", "331) hare", "332) Angora, Angora rabbit", "333) hamster", "334) porcupine, hedgehog", "335) fox squirrel, eastern fox squirrel, Sciurus niger", "336) marmot", "337) beaver", "338) guinea pig, Cavia cobaya", "339) sorrel", "340) zebra", "341) hog, pig, grunter, squealer, Sus scrofa", "342) wild boar, boar, Sus scrofa", "343) warthog", "344) hippopotamus, hippo, river horse, Hippopotamus amphibius", "345) ox", "346) water buffalo, water ox, Asiatic buffalo, Bubalus bubalis", "347) bison", "348) ram, tup", "349) bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis", "350) ibex, Capra ibex", "351) hartebeest", "352) impala, Aepyceros melampus", "353) gazelle", "354) Arabian camel, dromedary, Camelus dromedarius", "355) llama", "356) weasel", "357) mink", "358) polecat, fitch, foulmart, foumart, Mustela putorius", "359) black-footed ferret, ferret, Mustela nigripes", "360) otter", "361) skunk, polecat, wood pussy", "362) badger", "363) armadillo", "364) three-toed sloth, ai, Bradypus tridactylus", "365) orangutan, orang, orangutang, Pongo pygmaeus", "366) gorilla, Gorilla gorilla", "367) chimpanzee, chimp, Pan troglodytes", "368) gibbon, Hylobates lar", "369) siamang, Hylobates syndactylus, Symphalangus syndactylus", "370) guenon, guenon monkey", "371) patas, hussar monkey, Erythrocebus patas", "372) baboon", "373) macaque", "374) 
langur", "375) colobus, colobus monkey", "376) proboscis monkey, Nasalis larvatus", "377) marmoset", "378) capuchin, ringtail, Cebus capucinus", "379) howler monkey, howler", "380) titi, titi monkey", "381) spider monkey, Ateles geoffroyi", "382) squirrel monkey, Saimiri sciureus", "383) Madagascar cat, ring-tailed lemur, Lemur catta", "384) indri, indris, Indri indri, Indri brevicaudatus", "385) Indian elephant, Elephas maximus", "386) African elephant, Loxodonta africana", "387) lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens", "388) giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca", "389) barracouta, snoek", "390) eel", "391) coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch", "392) rock beauty, Holocanthus tricolor", "393) anemone fish", "394) sturgeon", "395) gar, garfish, garpike, billfish, Lepisosteus osseus", "396) lionfish", "397) puffer, pufferfish, blowfish, globefish", "398) abacus", "399) abaya", "400) academic gown, academic robe, judge's robe", "401) accordion, piano accordion, squeeze box", "402) acoustic guitar", "403) aircraft carrier, carrier, flattop, attack aircraft carrier", "404) airliner", "405) airship, dirigible", "406) altar", "407) ambulance", "408) amphibian, amphibious vehicle", "409) analog clock", "410) apiary, bee house", "411) apron", "412) ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin", "413) assault rifle, assault gun", "414) backpack, back pack, knapsack, packsack, rucksack, haversack", "415) bakery, bakeshop, bakehouse", "416) balance beam, beam", "417) balloon", "418) ballpoint, ballpoint pen, ballpen, Biro", "419) Band Aid", "420) banjo", "421) bannister, banister, balustrade, balusters, handrail", "422) barbell", "423) barber chair", "424) barbershop", "425) barn", "426) barometer", "427) barrel, cask", "428) barrow, garden cart, lawn cart, wheelbarrow", "429) baseball", "430) basketball", "431) bassinet", "432) bassoon", "433) bathing cap, swimming cap", "434) bath towel", "435) bathtub, bathing tub, bath, tub", "436) beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon", "437) beacon, lighthouse, beacon light, pharos", "438) beaker", "439) bearskin, busby, shako", "440) beer bottle", "441) beer glass", "442) bell cote, bell cot", "443) bib", "444) bicycle-built-for-two, tandem bicycle, tandem", "445) bikini, two-piece", "446) binder, ring-binder", "447) binoculars, field glasses, opera glasses", "448) birdhouse", "449) boathouse", "450) bobsled, bobsleigh, bob", "451) bolo tie, bolo, bola tie, bola", "452) bonnet, poke bonnet", "453) bookcase", "454) bookshop, bookstore, bookstall", "455) bottlecap", "456) bow", "457) bow tie, bow-tie, bowtie", "458) brass, memorial tablet, plaque", "459) brassiere, bra, bandeau", "460) breakwater, groin, groyne, mole, bulwark, seawall, jetty", "461) breastplate, aegis, egis", "462) broom", "463) bucket, pail", "464) buckle", "465) bulletproof vest", "466) bullet train, bullet", "467) butcher shop, meat market", "468) cab, hack, taxi, taxicab", "469) caldron, cauldron", "470) candle, taper, wax light", "471) cannon", "472) canoe", "473) can opener, tin opener", "474) cardigan", "475) car mirror", "476) carousel, carrousel, merry-go-round, roundabout, whirligig", "477) carpenter's kit, tool kit", "478) carton", "479) car wheel", "480) cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM", "481) 
cassette", "482) cassette player", "483) castle", "484) catamaran", "485) CD player", "486) cello, violoncello", "487) cellular telephone, cellular phone, cellphone, cell, mobile phone", "488) chain", "489) chainlink fence", "490) chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour", "491) chain saw, chainsaw", "492) chest", "493) chiffonier, commode", "494) chime, bell, gong", "495) china cabinet, china closet", "496) Christmas stocking", "497) church, church building", "498) cinema, movie theater, movie theatre, movie house, picture palace", "499) cleaver, meat cleaver, chopper", "500) cliff dwelling", "501) cloak", "502) clog, geta, patten, sabot", "503) cocktail shaker", "504) coffee mug", "505) coffeepot", "506) coil, spiral, volute, whorl, helix", "507) combination lock", "508) computer keyboard, keypad", "509) confectionery, confectionary, candy store", "510) container ship, containership, container vessel", "511) convertible", "512) corkscrew, bottle screw", "513) cornet, horn, trumpet, trump", "514) cowboy boot", "515) cowboy hat, ten-gallon hat", "516) cradle", "517) crane", "518) crash helmet", "519) crate", "520) crib, cot", "521) Crock Pot", "522) croquet ball", "523) crutch", "524) cuirass", "525) dam, dike, dyke", "526) desk", "527) desktop computer", "528) dial telephone, dial phone", "529) diaper, nappy, napkin", "530) digital clock", "531) digital watch", "532) dining table, board", "533) dishrag, dishcloth", "534) dishwasher, dish washer, dishwashing machine", "535) disk brake, disc brake", "536) dock, dockage, docking facility", "537) dogsled, dog sled, dog sleigh", "538) dome", "539) doormat, welcome mat", "540) drilling platform, offshore rig", "541) drum, membranophone, tympan", "542) drumstick", "543) dumbbell", "544) Dutch oven", "545) electric fan, blower", "546) electric guitar", "547) electric locomotive", "548) entertainment center", "549) envelope", "550) espresso maker", "551) face powder", "552) feather boa, boa", "553) file, file cabinet, filing cabinet", "554) fireboat", "555) fire engine, fire truck", "556) fire screen, fireguard", "557) flagpole, flagstaff", "558) flute, transverse flute", "559) folding chair", "560) football helmet", "561) forklift", "562) fountain", "563) fountain pen", "564) four-poster", "565) freight car", "566) French horn, horn", "567) frying pan, frypan, skillet", "568) fur coat", "569) garbage truck, dustcart", "570) gasmask, respirator, gas helmet", "571) gas pump, gasoline pump, petrol pump, island dispenser", "572) goblet", "573) go-kart", "574) golf ball", "575) golfcart, golf cart", "576) gondola", "577) gong, tam-tam", "578) gown", "579) grand piano, grand", "580) greenhouse, nursery, glasshouse", "581) grille, radiator grille", "582) grocery store, grocery, food market, market", "583) guillotine", "584) hair slide", "585) hair spray", "586) half track", "587) hammer", "588) hamper", "589) hand blower, blow dryer, blow drier, hair dryer, hair drier", "590) hand-held computer, hand-held microcomputer", "591) handkerchief, hankie, hanky, hankey", "592) hard disc, hard disk, fixed disk", "593) harmonica, mouth organ, harp, mouth harp", "594) harp", "595) harvester, reaper", "596) hatchet", "597) holster", "598) home theater, home theatre", "599) honeycomb", "600) hook, claw", "601) hoopskirt, crinoline", "602) horizontal bar, high bar", "603) horse cart, horse-cart", "604) hourglass", "605) iPod", "606) iron, smoothing iron", "607) jack-o'-lantern", "608) jean, blue jean, denim", "609) jeep, 
landrover", "610) jersey, T-shirt, tee shirt", "611) jigsaw puzzle", "612) jinrikisha, ricksha, rickshaw", "613) joystick", "614) kimono", "615) knee pad", "616) knot", "617) lab coat, laboratory coat", "618) ladle", "619) lampshade, lamp shade", "620) laptop, laptop computer", "621) lawn mower, mower", "622) lens cap, lens cover", "623) letter opener, paper knife, paperknife", "624) library", "625) lifeboat", "626) lighter, light, igniter, ignitor", "627) limousine, limo", "628) liner, ocean liner", "629) lipstick, lip rouge", "630) Loafer", "631) lotion", "632) loudspeaker, speaker, speaker unit, loudspeaker system, speaker system", "633) loupe, jeweler's loupe", "634) lumbermill, sawmill", "635) magnetic compass", "636) mailbag, postbag", "637) mailbox, letter box", "638) maillot", "639) maillot, tank suit", "640) manhole cover", "641) maraca", "642) marimba, xylophone", "643) mask", "644) matchstick", "645) maypole", "646) maze, labyrinth", "647) measuring cup", "648) medicine chest, medicine cabinet", "649) megalith, megalithic structure", "650) microphone, mike", "651) microwave, microwave oven", "652) military uniform", "653) milk can", "654) minibus", "655) miniskirt, mini", "656) minivan", "657) missile", "658) mitten", "659) mixing bowl", "660) mobile home, manufactured home", "661) Model T", "662) modem", "663) monastery", "664) monitor", "665) moped", "666) mortar", "667) mortarboard", "668) mosque", "669) mosquito net", "670) motor scooter, scooter", "671) mountain bike, all-terrain bike, off-roader", "672) mountain tent", "673) mouse, computer mouse", "674) mousetrap", "675) moving van", "676) muzzle", "677) nail", "678) neck brace", "679) necklace", "680) nipple", "681) notebook, notebook computer", "682) obelisk", "683) oboe, hautboy, hautbois", "684) ocarina, sweet potato", "685) odometer, hodometer, mileometer, milometer", "686) oil filter", "687) organ, pipe organ", "688) oscilloscope, scope, cathode-ray oscilloscope, CRO", "689) overskirt", "690) oxcart", "691) oxygen mask", "692) packet", "693) paddle, boat paddle", "694) paddlewheel, paddle wheel", "695) padlock", "696) paintbrush", "697) pajama, pyjama, pj's, jammies", "698) palace", "699) panpipe, pandean pipe, syrinx", "700) paper towel", "701) parachute, chute", "702) parallel bars, bars", "703) park bench", "704) parking meter", "705) passenger car, coach, carriage", "706) patio, terrace", "707) pay-phone, pay-station", "708) pedestal, plinth, footstall", "709) pencil box, pencil case", "710) pencil sharpener", "711) perfume, essence", "712) Petri dish", "713) photocopier", "714) pick, plectrum, plectron", "715) pickelhaube", "716) picket fence, paling", "717) pickup, pickup truck", "718) pier", "719) piggy bank, penny bank", "720) pill bottle", "721) pillow", "722) ping-pong ball", "723) pinwheel", "724) pirate, pirate ship", "725) pitcher, ewer", "726) plane, carpenter's plane, woodworking plane", "727) planetarium", "728) plastic bag", "729) plate rack", "730) plow, plough", "731) plunger, plumber's helper", "732) Polaroid camera, Polaroid Land camera", "733) pole", "734) police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria", "735) poncho", "736) pool table, billiard table, snooker table", "737) pop bottle, soda bottle", "738) pot, flowerpot", "739) potter's wheel", "740) power drill", "741) prayer rug, prayer mat", "742) printer", "743) prison, prison house", "744) projectile, missile", "745) projector", "746) puck, hockey puck", "747) punching bag, punch bag, punching ball, punchball", 
"748) purse", "749) quill, quill pen", "750) quilt, comforter, comfort, puff", "751) racer, race car, racing car", "752) racket, racquet", "753) radiator", "754) radio, wireless", "755) radio telescope, radio reflector", "756) rain barrel", "757) recreational vehicle, RV, R.V.", "758) reel", "759) reflex camera", "760) refrigerator, icebox", "761) remote control, remote", "762) restaurant, eating house, eating place, eatery", "763) revolver, six-gun, six-shooter", "764) rifle", "765) rocking chair, rocker", "766) rotisserie", "767) rubber eraser, rubber, pencil eraser", "768) rugby ball", "769) rule, ruler", "770) running shoe", "771) safe", "772) safety pin", "773) saltshaker, salt shaker", "774) sandal", "775) sarong", "776) sax, saxophone", "777) scabbard", "778) scale, weighing machine", "779) school bus", "780) schooner", "781) scoreboard", "782) screen, CRT screen", "783) screw", "784) screwdriver", "785) seat belt, seatbelt", "786) sewing machine", "787) shield, buckler", "788) shoe shop, shoe-shop, shoe store", "789) shoji", "790) shopping basket", "791) shopping cart", "792) shovel", "793) shower cap", "794) shower curtain", "795) ski", "796) ski mask", "797) sleeping bag", "798) slide rule, slipstick", "799) sliding door", "800) slot, one-armed bandit", "801) snorkel", "802) snowmobile", "803) snowplow, snowplough", "804) soap dispenser", "805) soccer ball", "806) sock", "807) solar dish, solar collector, solar furnace", "808) sombrero", "809) soup bowl", "810) space bar", "811) space heater", "812) space shuttle", "813) spatula", "814) speedboat", "815) spider web, spider's web", "816) spindle", "817) sports car, sport car", "818) spotlight, spot", "819) stage", "820) steam locomotive", "821) steel arch bridge", "822) steel drum", "823) stethoscope", "824) stole", "825) stone wall", "826) stopwatch, stop watch", "827) stove", "828) strainer", "829) streetcar, tram, tramcar, trolley, trolley car", "830) stretcher", "831) studio couch, day bed", "832) stupa, tope", "833) submarine, pigboat, sub, U-boat", "834) suit, suit of clothes", "835) sundial", "836) sunglass", "837) sunglasses, dark glasses, shades", "838) sunscreen, sunblock, sun blocker", "839) suspension bridge", "840) swab, swob, mop", "841) sweatshirt", "842) swimming trunks, bathing trunks", "843) swing", "844) switch, electric switch, electrical switch", "845) syringe", "846) table lamp", "847) tank, army tank, armored combat vehicle, armoured combat vehicle", "848) tape player", "849) teapot", "850) teddy, teddy bear", "851) television, television system", "852) tennis ball", "853) thatch, thatched roof", "854) theater curtain, theatre curtain", "855) thimble", "856) thresher, thrasher, threshing machine", "857) throne", "858) tile roof", "859) toaster", "860) tobacco shop, tobacconist shop, tobacconist", "861) toilet seat", "862) torch", "863) totem pole", "864) tow truck, tow car, wrecker", "865) toyshop", "866) tractor", "867) trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi", "868) tray", "869) trench coat", "870) tricycle, trike, velocipede", "871) trimaran", "872) tripod", "873) triumphal arch", "874) trolleybus, trolley coach, trackless trolley", "875) trombone", "876) tub, vat", "877) turnstile", "878) typewriter keyboard", "879) umbrella", "880) unicycle, monocycle", "881) upright, upright piano", "882) vacuum, vacuum cleaner", "883) vase", "884) vault", "885) velvet", "886) vending machine", "887) vestment", "888) viaduct", "889) violin, fiddle", "890) volleyball", "891) waffle 
iron", "892) wall clock", "893) wallet, billfold, notecase, pocketbook", "894) wardrobe, closet, press", "895) warplane, military plane", "896) washbasin, handbasin, washbowl, lavabo, wash-hand basin", "897) washer, automatic washer, washing machine", "898) water bottle", "899) water jug", "900) water tower", "901) whiskey jug", "902) whistle", "903) wig", "904) window screen", "905) window shade", "906) Windsor tie", "907) wine bottle", "908) wing", "909) wok", "910) wooden spoon", "911) wool, woolen, woollen", "912) worm fence, snake fence, snake-rail fence, Virginia fence", "913) wreck", "914) yawl", "915) yurt", "916) web site, website, internet site, site", "917) comic book", "918) crossword puzzle, crossword", "919) street sign", "920) traffic light, traffic signal, stoplight", "921) book jacket, dust cover, dust jacket, dust wrapper", "922) menu", "923) plate", "924) guacamole", "925) consomme", "926) hot pot, hotpot", "927) trifle", "928) ice cream, icecream", "929) ice lolly, lolly, lollipop, popsicle", "930) French loaf", "931) bagel, beigel", "932) pretzel", "933) cheeseburger", "934) hotdog, hot dog, red hot", "935) mashed potato", "936) head cabbage", "937) broccoli", "938) cauliflower", "939) zucchini, courgette", "940) spaghetti squash", "941) acorn squash", "942) butternut squash", "943) cucumber, cuke", "944) artichoke, globe artichoke", "945) bell pepper", "946) cardoon", "947) mushroom", "948) Granny Smith", "949) strawberry", "950) orange", "951) lemon", "952) fig", "953) pineapple, ananas", "954) banana", "955) jackfruit, jak, jack", "956) custard apple", "957) pomegranate", "958) hay", "959) carbonara", "960) chocolate sauce, chocolate syrup", "961) dough", "962) meat loaf, meatloaf", "963) pizza, pizza pie", "964) potpie", "965) burrito", "966) red wine", "967) espresso", "968) cup", "969) eggnog", "970) alp", "971) bubble", "972) cliff, drop, drop-off", "973) coral reef", "974) geyser", "975) lakeside, lakeshore", "976) promontory, headland, head, foreland", "977) sandbar, sand bar", "978) seashore, coast, seacoast, sea-coast", "979) valley, vale", "980) volcano", "981) ballplayer, baseball player", "982) groom, bridegroom", "983) scuba diver", "984) rapeseed", "985) daisy", "986) yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum", "987) corn", "988) acorn", "989) hip, rose hip, rosehip", "990) buckeye, horse chestnut, conker", "991) coral fungus", "992) agaric", "993) gyromitra", "994) stinkhorn, carrion fungus", "995) earthstar", "996) hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa", "997) bolete", "998) ear, spike, capitulum", "999) toilet tissue, toilet paper, bathroom tissue"] Truncation = 1 #@param {type:"slider", min:0.02, max:1, step:0.02} # Set number of samples num_samples = 4 # Create the noise vector with truncation (you'll learn about this later!) noise_vector = truncated_noise_vector(num_samples, Truncation) # Select the class to generate label = int(Class.split(')')[0]) # Sample the images with the noise vector and label as inputs ims = sample(sess, noise_vector, label, truncation=Truncation) # Display generated images imshow(imgrid(ims, cols=min(num_samples, 5))) #@title Beyond images #@markdown Generative models can also generate non-images, like this melody. This melody comes from a generative model type called a variational autoencoder, or VAE for short. VAEs are different from GANs, which you don't have to worry about now. 
Just note that there are different types of generative models, not just GANs! from IPython.display import YouTubeVideo YouTubeVideo('G5JT16flZwM')
```
<center><FONT size="6pt" color=firebrick> Probability and statistics </FONT></center>

![extraitProgrammeStats](img/extraitProgrammeStats.png)

<center><FONT size="6pt" color=firebrick> Quick reference </FONT></center>

| Syntax | Purpose |
|:---------------------:|:------------------|
| | **simulating random variables** |
| import **numpy.random** as rd | imports the *random* submodule of Numpy, which contains the statistical functions |
| rd.random() | generates one random number in $[0;1]$ |
| rd.uniform($a$,$b$,$N$) | generates a collection of $N$ random numbers in $[a;b]$ drawn from a uniform law |
| rd.normal($moy$,$sigma$,$N$) | generates a collection of $N$ random numbers following a normal (= Gaussian) law with mean $moy$ and standard deviation $sigma$ |
| rd.randint($a$,$b$,$N$) | generates a collection of $N$ **integers** between $a$ (included) and $b$ (excluded) |
| | **plotting histograms** |
| import **matplotlib.pyplot** as plt | histogram plotting is a *pyplot* tool |
| plt.hist($L$, bins = 'rice') | plots the histogram of the collection of values $L$ (numpy array or Python list of values) <br> with automatic optimisation of the bin size (option bins = 'rice') |
| | **statistical analysis of data** |
| import **numpy** as np | the statistical tools are included in the Numpy module |
| np.mean(L) | computes the **mean** of the collection of values $L$ (numpy array or Python list of values) |
| np.std(L,ddof = 1) | computes an unbiased estimator of the **standard deviation** of the collection of values $L$ (numpy array or Python list of values) <br> the parameter *ddof = 1* makes it divide by $N-1$ instead of $N$ |
| | **affine (linear) regression** |
| $a$,$b$ = np.polyfit(xi,yi,deg = 1) | computes the parameters $a$,$b$ of the affine regression of the data yi = f(xi), i.e. the 'best' straight line of equation $Y=aX+b$ <br> describing the experimental points $(x_i,y_i)$. Do not forget the parameter $deg = 1$. |

# Importing the modules

All the statistical tools used in CPGE are contained in the *random* module of the Numpy library. They can therefore be accessed with the following import commands.

```
# 1st solution
import numpy as np     # creates an alias for the Numpy module
np.random.random()     # returns a random float uniformly distributed over the interval [0;1]

# 2nd solution
import numpy.random as rd  # creates an alias for the random submodule of Numpy
rd.random()                # this is the random() function of the random submodule of Numpy
```

## The random() function of the numpy.random module

The help for this function states that the *upper bound is excluded*. This is a detail: in practice everything behaves as if the draw were uniform over the closed interval $[0;1]$.

```
help(rd.random)  # display the help of the random() function of the numpy.random module
```

## How to create a list of random values uniformly distributed over the interval $[0;10]$?

Method 1: we fill a list by repeatedly appending a number drawn at random in the interval $[0;1]$ and multiplied by 10.

```
# Create N = 12 random values drawn uniformly in the interval [0;10]
N = 12
L1 = []
for k in range(N):   # for loop
    L1.append(rd.random()*10)
print(L1)
```

### Exercise N2 n°1: uniform draw in an arbitrary interval [a;b]

Modify the code below so that the 12 values are drawn uniformly at random in the interval $[a ; b]$. (Take $a=50$ and $b=80$.)

```
# Create N = 12 random values drawn uniformly in the interval [a;b]
N = 12
a, b = 50, 80   # bounds of the interval
L1 = []
for k in range(N):   # for loop
    L1.append(rd.random())   # LINE TO MODIFY
print(L1)

# Create N = 12 random values drawn uniformly in the interval [a;b]
N = 12
a, b = 50, 80   # bounds of the interval
L1 = []
for k in range(N):   # for loop
    L1.append(rd.random()*(b-a) + a)
print(L1)
```

Method 2: use the function ``rand()``, whose argument is the number of values to draw.

```
rd.rand(12)            # creates a Numpy array of 12 values drawn uniformly in the interval [0;1]
rd.rand(12)*(b-a)+a    # 12 random values drawn between the bounds a and b
```

Comparison of the two methods:
- The first method returns a Python list (this object does **not** support additions and/or multiplications by scalars).
- The second method returns an ``ndarray``, a "numpy array" that can be handled like a mathematical vector (or matrix): additions and multiplications by scalars are allowed.

Note: a Python list L of numerical values can always be converted into an ``ndarray`` object with the command

```
np.array(L)  # converts the Python list L into an object of type ndarray
```

```
print(L1)            # display the Python list of numerical values
L2 = np.array(L1)    # conversion of the Python list L1 into a "Numpy array"
print(L2)            # display the object of type "Numpy array"
```

# Drawing random integers

Here is how to obtain a random integer drawn uniformly between a = 5 (included) and b = 10 (excluded).

```
a, b = 5, 10
x = rd.randint(a,b)
print('randomly drawn number x = ', x)
```

WARNING: for integers, **always check in the function's specification** whether the bounds are INCLUDED or EXCLUDED. Depending on the module used, the upper bound is sometimes included and sometimes excluded.

```
L3 = []
for k in range(50):              # 50 values
    L3.append(rd.randint(5,10))  # append a random integer to the list L3
print(L3)  # one can check that NONE of the 50 drawn values equals 10

# Here is another syntax, using a 'list comprehension'
L4 = [rd.randint(5,10) for k in range(50)]
print(L4)
```

## The random library

This library ALSO has a ``randint()`` function, which does not behave the same way: this time the upper bound is included.

```
import random  # this is another random module, which is not part of the Numpy library
L5 = [random.randint(5,10) for k in range(50)]
print(L5)  # one can check that the value 10 is reached!
```

*NOTE:* whenever possible, working with numpy.random is recommended, but you must be able to adapt to the random library if asked to.

# Histograms

A **histogram** is a graphical representation used to visualise the distribution of a continuous variable by displaying it as columns. To plot a histogram we use the ``hist()`` function, which belongs to the matplotlib.pyplot module containing the graphics tools.
```
import matplotlib.pyplot as plt  # import the pyplot submodule of matplotlib

# Step 1: create a list of N random values uniformly distributed in the interval [100;150]
N = 10**4
L = [rd.random()*(150-100)+100 for k in range(N)]  # L is a Python list of 10000 floats

# Step 2: plot the histogram with the hist function
plt.hist(L, bins = 'rice')  # the 'rice' option automatically adjusts the size of the columns
plt.xlim([0,200])           # change the limits of the horizontal axis
plt.title("histogram of draws from a uniform distribution")
plt.xlabel('drawn values')
plt.ylabel('frequencies')
plt.show()
```

# Statistical estimators: mean, variance and standard deviation

An estimator is a function used to evaluate an unknown parameter of a probability law. The two estimators in our syllabus are:
- the mean, written $\overline{x}$,
- the standard deviation, written $\sigma_x$.

## The 'mean' estimator

As an example, take the data contained in the list ``L`` generated above from a uniform probability law on the interval $[100;150]$. The 'central value' of this law can be **estimated** from the $N$ **realisations** $\{x_k,\quad k=1\ldots N\}$. To do so, we compute the mean $\overline{x}$, which is the sum of the values divided by the number of values:

$$\overline{x}= \frac{1}{N} \sum_{k=1}^{N}x_k$$

Two methods for computing the mean of the $N$ values of the list ``L`` are given below.

```
# 1st method for computing the mean of the values of a list
moy = 0  # initialise the mean
for k in range(len(L)):  # loop over the indices of the list elements (len(L) returns the number of elements)
    moy = moy + L[k]     # at each iteration of the loop, add the k-th value of the list
moy = moy / len(L)       # divide the result by the number of elements of L
print('mean = ', moy)    # display the result

# 2nd method for computing the mean of the values of a list
print('mean = ', np.mean(L))  # the mean() function of the Numpy module gives the result directly
```

**Conclusion**

We observe that the mean of the N drawn values is "close" to the central value of the interval $[100;150]$. Thus the mean is a function of the $N$ realisations of the law, $\overline{x}=f(\{x_k\})$, which estimates the central value of the uniform law.

## The 'standard deviation' estimator

The spread of the values can be quantified by computing the standard deviation. By definition, the **standard deviation** $\sigma_x$ is the *square root of the mean of the squared deviation from the mean*; it is also called the *RMS* (**Root Mean Square**) value. An estimate of the standard deviation can therefore be computed as follows:

(1) Subtract from each value $x_k$ the mean $\overline{x}$ of the values of the list.

(2) Take the square of this deviation from the mean; $\left(x_k-\overline{x}\right)^2$ is a *quadratic* deviation.

(3) Take the mean of these quadratic deviations (also called the **variance**, written $V(x)$):

$$V(x)=\frac{1}{N}\sum_{k=1}^N \left(x_k-\overline{x}\right)^2$$

(4) Finally, take the square root of this result:

$$\sigma_x=\sqrt{\frac{1}{N}\sum_{k=1}^N \left(x_k-\overline{x}\right)^2}$$

Two methods for computing the standard deviation of the $N$ values of the list ``L`` are given below.

```
# 1st method for computing the standard deviation of the values of a list
moy = np.mean(L)  # compute the mean of the values of L (outside the loop!)
etype = 0         # initialise the standard deviation
for k in range(len(L)):     # loop over the indices of the list elements
    etype += (L[k]-moy)**2  # at each iteration of the loop, add the squared deviation from the mean
etype = etype / len(L)      # take the mean of these squared deviations
etype = etype**(1/2)        # take the square root of this mean
print('standard deviation = ', etype)  # display the result

# 2nd method for computing the standard deviation of the values of a list
print('standard deviation = ', np.std(L))  # call the std (standard deviation) function of the Numpy module
```

**Remark: unbiased standard deviation**

In probability theory (beyond the syllabus), one can show that the previous estimator is not optimal: it carries a **bias** that becomes more important as the number of samples $N$ gets smaller. This is why we use (unless stated otherwise) the following estimator, called the **unbiased estimator of the standard deviation**:

$$\sigma_x=\sqrt{\frac{1}{N-1}\sum_{k=1}^N \left(x_k-\overline{x}\right)^2}$$

This estimator is computed by calling Numpy's ``std()`` function with the parameter ``ddof = 1``. The ddof parameter stands for "Delta Degrees Of Freedom".

```
print('unbiased standard deviation = ', np.std(L,ddof = 1))  # unbiased estimator of the standard deviation
```

**Worth knowing**

For a uniform probability law on an interval $[a;b]$, the standard deviation equals the *half-width of the interval divided by the square root of 3*. One can check that the estimate obtained for the standard deviation is "close" to the theoretical value:

$$\frac{(b-a)/2}{\sqrt{3}}=\frac{25}{\sqrt{3}}\approx 14.43$$

## Exercise N2 n°1 (with correction)

a) Write the Python instructions that generate a list X of N random values drawn uniformly on the interval $[-2 ; 12]$.

b) Compute the mean and the standard deviation of the values of the list for $N=10$, $N=100$, $N=10^4$ and $N=10^6$.

c) Compare the results obtained with the theoretical values. Conclude.

**Correction**

```
# Question a, here the number of samples N is 10
N = 10**1
X = [np.random.random()*(12-(-2))-2 for k in range(N)]  # create the Python list
print('mean = ', np.mean(X),                         # compute the mean
      ' standard deviation = ', np.std(X,ddof = 1))  # compute the unbiased standard deviation

## Question b, here we loop over the values of N to consider the different cases
Nlist = [10, 100, 10**4, 10**6]
for N in Nlist:  # loop over the values of N in the list
    X = [np.random.random()*(12-(-2))-2 for k in range(N)]  # create the Python list of N values
    print('N = ', N,                                        # display the value of N
          '\t mean = ', np.mean(X),                         # compute the mean
          ' \t standard deviation = ', np.std(X,ddof = 1))  # compute the unbiased standard deviation

# Question c: comparison with the theoretical values
moyTheorique = (12+(-2))/2               # mean of the bounds of the interval
etypeTheorique = (12-(-2))/2/np.sqrt(3)  # half-width of the interval divided by the square root of 3
print('theoretical values: mean = ', moyTheorique, ' standard deviation = ', etypeTheorique)
```

Conclusion: we observe that the larger the number of samples $N$, the "closer" the estimators seem to be to the theoretical values.

## Simulating a uniform law (rectangular distribution)

In the previous paragraph we generated $N$ random values drawn from a uniform law by drawing a single value at a time (the ``random()`` function).
Le module ``random`` de Numpy contient la méthode ``uniform`` qui permet d'effectuer directement le tirage de $N$ valeurs sur un intervalle $[a;b]$. Attention, dans ce cas les valeurs générées sont de type ``nd.array`` (tableau Numpy) et ne sont plus une simple liste Python de valeurs comme c'était le cas dans le paragraphe précédent. ``` import numpy as np import matplotlib.pyplot as plt help(np.random.uniform) X1 = np.random.uniform(-2,12,10**4) # 10^4 valeurs dans l'intervalle [-2;12] plt.hist(X1,bins = 'rice') # affichage de l'histogramme, rice = ajustement automatique plt.xlim([-10,20]) # modification des limites de l'axe X plt.show() ``` ### Influence du nombre de "bins" d'un histogramme On peut spécifier le nombre de *bins* (= nb de classes, nb de "bacs") lors de la construction d'un histogramme. ``` plt.figure(1) plt.hist(X1,bins = 20) # affichage de l'histogramme, rice = ajustement automatique plt.xlim([-10,20]) # modification des limites de l'axe X plt.title('histogramme utilisant 20 classes') plt.show() plt.figure(2) plt.title('histogramme utilisant 200 classes') plt.hist(X1,bins = 200) # affichage de l'histogramme, rice = ajustement automatique plt.xlim([-10,20]) # modification des limites de l'axe X plt.show() ``` **En conclusion**, on voit que le nombre de "bins" doit être doit choisi de manière judicieuse pour représenter convenablement un échantillon de valeurs. L'option ``bins ='rice'`` fournit automatiquement une valeur généralement acceptable. ## Simulation d'une loi normale (type gaussienne) Une **variable aléatoire gaussienne** (ou normale) décrit un processus aléatoire dont la probabilité d'obtenir une valeur numérique entre $x$ et $x+\textrm{d}x$ est $f(x)\textrm{d}x$ où $f(x)$ est appelée *densité de probabilité normale* et est donnée par: $$f(x)=\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{\left(x-\mu\right)^2}{2\sigma^2}\right)$$ Les paramètres $\mu$ et $\sigma$ sont les deux paramètres de la loi normale qui sont appelés, respectivement, moyenne et écart-type. La courbe de cette densité de probabilité est appelée courbe de Gauss (ou *courbe en cloche*). Pour simuler une loi gaussienne, on peut utiliser la fonction ``normal()``. ``` N = 10**6 # nombre de valeurs X2 = np.random.normal(5,14/np.sqrt(3),N) # Loi normale (distribution gaussienne) # moyenne, écart-type, nombre de tirages) plt.hist(X2,bins='rice',color ='red') plt.plot() ``` ### Autre méthode pour simuler un tirage aléatoire A partir d'échantillons tirés selon une loi gaussienne de moyenne nulle et d'écart-type égal à un, il est possible **par multiplication et addition** d'obtenir des échantillons tirés selon une moyenne $\mu$ et un écart-type $\sigma$ quelconques. ``` N = 10**6 # nb d'échantillons ## Tirage de moyenne nulle et de variance unitaire X1 = np.random.normal(0,1,N) ## Obtenu d'un tirage de moyenne moy = 5. et d'écart type sigma =2. mu, sigma = 5., 2. X2 = X1*sigma + mu # multiplication par sigma et 'translation' de moy ``` De même, à partir d'un tirage uniforme sur l'intervalle $[0;1]$ il est possible d'obtenir un tirage uniforme sur l'intervalle $[a;b]$. ## O2 n°2 Exercice (corrigé) Compléter le script ci-dessous de manière à créer, à partir de la la liste d'échantillons *X1*, une liste d'échantillons *X2* tirés uniformément sur l'intervalle $I=[x_0-\Delta x;x_0+\Delta x]$ avec $x_0 = 12$ et $\Delta x = 0,25$. 
``` N = 50000 # nb d'échantillons X1 = np.random.uniform(0,1,N) # tirage uniforme sur [0;1] x0, deltax = 12., 0.25 X2 = # à compléter plt.hist(X2,bins='rice') plt.xlim([11,13]) # mise à l'échelle des x plt.show() ``` **Correction** La ligne correctement complétée est la suivante: ``` X2 = (X1-0.5)*2*deltax + x0 ``` Voici le raisonnement utilisé : - (X1-0.5) est une collection de valeurs centrées sur zéro, son étendue est $\pm 0,5$ autour de zéro, - on multiplie alors les valeurs par $2\Delta x$ pour avoir la bonne largeur d'intervalle, - enfin on applique une *translation* sur les valeurs en ajoutant la valeur centrale $x_0$ de l'intervalle. ``` N = 50000 # nb d'échantillons X1 = np.random.uniform(0,1,N) # tirage uniforme sur [0;1] x0, deltax = 12., 0.25 X2 = (X1-0.5)*2*deltax + x0 plt.hist(X2,bins='rice') plt.xlim([11,13]) # mise à l'échelle des x plt.show() ``` # Régression linéaire (ou régression affine) Supposons que l'on dispose de $N$ couples valeurs $(x_k,y_k)$ avec $k=1,\ldots,N$. Ces valeurs peuvent être représentées dans un plan en tant que nuage de points. On dit que l'on effectue une régression linéaire sur les données $\{(x_k,y_k) \textrm{ avec } k=1, \ldots ,N\}$ lorsque l'on cherche à faire passer une droite "au mieux par tous les points" comme cela est illustré sur la figure ci-dessous. ![regressionDroite](img/regressionDroite2.png) **Définition :** Un modèle de régression linéaire est un modèle de régression qui cherche à établir une relation linéaire entre une variable, dite expliquée noté Y, et une ou plusieurs variables, dites explicatives (notée X). ## La fonction polyfit de numpy Pour effectuer la régression linéaire, nous utiliserons la fonction ``polyfit()`` du module Numpy. ``` help(np.polyfit) # polyfit(xData,yData,degreDuPolynôme) ``` La syntaxe est la suivante: ``` p = polyfit(xData,yData,deg = 1) # on précise le degré du polynôme. ``` Le résultat ``p`` est une liste Python de $n$ valeurs qui représente les coefficients d'un polynôme de degré $n-1$ écrit sous la forme suivante: $$P(X) = p[0]X^{n-1}+p[1]X^{n-2} + p[2] X^{n-3}+\ldots + p[n-2]X+p[n-1]$$ Exemple ``p = [3,2,1]`` représente le polynôme de degré deux suiant: $$P(X) = 3X^2 + 2X +1$$ Pour une régression affine, le degré du polynôme est ``deg = 1``. Les coefficients de la droite de régression $Y=AX+B$ sont donc donnés par : - ``A = p[0]``, est la pente de la droite, c'est le coefficient du terme de degré 1, - ``B = p[1]``, est le terme constant (l'ordonnée à l'origine), c'est le coefficient du terme de degré zéro. ## N2 n°3 Exercice-type pour la régression linéaire : exemple de situation expérimentale (corrigé) - On mesure l'absorbance de cinq solutions de complexe $\mathrm{[Fe(SCN)]^{2+}}$ de concentrations connues. L'absorbance de chacune des solutions est mesurée à 580 nm. - On dispose d'une solution (s) de la même espèce chimique dont on souhaite connaître la concentration $C_s$ munie de son incertitude-type. **Données du problème** - Les résultats sont consignés dans le tableau ci-dessous ($\lambda = 580 \ \mathrm{nm})$ $$ \begin{array}{|c|c|c|c|c|c|} \hline \mathrm{C \ / \ mol \ L^{-1}} & 2.5 \cdot 10^{-4} & 5.0 \cdot 10^{-4} & 1.0 \cdot 10^{-3} & 1.5 \cdot 10^{-3} & 2.0 \cdot 10^{-3}\\ \hline \mathrm{A} & 0.143 & 0.264 & 0.489 & 0.741 & 0.998 \\ \hline \end{array} $$ - L'absorbance de la solution (S) lue est $A_s = 0.571$ - Dans la notice du spectrophotomètre, le constructeur indique que la précision sur la mesure de A est $\pm 2\%$. 
  This is interpreted as a random variable with a uniform distribution over an interval of half-range $\Delta A = \frac{2}{100} A$;
- for the solutions, the technician provides a "precision" of the concentration $C$ of 2 %. This is interpreted as a random variable with a uniform distribution over an interval of half-range $\Delta C = \frac{2}{100} C$.

**Questions**

a) Determine the equation of the calibration line by an affine regression $C=f(A)$.

b) Plot the regression line together with the set of five data points.

c) Deduce the concentration $C_s$ of the unknown solution.

### a) Determination of the coefficients of the regression line

It is enough to call the polyfit function on the data set:
- $x=C$, concentration on the x-axis,
- $y=A$, absorbance on the y-axis.

```
# Step zero: enter the problem data: C and A are Numpy arrays of the same size
C = np.array([2.5e-4, 5.0e-4, 1.0e-3, 1.5e-3, 2.0e-3])
A = np.array([0.143, 0.264, 0.520, 0.741, 0.998])

# Step 1: visualize the data (to be done systematically, even if not asked for)
plt.plot(C,A,'+k',ms = 15,mew = 3)   # plot the absorbance A as a function of the concentration C
plt.xlabel('Concentration (mol/L)') , plt.ylabel('Absorbance'), plt.grid()
plt.xlim([0,2.2e-3]), plt.ylim([0,1.1])   # adjust the axis limits so that the "zero" is visible
plt.title("Calibration data")
plt.show()

# Determination of the regression coefficients: use of the polyfit function
p = np.polyfit(C, A, deg = 1)   # we perform the regression Y = f(X) = p1.X + p0, i.e. A = p1.C + p0
print(p)

# Display of the result (BEWARE of the units of the coefficients!)
print('The slope of the regression line is p1 = ',format(p[0],"#.4g"), 'L/mol')   # the slope is in L/mol
print("The y-intercept is p0 = ",format(p[1],"#.4g"))
```

Note: the command ``format(x,"#.4g")`` formats the floating-point number, keeping only the significant digits.

### b) Plotting the regression line

To draw a straight line, two points are enough. We therefore build a Numpy array containing the two extreme x values:

```
xi = np.array([np.min(C),np.max(C)])
```

Then we compute the values $y_i$ from the equation of the line

```
yi = p[0]*xi + p[1]
```

```
# Superposition of the data and the regression line

# Display of the data
plt.plot(C,A,'+k',ms = 15,mew = 3)   # plot the absorbance A as a function of the concentration C
plt.xlabel('Concentration (mol/L)') , plt.ylabel('Absorbance'), plt.grid()
plt.xlim([0,2.2e-3]), plt.ylim([0,1.1])   # adjust the axis limits so that the "zero" is visible

# Determination of the minimum and maximum x values
xi = np.array([np.min(C), np.max(C)])   # the two x values used to draw the line
# note: here we want to see the regression line near the origin
xi = np.array([0, np.max(C)])           # the two x values used to draw the line

# Computation of the two y values for the extreme points
yi = p[0]*xi + p[1]

plt.plot(xi,yi,'-r')   # regression line in red
plt.title("Calibration data and regression line")
plt.show()
```

### c) Determination of the unknown concentration

It is enough to use the affine relationship, whose constants $A$ and $B$ are now known:

$$Y = A X+B$$

Since the value of $Y$ is known, the value of $X$ follows.
In our case,

$$A = \textrm{p[0]}\, C + \textrm{p[1]}$$

Hence

$$C=\frac{A-\textrm{p[1]}}{\textrm{p[0]}}$$

```
As = 0.571            # measured value
Cs = (As-p[1])/p[0]
print('The concentration of the unknown solution is estimated at Cs = ',format(Cs,"#.4g"), ' mol/L' )
```

Conclusion: the concentration of the solution is estimated at $C_s \approx 1.128 \times 10 ^{-3} \, \textrm{mol.L}^{-1}$. However, we have no information about the precision of this result. The estimation of uncertainties is part of the MPSI syllabus and will be covered later.

# Practice exercises

## O2 n°4 Simulating the sum of two Gaussian random variables

a) Write the Python code that generates two Gaussian random variables $X_1$ and $X_2$ with respective parameters $\mu_1=5$, $\sigma_1=0.5$ and $\mu_2=10$, $\sigma_2=1$. Use the ``normal()`` function of the numpy.random module and draw $N=10^6$ samples of each of these variables.

b) Plot the histograms of the samples $X_1$, $X_2$ and $X_s=X_1+X_2$ on the same graph.

c) Write the Python code that estimates the mean and the standard deviation of the variable $X_s$.

d) Compare the estimate of the standard deviation $\sigma_s$ with the theoretical value given by the following expression:

$$\sigma_s=\sqrt{\sigma_1^2+\sigma_2^2}$$

Use the proposed template (to be completed) to solve the exercise.

```
## a) generation of the two random variables

# Step 1: import the numpy module and the random module of numpy
# to be completed

# Step 2: use of the function normal(mean, standardDeviation, nbSamples)
N = 10**6                 # number of samples
X1 = # to be completed    # generates 10^6 random numbers from the normal distribution with mean 5 and standard deviation 0.5
X2 = # to be completed    # generates 10^6 random numbers from the normal distribution with mean 10 and standard deviation 1

## b) Plot of the histograms of the samples X1 and X2

# Step 1: import the pyplot module
# to be completed

# Step 2: construction of the values of the sum variable Xs
Xs = # to be completed

# Step 2: plot of the histograms with the hist function
# to be completed    # histogram of X1
# to be completed    # histogram of X2
# to be completed    # histogram of Xs

# Step 3: add the axis titles and the figure title
plt.xlabel('Drawn values')
plt.ylabel('Frequencies')
plt.title('histograms of the variables X1, X2 and X1+X2')
plt.legend()
plt.show()

# c) Estimation of the mean and the standard deviation of the variable Xs
moyS =   # to be completed   # computes the mean
sigmaS = # to be completed   # computes the standard deviation
print("Estimators: mean = ",moyS," standard deviation = ",sigmaS)

# d) Comparison with the theoretical value
s1, s2 = 0.5, 1.
# auxiliary variables
sigmaStheo = # to be completed
print("theoretical value of the standard deviation ",sigmaStheo)
```

**Solution of exercise O2 n°4**

```
# a) generation of the two random variables

# Step 1: import the numpy module and the random module of numpy
import numpy as np
import numpy.random as rd

# Step 2: use of the function normal(mean, standardDeviation, nbSamples)
N = 10**6                    # number of samples
X1 = rd.normal(5., .5 , N)   # generates 10^6 random numbers from the normal distribution with mean 5 and standard deviation 0.5
X2 = rd.normal(10., 1., N)   # generates 10^6 random numbers from the normal distribution with mean 10 and standard deviation 1

# b) Plot of the histograms of the samples X1 and X2

# Step 1: import the pyplot module
import matplotlib.pyplot as plt

# Step 2: construction of the values of the sum variable Xs
Xs = X1+X2

# Step 2: plot of the histograms with the hist function
plt.hist(X1, bins = 'rice',label='X1')      # histogram of X1
plt.hist(X2, bins = 'rice',label='X2')      # histogram of X2
plt.hist(Xs, bins = 'rice',label='X1+X2')   # histogram of Xs

# Step 3: add the axis titles and the figure title
plt.xlabel('Drawn values')
plt.ylabel('Frequencies')
plt.title('histograms of the variables X1, X2 and X1+X2')
plt.legend()
plt.show()

# c) Estimation of the mean and the standard deviation of the variable Xs
moyS = np.mean(Xs)             # the mean() function computes the mean
sigmaS = np.std(Xs,ddof = 1)   # the std() function computes the standard deviation
print("Estimators: mean = ",moyS," standard deviation = ",sigmaS)

# d) Comparison with the theoretical value
s1, s2 = 0.5, 1.
sigmaStheo = (s1**2+s2**2)**(0.5)   # square root of the sum of the squares (a "Pythagoras-like" expression)
print("theoretical value of the standard deviation ",sigmaStheo)
```

Conclusion: the estimated value is "close" to the theoretical value.

*Note:* the notion of "close" could be made more precise using the "Z-score" (cf. the chapter on uncertainties), but this is not the purpose of this exercise.

## O2 n°5 Simulating the difference of two uniform random variables

We consider two random variables $X_1$ and $X_2$ with uniform distributions such that:
- $X_1$ is uniformly distributed on the interval of values $I_1=[x_1-\Delta_1;x_1+\Delta_1]$;
- $X_2$ is uniformly distributed on the interval of values $I_2=[x_2-\Delta_2;x_2+\Delta_2]$.

In all that follows, we take: $x_1=5$, $\Delta_1=0.5$, $x_2=20$ and $\Delta_2=1$.

a) Write the Python code that generates $N=10^6$ realizations of these random variables. The ``uniform(a,b,nbSamples)`` function of the numpy.random module can be used.

b) Plot the histograms of the samples $X_1$, $X_2$ and $X_d=X_2-X_1$ on the same graph.

c) Write the Python code that estimates the mean and the standard deviation $\sigma_d$ of the variable $X_d$.

d) Compare the estimate of the standard deviation $\sigma_d$ with the value given by the following expression:

$$\sigma_d=\sqrt{\sigma_1^2+\sigma_2^2}$$

in which the standard deviations $\sigma_1$ and $\sigma_2$ are given by the **half-width of the interval of values divided by the square root of three**:

$$\sigma_i=\frac{\Delta_i}{\sqrt{3}}$$

Below is a template to complete.

```
# Numerical data
N = 10**5
x1, delta1, x2, delta2 = 5, 0.5, 20, 1.
# Generation of the variables X1, X2 and Xd
X1 = # to be completed
X2 = # to be completed
Xd = # to be completed

# Plot of the histograms
plt.hist(X1,bins='rice',color = 'b', label='X1')
plt.hist(X2,bins='rice',color = 'g', label='X2')
plt.hist(Xd,bins='rice',color = 'r', label='Xd')
plt.show()

# Estimation of the mean and the standard deviation of Xd
moyd =   # to be completed
sigmad = # to be completed
print("Estimators: mean = ",moyd," standard deviation = ",sigmad)

# Comparison with the theory
sigma1, sigma2 = delta1/3**0.5, delta2/3**0.5
sigmad_bis = # to be completed
print("Expression of the standard deviation = ",sigmad_bis)
```

**Solution of O2 n°5**

```
# Numerical data
N = 10**5
x1, delta1, x2, delta2 = 5, 0.5, 20, 1.

# Generation of the variables X1, X2 and Xd
X1 = rd.uniform(x1-delta1, x1+delta1,N)
X2 = rd.uniform(x2-delta2, x2+delta2,N)
Xd = X2-X1

# Plot of the histograms
plt.hist(X1,bins='rice',color = 'b', label='X1')
plt.hist(X2,bins='rice',color = 'g', label='X2')
plt.hist(Xd,bins='rice',color = 'r', label='Xd')
plt.legend()   # display the labels of the three histograms
plt.show()

# Estimation of the mean and the standard deviation of Xd
moyd = np.mean(Xd)
sigmad = np.std(Xd,ddof = 1)
print("Estimators: mean = ",moyd," standard deviation = ",sigmad)

# Comparison with the theory
sigma1, sigma2 = delta1/3**0.5, delta2/3**0.5
sigmad_bis = np.sqrt(sigma1**2 + sigma2**2)
print("Expression of the standard deviation = ",sigmad_bis)
```

Conclusion: the simulation gives a value in agreement with the expression $\sigma_d=\sqrt{\sigma_1^2+\sigma_2^2}$.

## O2 n°6 Simulating the sum of two uniform random variables

Redo exercise O2 n°5, but this time compute the sum $X_1+X_2$. Do the same observations apply?

**Solution**

It is enough to copy the same instructions, replacing the "minus" sign by a plus sign. We observe that the relationship giving the standard deviation remains valid.

```
# Numerical data
N = 10**5
x1, delta1, x2, delta2 = 5, 0.5, 20, 1.

# Generation of the variables X1, X2 and Xd
X1 = rd.uniform(x1-delta1, x1+delta1,N)
X2 = rd.uniform(x2-delta2, x2+delta2,N)
Xd = X2+X1   # Xd is THIS time a SUM: we keep the name Xd to avoid retyping everything!

# Plot of the histograms
plt.hist(X1,bins='rice',color = 'b', label='X1')
plt.hist(X2,bins='rice',color = 'g', label='X2')
plt.hist(Xd,bins='rice',color = 'r', label='Xd')
plt.legend()   # display the labels of the three histograms
plt.show()

# Estimation of the mean and the standard deviation of Xd
moyd = np.mean(Xd)
sigmad = np.std(Xd,ddof = 1)
print("Estimators: mean = ",moyd," standard deviation = ",sigmad)

# Comparison with the theory
sigma1, sigma2 = delta1/3**0.5, delta2/3**0.5
sigmad_bis = np.sqrt(sigma1**2 + sigma2**2)
print("Expression of the standard deviation = ",sigmad_bis)
```

## O2 n°7 Simulating the quotient of two Gaussian random variables (to work out on your own)

We consider two random variables $U$ and $I$ with normal distributions such that:
- $U$ has a mean $\overline{U}=1.2457 \textrm{ V}$ and a standard deviation $\sigma_U=1.2 \,\textrm{mV}$;
- $I$ has a mean $\overline{I}=12.383 \textrm{ mA}$ and a standard deviation $\sigma_I=5.2 \, \mu \textrm{A}$.

a) Simulate, using the Numpy module, a large number of realizations of these random variables. We consider the quantity

$$R=U/I$$

b) Plot the histogram of the quantity $R$ and determine the statistical parameters of this distribution (mean $\overline{R}$ and standard deviation $\sigma_R$).
c) Compare the result obtained with the theoretical expressions:

$$\overline{R}=\frac{\overline{U}}{\overline{I}}$$

and

$$\frac{\sigma_R}{\overline{R}} = \sqrt{\left(\frac{\sigma_U}{\overline{U}}\right)^2 + \left(\frac{\sigma_I}{\overline{I}} \right) ^2} $$

**Solution of O2 n°7**

```
# import of the modules
import numpy as np
import matplotlib.pyplot as plt

# random draws
N = 10**6                                    # a "large number"
U = np.random.normal(1.2457, 1.2e-3, N)      # beware of the conversion from mV to V
I = np.random.normal(12.383e-3, 5.2e-6, N)   # the electric current is converted into amperes (A)

# Quantity R
R = U/I

# Histogram
plt.hist(R,bins='rice')
plt.show()

# Statistical parameters
moyR, sigmaR = np.mean(R), np.std(R, ddof=1)   # mean() and std() (standard deviation) functions
print("Estimators: \t mean = ",moyR, " standard deviation = ",sigmaR)

# Theoretical values
# For the numerical application we use auxiliary variables to "lighten" the notation
u, su, i, si = 1.2457, 1.2e-3, 12.383e-3, 5.2e-6
moyTheo = u/i
sigmaTheo = (u/i)*np.sqrt( (su/u)**2+(si/i)**2)
print("Theoretical values:\t mean = ", moyTheo, " standard deviation = ", sigmaTheo)
```

Conclusion: there is good agreement between the simulated data and the theoretical values.

## O2 n°8 Linear regression (1): period of a pendulum as a function of the wire length

A student wishes to model the relationship between the length of a pendulum and the period of its oscillations (cf. figure below). For several values of the length $\ell$ of the pendulum, he measures the period $T$ of the oscillations.

![penduleLongueur](img/penduleLongueur.png)

$$
\begin{array}{|c|c|c|c|c|c|c|}
\hline
\textrm{length } \ell \textrm{ (cm) } & 10.0& 15.0 & 31.25 & 47.5 & 63.8 & 80.0 \\
\hline
\textrm{oscillation period }T \textrm{ (s) }& 0.632 & 0.782 & 1.119 & 1.377 & 1.603&1.791 \\
\hline
\end{array}
$$

Throughout the exercise, we take $g=9.81 \textrm{ m.s} ^{-2}$ for the acceleration of gravity.

**Questions**

a) Plot the evolution of the period $T$ as a function of the length $\ell$ of the pendulum. Do the experimental points appear to be aligned?

The theoretical relationship between the period and the length follows from the relations:

$$\omega=\sqrt{\frac{g}{\ell}}$$

and

$$T = \frac{2\pi}{\omega}$$

b) Justify that it is relevant to propose the affine regression model $Y=aX+b$ with $Y=T^\alpha$ and $X=\ell$, in the form

$$ T^\alpha = a \ell +b$$

where the exponent $\alpha$ is a numerical coefficient to be determined.

c) Determine, by affine regression, the numerical values of the parameters $a$ and $b$ of this model (estimating the uncertainties or validating the model is not required).

d) Plot the regression line on top of the data set.

e) Compare the value obtained for the slope $a$ of the regression with the theoretical value $a_\textrm{théo}$, to be specified.

**Another method to estimate the slope of the regression**

An alternative way to estimate the slope of the regression is to propose a model of the form $Y=aX$, i.e. a purely "linear" model (without an affine component):
- for each pair of values $(x_i,y_i)$, determine the value $a_i$ of the slope by $a_i=y_i/x_i$,
- take the mean of the set of slopes $\{a_i\}$ obtained.

f) Implement this method and compare the estimate of $a$ obtained in this case with the one obtained with the affine regression.
**Solution of O2 n°8**

```
## a) Plot of the evolution of the period as a function of the length

# import of the modules
import numpy as np
import matplotlib.pyplot as plt

# Experimental data
l = np.array([0.10, 0.15, 0.3125,0.475, 0.638, 0.80])    # lengths in metres
T = np.array([0.632,0.782, 1.119, 1.377, 1.603, 1.791])  # periods in seconds

# Evolution of the period as a function of the pendulum length
plt.plot(l,T,'+k',ms = 15,mew = 2)
plt.xlim([0,1.0]),plt.ylim([0,2.])   # show the zero
plt.xlabel('length (m)'), plt.ylabel('period (s)')
plt.grid()
```

b) From the previous relations:

$$T^2=\frac{4\pi^2}{g}\ell$$

which is indeed of the form $Y=T^2 = aX$ with $X=\ell$, and the theoretical slope is

$$a_\textrm{théo}=\frac{4\pi^2}{g}$$

The affine model is therefore compatible with the theoretical relationship provided that $\alpha=2$. The result of the affine regression should give us $b\approx0$.

```
## c) Determination of the values of a and b by affine regression
X = l
Y = T**2
a, b = np.polyfit(X,Y,deg = 1)   # we regress Y = T^2 on the variable X = length
print('Affine regression: slope a = ',a,' s²/m ',' b = ',b,' s²')

## d) Plot of the regression line on the data set
plt.plot(l,T**2.,'+k',ms = 15,mew = 2)
plt.xlim([0,1.0])   # show the zero
plt.xlabel('length (m)'), plt.ylabel('period squared (s²)')

# Beware of the units of the parameters a and b
xi = np.array([0,1.])
plt.plot(xi,a*xi+b,'-r')   # plot of the regression line
plt.grid()

## e) Comparison with the theoretical value
g = 9.81   # m/s²
atheo = 4*np.pi**2/g
print('theoretical value of the slope atheo = ', atheo)

## f) Another estimate of the slope
ai = T**2/l
print("estimate of the slope a = ",np.mean(ai))   # mean of the estimated slopes
```

## O2 n°9 Linear regression (2): stretching of a rubber band

A student wishes to **characterize the tensile behaviour** of a rubber band. To do so, he sets up the following experiment (cf. figure below):
- the rubber band to be tested is attached to a stand,
- a mass $m$ of variable value is attached to its lower end,
- for several values of the mass $m$, he measures the length $\ell$ of the rubber band with a tape measure.

**Illustration of the experimental setup**

![elastiquePotence](img/elastiquePotence4.png)

**Measuring instruments used:**
- Kitchen scale (to the gram);
- Tape measure (graduated in millimetres)

**Table of measurements**

$$
\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
\textrm{mass } m \textrm{ (g) } & 0 & 200 & 300 & 400 & 500 & 600 & 700\\
\hline
\textrm{length }\ell \textrm{ (cm) }& 12.4 &12.9& 13.1&13.3& 13.6& 14.2&14.6 \\
\hline
\end{array}
$$

**Questions**

We denote by $\Delta \ell$ the **elongation** of the rubber band, defined by

$$\Delta \ell = \ell - \ell_0$$

where $\ell_0$ is the length of the unloaded rubber band, i.e. when it is subjected to zero force.

a) Representation of the measurements: plot the evolution of the elongation $\Delta \ell$ (in metres) as a function of the mass (in kilograms).

b) Establish the equation of the regression line $\Delta \ell = f(m)$ and plot the regression line on the data set.

We assume that Hooke's law applies, i.e. that the *force vs elongation* relationship is linear:

$$F = k \Delta \ell$$

c) Taking $g=9.81 \textrm{ m.s}^{-2}$, determine the value of the stiffness $k$ corresponding to Hooke's law.
**Remarks:**
- the study of uncertainties is not required in this exercise,
- nothing allows us to assert that Hooke's law is valid for this rubber band (and indeed it is not! we will show that the results of these measurements make it possible to **invalidate the proposed linear model**).

Hints: to answer the questions, use the pre-filled template below.

```
## Import of the modules
# import numpy (to be completed)
# import pyplot (to be completed)

## Entry of the experimental data
m = np.array([0, 200, 300, 400, 500, 600, 700])   # applied masses (in g)
u_m = 0.5/np.sqrt(3)                              # standard uncertainty on the values of m (in g)
l0 = 12.4                                         # length of the unloaded rubber band (in cm)
l = np.array([12.4,12.9, 13.1, 13.3, 13.6, 14.2, 14.6])   # length of the loaded rubber band (in cm)
u_l = 0.05/np.sqrt(3)

## a) Plot of the curves
dl =   # elongations in metres (to be completed)
mkg =  # masses in kg (to be completed)
plt.plot(...)   # to be completed
plt.xlabel('applied mass (kg)')
plt.ylabel('elongation (m)')
plt.grid()

## b) Linear regression
a,b =   # extraction of the parameters a and b of the linear regression (to be completed)
print('the slope is a = ',a, ' ')        # display the slope with the correct unit (to be completed)
print("the y-intercept is b = ",b,' ')   # display the y-intercept (complete the unit)

# Add the regression line in red on the data set (to be completed)
plt.show()
```

c) Determination of the stiffness $k$ of Hooke's law $F=k\Delta \ell$ (with the correct unit)

**Solution of O2 n°9**

```
## Import of the modules
import numpy as np              # import numpy
import matplotlib.pyplot as plt # import pyplot

## Entry of the experimental data
m = np.array([0, 200, 300, 400, 500, 600, 700])   # applied masses (in g)
u_m = 0.5/np.sqrt(3)                              # standard uncertainty on the values of m (in g)
l0 = 12.4                                         # length of the unloaded rubber band (in cm)
l = np.array([12.4,12.9, 13.1, 13.3, 13.6, 14.2, 14.6])   # length of the loaded rubber band (in cm)
u_l = 0.05/np.sqrt(3)

## a) Plot of the curves
dl = (l-l0)*1e-2   # elongations in metres
mkg = m*1e-3       # masses in kg
plt.plot(mkg,dl,'+k',ms = 15)
plt.xlabel('applied mass (kg)')
plt.ylabel('elongation (m)')
plt.grid()

## b) Linear regression
a,b = np.polyfit(mkg, dl, deg = 1)   # extraction of the parameters a and b of the linear regression
print('the slope is a = ',a, ' m/kg')      # display the slope with the correct unit
print("the y-intercept is b = ",b,' m')    # display the y-intercept with the correct unit

# Add the regression line in red on the data set
plt.plot(mkg, a*mkg + b, '-r')   # we use the mass values X = mkg as x-coordinates to plot Y = aX+b

# another way to plot the regression line
xi = np.array([0,0.7])     # extreme values
plt.plot(xi,a*xi+b,'-r')   # two points are enough to draw the regression line
plt.show()
```

c) Determination of the stiffness $k$ (in newtons per metre: $\textrm{N.m}^{-1}$) of Hooke's law

$$F=k\Delta \ell$$

The force $F$ is related to the mass by (weight of a body of mass $m$ in the gravitational field $g$):

$$F=mg$$

Now, using the value of the slope $a$ obtained by the affine regression, and provided the y-intercept $b$ is neglected, the elongation and the mass are proportional quantities:

$$Y=a X \quad \textrm{ i.e. } \quad \Delta \ell = a \times m$$

We therefore have $m = \frac{1}{a} \Delta \ell$; substituting into the expression of the force $F=mg$, we get:
$$F=\frac{g}{a} \Delta \ell$$

Hence, identifying with Hooke's law:

$$k = \frac{g}{a}$$

**Numerical application**

```
g = 9.81   # m/s2
k = g/a
print("The stiffness k of the rubber band is estimated at k = ",k, ' N/m.')
```

Note: in view of the experimental data the affine model may seem questionable, but without **rigorously taking the measurement uncertainties into account** no conclusion can be drawn.

## O2 n°10 Linear regression (3): oscillations of a mass-spring system (to work out on your own)

A student studies the vertical oscillations of a mass-spring system (cf. figure).

![masseRessort](img/masseRessort3.png)

To do so, he carries out two series of measurements. In the first series, he varies the mass $m$ attached to the spring while measuring the elongation $\Delta \ell$ of the spring at equilibrium.

$$
\begin{array}{|c|c|c|c|c|}
\hline
\textrm{mass } m \textrm{ (g) } & 50 & 100 & 150 & 250 \\
\hline
\textrm{elongation at equilibrium } \Delta \ell \textrm{ (cm) }& 16 & 31 & 49 & 82\\
\hline
\end{array}
$$

In the second series of measurements, he determines as precisely as possible the period $T$ of the oscillations for several values of the mass $m$.

$$
\begin{array}{|c|c|c|c|c|}
\hline
\textrm{mass } m \textrm{ (g) } & 50 & 100 & 150 & 250 \\
\hline
\textrm{oscillation period }T \textrm{ (s) }& 0.812 & 1.15 & 1.41 & 1.82\\
\hline
\end{array}
$$

**Questions**

a) Using the first series of measurements, establish the value of the stiffness $k$ of the spring, assumed to obey Hooke's law

$$ F=k\Delta \ell$$

the force applied to the spring being given by the weight of the mass $m$ (take $g=9.81\textrm{ m.s}^{-2}$).

b) From the second series of measurements, propose an affine model compatible with the theoretical relationship between the mass $m$ and the frequency $f$ of the oscillations

$$f=\frac{1}{T}$$

with

$$\omega = 2\pi f= \sqrt{\frac{k}{m}}$$

Determine the parameters $(a,b)$ of the affine regression and compare these values with the theoretical values. No uncertainty study or model validation is required in this question.
```
## Measured data
m = np.array([50,100,150,250])*1e-3      # masses in kg
deltaL = np.array([16,31,49,82])*1e-2    # elongations in m
T = np.array([0.812, 1.15, 1.41, 1.82])  # periods in s
```

**Solution of O2 n°10**

```
## Data
m = np.array([50,100,150,250])*1e-3      # masses in kg
deltaL = np.array([16,31,49,82])*1e-2    # elongations in m
T = np.array([0.812, 1.15, 1.41, 1.82])  # periods in s

## a) determination of the stiffness of the spring, F = k DeltaL
g = 9.81
F = m*g   # force (in N)
plt.figure(1)
plt.plot(deltaL,F,'+k',ms = 15)   # force as a function of the elongation
p = np.polyfit(deltaL,F,deg=1)    # linear regression
print("Regression: slope = ", p[0]," N/m", " b = ",p[1]," N")   # regression parameters with units
plt.xlabel('elongation (m)')
plt.ylabel('Force (N)')
xi = np.array([0.,1.])            # extreme values
plt.plot(xi,np.polyval(p,xi))     # polyval: evaluates the polynomial p at the given x values
```

From the problem data, the expression of the period $T$ as a function of the mass $m$ is:

$$T=\frac{2\pi}{\omega}=2\pi\sqrt{\frac{m}{k}}=\left(\frac{2\pi}{\sqrt{k}} \right) \times m^{1/2}$$

The linear regression is therefore carried out with $Y = T$ and $X=m^{1/2}$, and the theoretical slope is

$$a_\textrm{théo}=\frac{2\pi}{\sqrt{k}}$$

```
# b) Determination of the parameters a,b of the affine regression

## Representation of the data: Y = period, X = square root of the mass
X = m**(1/2)                           # x-variable of the regression
plt.plot(X,T,'+k',ms = 15)             # experimental data
plt.xlabel('X = m^(1/2) (kg^(1/2))')   # variable X
plt.ylabel('period T (s)')             # variable Y
a, b = np.polyfit(X,T,deg = 1)         # linear regression
xi = np.array([0,0.6])                 # extreme values
plt.plot(xi,a*xi+b,'-r')               # regression line
print('Affine regression: slope a = ',a," s.kg^(-1/2) ; b = ",b," s")   # beware of the units

# Comparison with the theoretical value
atheo = 2*np.pi/p[0]**(0.5)   # theoretical value, the stiffness being given by the coefficient p[0]
print("theoretical value of the slope = ",atheo, " s.kg^(-1/2)")
```

Another possibility is to plot the evolution of $T^2$ as a function of the mass:

$$T^2 =\frac{(2\pi)^2}{k} \times m$$

In this case the theoretical slope is

$$a'_\textrm{théo}=\frac{(2\pi) ^2}{k}$$

```
# b2) Determination of the parameters a,b of the affine regression [ALTERNATIVE METHOD]

## Representation of the data: Y = period squared, X = mass
Y = T**2                              # y-variable of the regression
plt.plot(m,Y,'+k',ms = 15)            # experimental data
plt.xlabel('m (kg)')                  # variable X
plt.ylabel('period T squared (s^2)')  # variable Y
a, b = np.polyfit(m,Y,deg = 1)        # linear regression
xi = np.array([0,0.3])                # extreme values
plt.plot(xi,a*xi+b,'-r')              # regression line
print('Affine regression: slope a = ',a," s^2.kg^(-1) ; b = ",b," s^2")   # beware of the units

# Comparison with the theoretical value
atheo = (2*np.pi)**2/p[0]   # theoretical value, the stiffness being given by the coefficient p[0]
print("theoretical value of the slope = ",atheo, " s^2.kg^(-1)")
```

**Yet another alternative method**: instead of an affine regression, one can take the mean of the slopes obtained for each of the pairs $(y_i,x_i)$.

```
ai = T**2/m   # list of the ratios Y_i/X_i
print("estimate of the slope a = ", np.mean(ai))   # we take the mean of the slopes
```

# Supplement: mathematical description of ordinary linear regression

Performing a linear regression amounts to finding the minimum of a function.
Given a series of $N$ pairs $(x_i,y_i)$ of numerical values, ordinary linear regression consists in finding the two values of the parameters $(a,b)$ of the line of equation $y=ax+b$ that **minimize the vertical deviations** between the model line and the data points (cf. figure).

![regressionPrincipe](img/regressionPrincipe.png)

The **vertical deviation** $\varepsilon_i$ between point number $i$ and the line is the quantity (shown as green dashes in the figure) defined by:

$$\varepsilon_i =y_i- (a x_i+b)$$

We then decide to **minimize the sum of the squares of these deviations**, i.e. the quantity denoted $\varepsilon_N(a,b)$, which depends on the $N$ measurement points and on the two variables $a$ and $b$:

$$\varepsilon_N(a,b)=\sum_{i=1}^{N} \varepsilon_i^2 = \sum_{i=1}^{N} \left(y_i- (a x_i+b)\right)^2 $$

Since the function $(a,b) \mapsto \varepsilon_N(a,b)$ is a quadratic function of two variables, its minimization is straightforward and reduces to the following linear system of two equations:

$$\frac{\partial \varepsilon_N}{ \partial a} = 0 \quad \Leftrightarrow \quad a \sum_i x_i^2 + b \sum_i x_i = \sum_i x_i y_i$$

$$\frac{\partial \varepsilon_N}{ \partial b} = 0 \quad \Leftrightarrow \quad a \sum_i x_i + b \sum_i 1 = \sum_i y_i$$

which is solved as

$$a=\frac{\sum_i (x_i-\overline{x})(y_i-\overline{y})}{\sum_i(x_i-\overline{x})^2}$$

and

$$b=\overline{y}-a\overline{x}$$
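As a quick sanity check of these closed-form expressions, here is a minimal sketch that computes $a$ and $b$ directly from the formulas above, on an arbitrary synthetic data set (a noisy straight line, used only for illustration), and compares them with the coefficients returned by ``np.polyfit``:

```
# Minimal check: the closed-form least-squares formulas above should
# reproduce the coefficients returned by np.polyfit with deg = 1.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0., 10., 50)                    # arbitrary x values (illustrative data)
y = 2.5*x + 1.0 + rng.normal(0., 0.5, x.size)   # noisy straight line y = 2.5 x + 1

xbar, ybar = np.mean(x), np.mean(y)
a_formula = np.sum((x - xbar)*(y - ybar)) / np.sum((x - xbar)**2)   # slope from the formula
b_formula = ybar - a_formula*xbar                                   # intercept from the formula

a_polyfit, b_polyfit = np.polyfit(x, y, deg=1)                      # same coefficients via polyfit

print("formulas: a = ", a_formula, " b = ", b_formula)
print("polyfit : a = ", a_polyfit, " b = ", b_polyfit)
```

Both approaches should agree to within numerical round-off, since ``polyfit`` with ``deg = 1`` solves exactly this least-squares problem.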
<a href="https://colab.research.google.com/github/crux82/mt-ganbert/blob/main/1_MT_DNN_model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # MT-DNN model in Pytorch This notebook shows how to train a model based only on the paradigm of the Multi-task training. The model is based on the transformer, Italian Bert-base model, UmBERTo (https://github.com/musixmatchresearch/umberto), and the MTDNN model is trained at the same time on the six tasks considered in our work,used for the recognition of abusive linguistic behaviors. The task are: 1. HaSpeeDe: Hate Spech Recognition 2. AMI A: Automatic Misogyny Identification (misogyny, not mysogyny) 3. AMI B: Automatic Misogyny Identification (misogyny_category: stereotype, sexual_harassment, discredit) 4. DANKMEMEs: Hate Spech Recognition in MEMEs sentences 5. SENTIPOLC 1: Sentiment Polarity Classification (objective, subjective) 6. SENTIPOLC 2: Sentiment Polarity Classification (polarity: positive, negative, neutral) ## Setup environment ``` #-------------------------------- # Retrieve the github directory #-------------------------------- !git clone https://github.com/crux82/mt-ganbert %cd mt-ganbert/mttransformer/ #installation of necessary packages !pip install -r requirements.txt !pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 -f https://download.pytorch.org/whl/torch_stable.html !pip install ekphrasis ``` ## Import ``` from google.colab import drive import pandas as pd import csv from sklearn.model_selection import train_test_split import numpy as np import random import tensorflow as tf import torch # Get the GPU device name. device_name = tf.test.gpu_device_name() # The device name should look like the following: if device_name == '/device:GPU:0': print('Found GPU at: {}'.format(device_name)) else: raise SystemError('GPU device not found') # If there's a GPU available... if torch.cuda.is_available(): # Tell PyTorch to use the GPU. device = torch.device("cuda") print('There are %d GPU(s) available.' % torch.cuda.device_count()) print('We will use the GPU:', torch.cuda.get_device_name(0)) # If not... else: print('No GPU available, using the CPU instead.') device = torch.device("cpu") ``` ## Run training For each dataset, with a dedicated script ("script_tsv.py"), are created 4 files: 1. taskName_task_def.yml, a config file about the task 2. taskName_train.tsv, file tsv of task train set 3. taskName_test.tsv, file tsv of task test set 4. taskName_dev.tsv, file tsv of task dev set The number of examples of train can consist of: * All train dataset * 100 examples of oringinal train dataset * 200 examples of oringinal train dataset * 500 examples of oringinal train dataset To access to the .tsv files and config file of each task, based on the cutting of examples of the train set you want to use, these can be the paths: * data/0/taskName_file * data/100/no_gan/taskName_file * data/200/no_gan/taskName_file * data/500/no_gan/taskName_file "no_gan" means that you want to use the model based only on BERT_based model ###**Tokenization and Convert to Json** The training code reads tokenized data in json format, so "prepro_std.py" (modified script of work https://github.com/namisan/mt-dnn) is used to do tokenization and convert data of .tsv files into json format. 
### **Tokenization and Conversion to JSON**

The training code reads tokenized data in JSON format, so "prepro_std.py" (a modified script from https://github.com/namisan/mt-dnn) is used to tokenize the data and convert the .tsv files into JSON format.

The args used in the script invocation are:

* --model: the model used to tokenize the input sentences
* --root_dir: the folder from which to read the .tsv files
* --task_def: the task_def file of the task, which contains the information needed to build the .json files. In this case the task_def file contains the information about all tasks

The script is run for all tasks simultaneously.

```
#edit --root_dir and --task_def depending on the task and train set
!python prepro_std.py --model Musixmatch/umberto-commoncrawl-cased-v1 --root_dir data/"0"/ --task_def data/0/haspeede-TW_AMI2018A_AMI2018B_DANKMEMES2020_SENTIPOLC20161_SENTIPOLC20162_task_def.yml
```

### **Onboard your task into training!**

The training is run with the script "train.py" (a modified script from https://github.com/namisan/mt-dnn).

The args used in the script invocation are:

* --encoder_type: which transformer is used to encode the sentences. In this case it is set to "9", which corresponds to UmBERTo
* --epochs: the number of epochs you want to train for
* --task_def: the task_def file of the tasks
* --data_dir: the folder from which to read the .json files
* --init_checkpoint: the name of the transformer to be loaded, in this case "Musixmatch/umberto-commoncrawl-cased-v1"
* --max_seq_len: the maximum sequence length the BERT model can handle
* --batch_size: the number of training examples in one forward/backward pass
* --batch_size_eval: the batch size used for validation and test
* --optimizer: the name of the optimizer you want to use
* --train_datasets: the names of the tasks, without the train-file extension, separated by ","
* --test_datasets: the names of the tasks, without the test-file extension, separated by ","
* --learning_rate: the learning rate you want to use
* --multi_gpu_on: since the model is trained on multiple tasks at the same time, it is possible to train on multiple GPUs to speed training up
* --grad_accumulation_step: you may need gradient accumulation to keep training stable when you only have small GPUs. For example, with "--grad_accumulation_step 4" the effective batch size becomes batch_size * 4 (see the sketch after the command below)

The script is run for all tasks simultaneously.

```
#edit --task_def, --data_dir, --train_datasets and test_datasets depending on the task and train set
!python train.py --encoder_type 9 --epochs 10 --task_def data/0/haspeede-TW_AMI2018A_AMI2018B_DANKMEMES2020_SENTIPOLC20161_SENTIPOLC20162_task_def.yml --data_dir data/0/musixmatch_cased/ --init_checkpoint Musixmatch/umberto-commoncrawl-cased-v1 --max_seq_len 128 --batch_size 16 --batch_size_eval 16 --optimizer "adamW" --train_datasets haspeede-TW,AMI2018A,AMI2018B,DANKMEMES2020,SENTIPOLC20161,SENTIPOLC20162 --test_datasets haspeede-TW,AMI2018A,AMI2018B,DANKMEMES2020,SENTIPOLC20161,SENTIPOLC20162 --learning_rate "5e-5" --multi_gpu_on --grad_accumulation_step 4
```
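For readers unfamiliar with gradient accumulation, here is a minimal, generic PyTorch sketch of the idea. It is not the mt-dnn training loop itself; `model`, `loader` and `optimizer` are placeholders, and the loss function is only an example.

```
import torch
import torch.nn.functional as F

def train_with_accumulation(model, loader, optimizer, accumulation_steps=4):
    """Sum gradients over several small batches before a single optimizer
    step, so the effective batch size is batch_size * accumulation_steps."""
    model.train()
    optimizer.zero_grad()
    for step, (inputs, labels) in enumerate(loader):
        loss = F.cross_entropy(model(inputs), labels)
        # Scale the loss so the accumulated gradient matches one large batch
        (loss / accumulation_steps).backward()
        if (step + 1) % accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```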
### Finetuning

**Finetune MT-DNN on each of the tasks to get task-specific models!**

Use the model resulting from the previous training. In this case the script "finetuning.py" is used, and the args that differ from the "train.py" script are:

* --init_checkpoint: loads the best model from the previous training. The model is located in the checkpoint folder
* --task: specifies which task to perform finetuning for; 0 is the first task
* --string: the string that contains all the task names, separated by "_"

```
#finetuning
!python finetuning.py --finetuning --task 0 --epochs 5 --string haspeede-TW_AMI2018A_AMI2018B_DANKMEMES2020_SENTIPOLC20161_SENTIPOLC20162 --task_def data/0/haspeede-TW_AMI2018A_AMI2018B_DANKMEMES2020_SENTIPOLC20161_SENTIPOLC20162_task_def.yml --data_dir data/0/musixmatch_cased/ --init_checkpoint checkpoint/model_0.pt --max_seq_len 128 --batch_size 16 --batch_size_eval 16 --optimizer "adamW" --train_datasets haspeede-TW --test_datasets haspeede-TW --learning_rate "5e-5" --multi_gpu_on --grad_accumulation_step 4
```
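Before launching the finetuning run, a quick check that the checkpoint exists and can be deserialized can save a failed job. The snippet below only loads the file and prints its top-level structure; the exact keys inside the checkpoint depend on how the training script saved it, so treat them as unknown until you inspect them.

```
import os
import torch

ckpt_path = "checkpoint/model_0.pt"
assert os.path.exists(ckpt_path), f"Checkpoint not found: {ckpt_path}"

# Load on CPU just to inspect the saved object; no GPU is needed for this check.
checkpoint = torch.load(ckpt_path, map_location="cpu")
print(type(checkpoint))
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys())[:10])
```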